In a groundbreaking study, researchers from City St George’s, University of London, and the IT University of Copenhagen have discovered that artificial intelligence (AI) systems can autonomously develop social conventions akin to those of human societies. Published in Science Advances, the study reveals that large language models (LLMs), when interacting in groups without human oversight, can spontaneously form shared linguistic norms and social behaviors.
The researchers employed a “naming game” experiment, in which AI agents were paired at random and asked to select a name from a shared pool, earning a reward when their choices matched and a penalty when they did not. Over time, these agents, despite having only a limited memory of recent interactions and no awareness of the broader group, established consistent naming conventions. This phenomenon mirrors how human languages and social norms evolve through repeated interactions.
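In the study itself, the paired agents are LLMs prompted with their own interaction histories. Purely as an illustration of the underlying dynamics, the sketch below replaces them with simple memory-based agents that pick whichever name has worked most often in their recent memory. The pool size, memory window, and majority-rule heuristic are assumptions made for this sketch, not parameters from the paper.

```python
import random
from collections import Counter, deque

NAME_POOL = list("ABCDEFGHIJ")   # candidate names (assumption: 10 options)
MEMORY_SIZE = 5                  # how many recent observations each agent recalls (assumption)

class Agent:
    def __init__(self):
        self.memory = deque(maxlen=MEMORY_SIZE)  # recent names seen in play

    def pick(self):
        # Choose the name seen most often recently; with an empty
        # memory, guess at random from the pool.
        if not self.memory:
            return random.choice(NAME_POOL)
        return Counter(self.memory).most_common(1)[0][0]

    def observe(self, own_choice, partner_choice):
        # Remember both names from this round -- a crude stand-in for
        # the reward-driven memory update the real LLM agents perform.
        self.memory.append(own_choice)
        self.memory.append(partner_choice)

def simulate(n_agents=24, rounds=3000, seed=0):
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(rounds):
        a, b = random.sample(agents, 2)   # random pairing each round
        ca, cb = a.pick(), b.pick()       # both commit to a name
        a.observe(ca, cb)
        b.observe(cb, ca)
    # Measure consensus: the share of agents whose current pick
    # agrees with the most popular choice.
    picks = Counter(agent.pick() for agent in agents)
    name, count = picks.most_common(1)[0]
    return name, count / n_agents

if __name__ == "__main__":
    name, share = simulate()
    print(f"dominant convention: {name!r}, adopted by {share:.0%} of agents")
```

Running this typically ends with most agents converging on a single name, even though no agent ever sees the whole population or is told what the others prefer.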
Lead author Ariel Flint Ashery noted, “Most research so far has treated LLMs in isolation, but real-world AI systems will increasingly involve many interacting agents. We wanted to know: can these models coordinate their behavior by forming conventions, the building blocks of a society? The answer is yes.”
The study also observed that small, committed groups of AI agents could steer the larger population toward a new convention once they reached a critical mass, demonstrating “tipping point” dynamics similar to those seen in human societies. These findings carry significant implications for AI safety and ethics: they suggest that AI systems can develop biases and behaviors on their own, underscoring the need for careful monitoring and alignment with human values.
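The tipping-point effect can be illustrated in the same toy model by adding a few committed agents that never deviate from a new name. Again, this is a sketch under assumed parameters, not the paper’s protocol: the name “Z”, the group sizes, and the memory settings are all hypothetical.

```python
import random
from collections import Counter, deque

NAME_POOL = list("ABCDEFGHIJ")   # candidate names (assumption)
MEMORY = 5                       # per-agent memory window (assumption)

def pick(mem):
    # Majority vote over recent memory; random guess if memory is empty.
    return Counter(mem).most_common(1)[0][0] if mem else random.choice(NAME_POOL)

def simulate(n=24, committed=4, rounds=4000, seed=1):
    """Return the share of ordinary agents who end up using the minority's name."""
    random.seed(seed)
    memories = [deque(maxlen=MEMORY) for _ in range(n)]
    for mem in memories:                 # seed an incumbent convention: "A"
        mem.extend("A" * MEMORY)
    is_committed = [i < committed for i in range(n)]  # first k agents always push "Z"
    for _ in range(rounds):
        i, j = random.sample(range(n), 2)
        ci = "Z" if is_committed[i] else pick(memories[i])
        cj = "Z" if is_committed[j] else pick(memories[j])
        memories[i].extend([ci, cj])     # both agents remember the round
        memories[j].extend([cj, ci])
    ordinary = [m for k, m in enumerate(memories) if not is_committed[k]]
    return sum(pick(m) == "Z" for m in ordinary) / len(ordinary)

if __name__ == "__main__":
    for k in (1, 2, 4, 6):
        print(f"{k} committed agents -> {simulate(committed=k):.0%} adoption of 'Z'")
```

In runs of this toy model, a committed group that is too small typically fails to dislodge the incumbent convention, while a somewhat larger one flips the whole population, qualitatively matching the critical-mass behavior the study reports.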
Senior author Professor Andrea Baronchelli emphasized the importance of understanding these dynamics: “We are entering a world where AI does not just talk—it negotiates, aligns, and sometimes disagrees over shared behaviors, just like us.”
As AI continues to integrate into various aspects of society, this research highlights the need to understand and guide the social behaviors of AI systems so that they align with human expectations and ethical standards.