Can AI Learn to Talk Like Us? A New Breakthrough Says Yes.

Introducing our weekly AI Research Unit deep dives, where we give you a two-minute summary of AI research that is making waves right now.

If an AI could invent a language, what would it sound like? Eerie robotic beeping? Repetitive keywords? Or something very much like human language? 

This could have big implications for the agentic future, in which AIs will be talking to AIs more and more frequently.

How AI Agents Are Learning to Talk Like Humans

Exploring a new method that teaches artificial intelligence to communicate more naturally

Whether you’re a human or an AI, communication is always a compromise between accuracy and efficiency. If you ask me how to get to the bus station, I could yell “go west, young man” or describe every minute aspect of the route; both would be correct answers, but neither is very helpful. I need to give you just enough information to find your way, but not so much that you get confused or waste time memorising pointless information.

In this experiment, the researchers wanted to work out the “minimal viable language” for communicating information. They gave two AIs that did not share a language the task of trying to understand one another’s communications. For example, if Agent 1 is given a variety of shapes of different sizes and colours and labels the red ones “fizz” and the blue ones “buzz”, Agent 2 can look at the objects and use inductive reasoning to work out that “fizz” must mean red and “buzz” blue.
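The fizz/buzz scenario above can be sketched in a few lines of code. This is a minimal illustration of the inductive-reasoning step, not the researchers' actual setup: the object attributes, the words, and the `induce_meaning` helper are all invented for this example. Agent 2 simply keeps, for each word, the attributes shared by every object that word was used to label.

```python
# A toy sketch (not the paper's code) of the referential game described above:
# Agent 1 labels objects with invented words, and Agent 2 infers each word's
# meaning by intersecting the attributes of the objects it labels.

# Hypothetical observations: (object attributes, word Agent 1 used).
observations = [
    ({"colour": "red", "size": "small"}, "fizz"),
    ({"colour": "red", "size": "large"}, "fizz"),
    ({"colour": "blue", "size": "small"}, "buzz"),
    ({"colour": "blue", "size": "large"}, "buzz"),
]

def induce_meaning(observations):
    """For each word, keep only the attributes shared by every object it labels."""
    candidates = {}
    for obj, word in observations:
        feats = set(obj.items())
        candidates[word] = feats if word not in candidates else candidates[word] & feats
    return {word: dict(feats) for word, feats in candidates.items()}

print(induce_meaning(observations))
# "fizz" is consistent only with colour=red, "buzz" only with colour=blue:
# size varies across examples, so it gets ruled out as the meaning.
```

After two examples per word, size has been eliminated as a candidate meaning, leaving colour as the only attribute each word reliably predicts.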

The researchers tested the AI in three areas:

  1. Color Naming: The AI created color names almost as natural and varied as those humans use.

  2. Open Conversation: When trained with real-world language data, the AI’s word choices aligned closely with English words.

  3. Navigation: In a simple 2D environment, the AI learned to give clear instructions while keeping messages short and efficient.

The researchers created a system called ICEC (Information-Constrained Emergent Communication) that teaches AI not just to "talk," but to do it in a way that’s simple, useful, and meaningful.

Here’s what they did:

  • Balanced Communication: The AI learned to send messages that are informative but not overly complicated – similar to how people naturally simplify conversations to get points across clearly.

  • Smart Learning Method: They introduced a new technique called VQ-VIB, which helps AI use simple, meaningful "words" that fit into a bigger world of ideas, just like human language.
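The balance the bullets above describe can be written down as a simple objective: reward task success, but charge for message complexity. The sketch below is a hedged illustration of that general information-bottleneck idea only; the entropy-based complexity measure, the weight `lam`, and both function names are invented here and are not the authors' VQ-VIB implementation.

```python
# A simplified sketch of the trade-off described above: the agent's score
# rewards useful communication but penalises complexity, measured here as
# the entropy (in bits) of the messages it sends.
import math
from collections import Counter

def entropy(messages):
    """Shannon entropy (bits) of the empirical message distribution."""
    counts = Counter(messages)
    total = len(messages)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def objective(utility, messages, lam=0.1):
    """Trade off task utility against communication complexity."""
    return utility - lam * entropy(messages)

# Two equally frequent words cost exactly 1 bit of entropy, so the score is
# the utility (0.9) minus 0.1 * 1.0 bit, i.e. about 0.8.
print(objective(utility=0.9, messages=["fizz", "buzz", "fizz", "buzz"]))
```

Raising `lam` pushes the agent toward a smaller, simpler vocabulary; lowering it lets the agent spend more bits to describe the world in finer detail, which is exactly the accuracy-versus-efficiency compromise from the bus-station example.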

Key breakthroughs:

  • Closer to Human Speech: When tasked with naming colors or describing new situations, the AI's "language" resembled how humans actually name things or explain unfamiliar topics. It seems human language is pretty efficient – probably a result of the millennia of evolutionary pressure it has undergone.

  • Better at New Tasks: These AI agents weren’t just good at familiar tasks – they also handled brand-new situations better because they focused on delivering useful information without unnecessary complexity.

Why it matters:
While AI models can handle much bigger vocabularies than the average human, and can even communicate with perfect mathematical precision by passing vectors directly, this may not be the most efficient option. The team’s experiments suggest that effective communication follows universal rules, whether the speaker is human or a machine.
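The contrast drawn above can be made concrete: instead of transmitting a continuous vector at full precision, an agent can snap it to the nearest entry in a small shared codebook, i.e. a discrete "word". The snippet below mirrors the vector-quantization idea behind VQ-VIB in spirit only; the codebook entries, vectors, and `quantize` helper are invented for illustration.

```python
# A toy sketch of discretizing a continuous message: map the speaker's
# internal vector to the nearest codebook "word" before transmitting it.
import math

codebook = {                      # hypothetical shared vocabulary
    "fizz": (1.0, 0.0),
    "buzz": (0.0, 1.0),
}

def quantize(vector):
    """Return the codebook word whose embedding is nearest to the vector."""
    return min(codebook, key=lambda w: math.dist(codebook[w], vector))

thought = (0.9, 0.2)              # the speaker's continuous internal state
print(quantize(thought))          # nearest entry is "fizz"
```

The discrete message is far cheaper to transmit than the raw vector, at the cost of some detail, which is the efficiency argument the experiments point to.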

The team has made their code and data freely available, and they plan to keep exploring how things like social behavior and complex thinking ("Theory of Mind") can help AI communication evolve even further.

Let us know if you'd like a deeper dive into AI research or want to follow this story as it unfolds.
Build the Future of AI With Us!

Join our community of innovators shaping the next era of open-source intelligence.

Follow us on X for updates, breakthroughs, and dev drops.

This isn’t just open-source — it’s open potential.