
One day, communicating with aliens might become simpler if we examine how AI agents interact with one another.


In the 2016 science fiction movie Arrival, a linguist is faced with the daunting task of deciphering an alien language consisting of palindromic phrases, which read the same backwards as they do forwards, written with circular symbols. As she discovers various clues, different nations around the world interpret the messages differently — with some assuming they convey a threat.

If humanity ended up in such a situation today, our best bet may be to turn to research uncovering how artificial intelligence (AI) develops languages.

But what exactly defines a language? Most of us use at least one to communicate with people around us, but how did it come about? Linguists have been pondering this very question for decades, yet there is no easy way to find out how language evolved.

Language is ephemeral: it leaves no examinable trace in the fossil record. Unlike bones, ancient languages can’t be dug up to study how they developed over time.

While we may be unable to study the true evolution of human language, perhaps a simulation could provide some insights. That’s where AI comes in — a fascinating field of research called emergent communication, which I have spent the last three years studying.

To simulate how language may evolve, we give AI agents simple tasks that require communication, such as a game where one robot must guide another to a specific location on a grid without showing it a map. We place (almost) no restrictions on what they can say or how — we simply give them the task and let them solve it however they want.

Because solving these tasks requires the agents to communicate with each other, we can study how their communication evolves over time to get an idea of how language might evolve.
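A minimal version of this setup is a Lewis-style signalling game: a sender sees an object, emits a symbol, and a receiver must guess the object from the symbol alone. The sketch below is illustrative only — the two-object, two-symbol world and the simple trial-and-error reinforcement rule are invented for this example, whereas actual emergent-communication research trains neural agents. The learning dynamic is the same in spirit: shared success reinforces whatever signals happened to work.

```python
import random

# A toy Lewis signalling game. A sender observes an object and emits a
# symbol; a receiver sees only the symbol and guesses the object. Both
# learn by simple trial-and-error reinforcement: choices that led to a
# correct guess get their weights increased.

OBJECTS = ["green_cube", "red_ball"]
SYMBOLS = ["a", "b"]

# Weights start uniform; successful (state, action) pairs get reinforced.
sender = {(o, s): 1.0 for o in OBJECTS for s in SYMBOLS}
receiver = {(s, o): 1.0 for s in SYMBOLS for o in OBJECTS}

def sample(weights, keys):
    """Pick one key at random, with probability proportional to its weight."""
    total = sum(weights[k] for k in keys)
    r = random.uniform(0, total)
    for k in keys:
        r -= weights[k]
        if r <= 0:
            return k
    return keys[-1]

def play_round():
    obj = random.choice(OBJECTS)
    sym = sample(sender, [(obj, s) for s in SYMBOLS])[1]
    guess = sample(receiver, [(sym, o) for o in OBJECTS])[1]
    if guess == obj:  # shared success reinforces both agents' choices
        sender[(obj, sym)] += 1.0
        receiver[(sym, guess)] += 1.0
    return guess == obj

random.seed(0)
for _ in range(5000):
    play_round()

# After training, each object usually maps to one dominant symbol.
for obj in OBJECTS:
    best = max(SYMBOLS, key=lambda s: sender[(obj, s)])
    print(obj, "->", best)
```

With enough rounds the pair typically settles on a consistent code, roughly one symbol per object — though nothing forces that code to resemble anything a human would find readable, which is exactly the interpretation problem discussed below.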


Similar experiments have been done with humans. Imagine you, an English speaker, are paired with a non-English speaker. Your task is to instruct your partner to pick up a green cube from an assortment of objects on a table.

You might try to gesture a cube shape with your hands and point at grass outside the window to indicate the color green. Over time you’d develop a sort of proto-language together. Maybe you’d create specific gestures or symbols for “cube” and “green”. Through repeated interactions, these improvised signals would become more refined and consistent, forming a basic communication system.

This works similarly for AI. Through trial and error, they learn to communicate about objects they see, and their conversation partners learn to understand them.

But how do we know what they’re talking about? If they only develop this language with their artificial conversation partner and not with us, how do we know what each word means? After all, a specific word could mean “green”, “cube”, or worse — both. This challenge of interpretation is a key part of my research.

Cracking the code

The task of understanding AI language may seem almost impossible at first. If I tried speaking Polish (my mother tongue) to a collaborator who only speaks English, we couldn’t understand each other or even know where each word begins and ends.

The challenge with AI languages is even greater, as they might organise information in ways completely foreign to human linguistic patterns.

Fortunately, linguists have developed sophisticated tools using information theory to interpret unknown languages.

Just as archaeologists piece together ancient languages from fragments, we use patterns in AI conversations to understand their linguistic structure. Sometimes we find surprising similarities to human languages, and other times we discover entirely novel ways of communication.

An illustration in a pixelated style mimicking the hands in Michelangelo’s The Creation of Adam.

AI agents develop their own languages. (Image credit: cybermagician via Shutterstock)

These tools help us peek into the “black box” of AI communication, revealing how artificial agents develop their own unique ways of sharing information.

My recent work focuses on using what the agents see and say to interpret their language. Imagine having a transcript of a conversation in a language unknown to you, along with what each speaker was looking at. We can match patterns in the transcript to objects in the participant’s field of vision, building statistical connections between words and objects.

For example, perhaps the phrase “yayo” coincides with a bird flying past — we could guess that “yayo” is the speaker’s word for “bird”. Through careful analysis of these patterns, we can begin to decode the meaning behind the communication.
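This kind of matching can be sketched with simple co-occurrence statistics. In the example below, the transcript tokens ("yayo", "ki", "mu") and the objects in view are all invented for illustration; the scoring uses pointwise mutual information (PMI), which measures how much more often a token and an object occur together than chance alone would predict.

```python
import math
from collections import Counter

# Each "round" pairs an utterance (a list of tokens) with the objects
# that were in view at the time. The data is invented for illustration.
rounds = [
    (["yayo", "ki"], ["bird", "tree"]),
    (["yayo"], ["bird"]),
    (["mu", "ki"], ["rock", "tree"]),
    (["yayo", "mu"], ["bird", "rock"]),
    (["ki"], ["tree"]),
]

token_counts = Counter()
object_counts = Counter()
pair_counts = Counter()
n = len(rounds)

for tokens, objects in rounds:
    for t in set(tokens):
        token_counts[t] += 1
    for o in set(objects):
        object_counts[o] += 1
    for t in set(tokens):
        for o in set(objects):
            pair_counts[(t, o)] += 1

def pmi(token, obj):
    """log P(token, obj) / (P(token) P(obj)): above 0 means 'co-occur more than chance'."""
    p_t = token_counts[token] / n
    p_o = object_counts[obj] / n
    p_to = pair_counts[(token, obj)] / n
    return math.log(p_to / (p_t * p_o)) if p_to > 0 else float("-inf")

# For each token, the object with the highest PMI is our best guess.
for t in token_counts:
    best = max(object_counts, key=lambda o: pmi(t, o))
    print(t, "->", best)
```

Here "yayo" ends up most strongly associated with "bird" because the two co-occur in every round where either appears, mirroring the guess in the example above. Real transcripts are far noisier, but the principle — statistical association between words and context — is the same.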

In our latest paper, to appear in the proceedings of the Neural Information Processing Systems conference (NeurIPS), my colleagues and I show that such methods can be used to reverse-engineer at least parts of the AIs’ language and syntax, giving us insights into how they might structure communication.

Aliens and autonomous systems

How does this connect to aliens? The methods we’re developing for understanding AI languages could help us decipher any future alien communications.

If we are able to obtain some written alien text together with some context (such as visual information relating to the text), we could apply the same statistical tools to analyse it. The approaches we’re developing today could become useful tools in the future study of alien languages, known as xenolinguistics.

But we don’t need to find extraterrestrials to benefit from this research. There are numerous applications, from improving language models like ChatGPT or Claude to improving communication between autonomous vehicles or drones.

By decoding emergent languages, we can make future technology easier to understand. Whether it’s knowing how self-driving cars coordinate their movements or how AI systems make decisions, we’re not just creating intelligent systems — we’re learning to understand them.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

 
