In October, an artificial intelligence (AI) system was given a simple prompt — “design a robot that can walk across a flat surface” — and the AI responded, creating something very different from any living animal.
The AI achieved the goal: the result was a misshapen, squishy robot that moved by inhaling and exhaling air in a… quite entertaining way.
But how it arrived at that design is quite mysterious to us humans.
Besides the comical value of watching a purple brick make its way across a table, what’s interesting is how the AI was able to do something we can’t really comprehend.
It surpassed human engineers with a simple but brilliant design that is far from perfect — yet fully able to perform what we asked.
The team of researchers from Northwestern University built a new AI model that can design robots with just a few prompts — and in just a few seconds.
After a few iterations, the result was a small, squishy block the size of a soap bar that could crawl around when inflated. The creation could walk half its body length per second.
The robot has three legs, fins on its back, and a series of holes on its surface. Inflation and deflation of air allow it to walk in a straight line.
Interestingly, the purpose of these extra appendages is entirely unknown. They are clearly necessary: remove them, and the robot stops walking. But we have no idea why.
In other words, the AI saw something we cannot and did what we asked in a way we cannot comprehend.
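What might those “few iterations” look like under the hood? Here is a toy sketch of iterative design search in Python. To be clear, this is not the Northwestern team’s actual method; the design vector and the walk_distance() scorer are hypothetical stand-ins for a real body plan and a physics simulator.

```python
import random

# Toy sketch of iterative design search, NOT the Northwestern team's
# actual method. walk_distance() is a hypothetical stand-in for a
# soft-body physics simulation that scores how far a candidate walks.

def walk_distance(design):
    # Stand-in fitness function; a real system would simulate the body.
    return -sum((x - 0.5) ** 2 for x in design)

def design_robot(n_params=8, iterations=100):
    rng = random.Random(0)
    best = [rng.random() for _ in range(n_params)]  # random starting body
    best_score = walk_distance(best)
    for _ in range(iterations):
        # Mutate the current best design slightly...
        candidate = [x + rng.gauss(0, 0.05) for x in best]
        score = walk_distance(candidate)
        if score > best_score:  # ...and keep it only if it walks farther
            best, best_score = candidate, score
    return best

print(design_robot())
```

Nothing in a loop like this specifies what the body should look like. Whatever scores highest survives — which is exactly how features no engineer would choose, like those mysterious holes and fins, can emerge.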
Inside the Mystery Box of AI
The squishy, purple soap block that walks is a prime example of the unbridgeable distance between how we think and how AI thinks.
To tell the truth, no one fully understands how most AIs think, although approaches such as Explainable AI (XAI) aim to lift the lid.
That mystery extends even to ChatGPT: not even its creators understand it. As AI scientist Sam Bowman explained, opening an AI system and looking inside will only show us “millions of numbers flipping around a few hundred times a second.” In other words, “We just have no idea what any of it means.”
Why? It comes down to several reasons. The first is that humans and AI simply think in different ways. We can see this clearly in the “hallucinations” generative AI produces when we ask it to draw something.
Humans think in a linear way. When we need to draw a picture of something, we start with a sketch, add the details, and pause along the way to evaluate our progress.
AI works differently, since its “thinking” is based on neural networks. It collects details from every source of information available to it and merges them according to statistical patterns.
For example, if it is prompted to draw a hand, it won’t think about the palm and the fingers as single, separate entities the way humans do. It will search its training data for all the fingers and palms it can find and piece them together until the probabilities suggest it should go no further.
The results can be pretty nightmarish.
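A toy sketch makes the difference concrete. Below, a “generator” assembles a hand part by part, stopping only when a random draw crosses a stop probability. The probabilities are invented for this example; this is not how any real image model is implemented.

```python
import random

# Toy illustration, not a real image model: a "generator" that keeps
# gluing on fingers while the dice say to continue. The probabilities
# here are invented for the example.

def generate_hand(stop_prob=0.25, seed=None):
    rng = random.Random(seed)
    hand = ["palm"]
    while True:
        if rng.random() < stop_prob:  # the model "feels" the hand is done
            break
        hand.append("finger")  # otherwise, add another finger
    return hand

for seed in range(3):
    print(generate_hand(seed=seed))
# Different seeds yield hands with different finger counts. Nothing in
# the code enforces "exactly five" -- only probabilities decide when
# to stop, which is why six- or seven-fingered hands can appear.
```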
For an AI to be truly successful, it must be able to train itself largely on its own, leveraging its ability to perform complex calculations far beyond human capability.
AI systems don’t have an explicit set of rules coded by human programmers; they must learn to predict or detect patterns on their own and, in effect, make their own rules.
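As a minimal illustration of learning a rule instead of being handed one, the sketch below trains a single perceptron to reproduce logical AND purely from labeled examples; no line of code ever states the rule itself.

```python
# A single perceptron learns the AND function from examples alone.
# No programmer ever writes "return a and b" -- the rule emerges
# in the weights.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights: start knowing nothing
b = 0.0         # bias

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred              # how wrong were we?
        w[0] += 0.1 * err * x1           # nudge weights toward the answer
        w[1] += 0.1 * err * x2
        b += 0.1 * err

for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

After a few passes, the learned weights encode the rule. Scale the same idea up to billions of weights, and you get systems whose internal “rules” no one wrote down and no one can easily read.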
How Concerning Is Our Inability to Understand AI?
The idea that AI can see things we cannot may unlock humanity’s path to an incredibly advanced future, but it is not devoid of dangers. According to many experts, our inability to understand AI at a deeper level is a reason for serious concern.
The more ubiquitous AI becomes, the more it will be exposed to unknown and unexpected scenarios.
In time, this can significantly increase the risk of it becoming unpredictable and uncontrollable.
Without knowing how AI makes decisions, we also have no way to tell whether an underlying bias is hampering its functioning.
Suppose, for example, that we eventually employ a medical AI to diagnose patients by extracting data from multiple sources. We would need it to be transparent: if we prescribe therapy or medicine based on reasoning we cannot inspect, we could be risking human lives.
People can’t make informed decisions if they cannot verify the work of the AI behind a service. We cannot judge whether a prediction is believable or logical if we can’t assess the criteria the AI used to make it.
And if that prediction involves a critical process, such as deciding when to replace a vital component of a dam or a nuclear plant, the consequences could be catastrophic.
The Bottom Line
The ability of AI to think in such a different and sometimes lateral way is a blessing and a curse. It can provide humanity with a force to help us achieve things we only dreamed of. But it can also spell danger, at least to many things we take for granted (such as our safety).
Leaving AI to its own devices may mean it grows out of control sooner or later. Just like a child, it needs the help and guidance of humans to be raised in a healthy way rather than becoming a dangerous, rogue teenager.