Alternatives to deep learning can help AI agents play games

A new machine learning method that draws inspiration from the way the human brain seems to model and understand the world has proven able to master simple video games with impressive efficiency.

The new system, called Axiom, offers an alternative to the artificial neural networks that dominate modern AI. Axiom, developed by a software company called Verses AI, comes equipped with prior knowledge about how objects physically interact in the game world. It then uses an algorithm to model how it expects the game to behave in response to its input, updating that model based on what it actually observes, a process called active inference.
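The predict-observe-update cycle behind active inference can be illustrated with a minimal sketch. This is not Axiom's actual algorithm; the state space, likelihood matrix, and random observations below are all illustrative stand-ins for the agent's internal model and the game's output.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 4
belief = np.full(n_states, 1.0 / n_states)   # prior belief over hidden states
likelihood = np.eye(n_states) * 0.7 + 0.1    # assumed p(observation | state) weights

def update_belief(belief, observation):
    """Bayesian update: weight the prior by how well each hidden state
    predicts the observation, then renormalize."""
    posterior = belief * likelihood[observation]
    return posterior / posterior.sum()

for step in range(5):
    # Predict the observation the current belief considers most likely.
    predicted_obs = int(np.argmax(likelihood.T @ belief))
    # Stand-in for what the game actually shows the agent.
    actual_obs = int(rng.integers(n_states))
    # The mismatch between prediction and observation drives the update.
    belief = update_belief(belief, actual_obs)
```

The key design point is that the agent never stores raw experience; it only maintains and revises a probabilistic model, so each surprising observation shifts the belief toward states that explain it better.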

The method draws inspiration from the free energy principle, which attempts to explain intelligence using mathematics, physics, information theory, and biology. The free energy principle was developed by the renowned neuroscientist Karl Friston, who is chief scientist at Verses, a company that describes its work as "cognitive computing."
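In its standard variational form (a textbook formulation, not anything specific to Axiom), the free energy $F$ scores how well an internal model $q(s)$ of hidden states $s$ explains observations $o$:

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
```

Minimizing $F$ both sharpens the internal model (the KL term) and, because $F$ upper-bounds surprise $-\ln p(o)$, pushes an agent toward states its model predicts well.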

Speaking over video from his home in London, Friston told me that this approach is especially important for building AI agents. "They have to support the kind of cognition we see in real brains," he said. "It takes into consideration not only the ability to learn things, but also the way you behave in the world."

The conventional approach to learning to play games involves training neural networks through so-called deep reinforcement learning, which works by trial and error, adjusting parameters in response to positive or negative feedback. This method can produce algorithms capable of superhuman gameplay, but it requires a great deal of experimentation to work. Axiom masters various simplified versions of popular video games, called Drive, Bounce, Hunter, and Jump, using far fewer examples and much less computing power.
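The trial-and-error loop the paragraph describes can be sketched in tabular form (the deep variant replaces the table with a neural network, but the feedback mechanism is the same). The two-action game and its reward function here are hypothetical.

```python
import random

random.seed(0)

q = {0: 0.0, 1: 0.0}   # value estimate per action (assumed 2-action game)
alpha = 0.1            # learning rate

def reward(action):
    """Hypothetical game feedback: action 1 tends to score, action 0 does not."""
    return 1.0 if action == 1 else -1.0

for episode in range(1000):
    action = random.choice([0, 1])                      # explore by trial and error
    q[action] += alpha * (reward(action) - q[action])   # adjust toward the feedback
```

Note how much raw experience even this toy needs before its estimates settle; that sample inefficiency is exactly the cost the article says Axiom's model-based approach is meant to avoid.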

"The method's overall goals and key features track what I think are the most important problems to focus on," said AI researcher François Chollet. Chollet is also exploring novel approaches to machine learning, and he has built a benchmark that measures a model's ability to learn to solve unfamiliar problems rather than simply mimic previous examples.

"This work strikes me as very original, which is great," he said. "We need more people trying out new ideas off the beaten path of large language models and reasoning language models."

Modern AI relies on artificial neural networks that are roughly inspired by the brain's wiring but work in fundamentally different ways. Over the past decade or so, deep learning, a method that uses neural networks, has enabled computers to do all kinds of impressive things, including transcribing speech, recognizing faces, and generating images. More recently, deep learning has produced the large language models that power chatbots.

In theory, Axiom promises a more efficient way to build AI from scratch. Gabe René, CEO of Verses, said it may be especially useful for agents that need to learn efficiently from experience. René said a financial company has already begun experimenting with the technology as a way to model markets. "It is a new architecture for AI agents that can learn in real time and is more accurate, more efficient, and smaller," René said. "They are literally designed like a digital brain."

Ironically, given that Axiom offers an alternative to modern AI and deep learning, the free energy principle was originally influenced by the work of the British-Canadian computer scientist Geoffrey Hinton, who won both the Turing Award and a Nobel Prize for his pioneering work on deep learning. Hinton was for many years a colleague of Friston's at University College London.

For more on Friston and the free energy principle, I highly recommend this 2018 WIRED feature. Friston's work has also influenced an intriguing new theory of consciousness, described in this 2021 WIRED review.
