Company Spotlights / Market Commentary

Active Inference: Humanity’s Final Great Invention

  • Declan O’Flaherty

    Declan holds a Bachelor of Commerce from the University of Alberta and has over 4 years of experience investing in financial markets. As a fundamental investor, Declan embraces the investment principles of Warren Buffett and his disciples. This puts a focus on finding businesses with healthy financials, competent and accountable leadership, enduring competitive advantages, and those that are selling at a discount to what they are worth.


Edge is publishing this content on behalf of the Company and is compensated for investor relations services.

Verses AI (CA: VERS.NE) (USA: VRSSF) is disrupting the most disruptive technology in the world. It transforms artificial intelligence from something that mimics knowledge into something that formulates ideas and fosters curiosity on its own. This is causing a paradigm shift in the industry and leading many to wonder if this is the true path to Artificial General Intelligence, a.k.a. AGI.

But understanding this technology isn’t easy. There is an absurd amount of nuance, which can be discouraging when trying to make sense of it. This article attempts to distill that.

Because whether you like it or not, AI, and shortly thereafter AGI, is going to alter the fabric of reality. You can either embrace it or quickly fall behind. The choice is yours.

The Letter that Put the World on Notice

It has been two weeks since Verses published an open letter to OpenAI in the New York Times. In it, Verses CEO Gabriel René outlines how OpenAI, and other major players, are struggling to produce AGI that is “adaptable, safe [and] sustainable.” He then goes on to highlight OpenAI’s Charter which states: “…if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.” René closes the section by expressing that: “VERSES qualifies for [its] assistance.”

Bold words for a company 100 times smaller than the creators of GPT-4.

But the letter is not merely meant to garner attention. Active Inference is real. It has backing. People and organizations, like Verses, are using it. And it is shaping up to be our best chance at creating human-level intelligence, or greater, that works with us, not against us.

So why haven’t OpenAI and other leading AI players taken notice? My guess is that change can be slow and it is not easy to admit defeat. These organizations have a lot riding on Generative AI technology, and to be fair, they have made significant progress in recent years.

Moreover, nearly all computer scientists, software engineers, big data architects, and the like are disciples of the “Godfather of AI,” Geoffrey Hinton, a pioneering figure in deep learning and artificial neural networks. For them to embrace a new approach, such as Active Inference, it will take a significant event to change their minds; they may need to see it to believe it.

However, the sooner they do, the greater the benefit Active Inference AI will have for us all. If OpenAI decides to accept Verses’ invitation, there is no doubt that it will set a new precedent for AGI collaboration. With the brightest minds, on both sides, working together, they can create machines that propel our civilization beyond what is comprehensible.

But, let’s not get ahead of ourselves. For now, there remains a divide between the Generative AI and Active Inference AI communities. Therefore, if you want to understand the essence of AI, you should familiarize yourself with both approaches. Each offers its advantages, though it is clear that one path is closer to reaching the ultimate goal than the other. Let’s dive in.

The Inherent Problem with Generative AI

Generative AI is an evolution of the deep learning framework pioneered by Geoffrey Hinton. It uses artificial neural networks to create connections between billions, and sometimes trillions, of data points. This enables the AI to recognize patterns and formulate predictions, which is how chatbots like ChatGPT can write entire essays from just a few sentences of prompting.

To create these Large Language Models (LLMs), developers feed massive datasets into the AI and program it to find connections within the data. Then, using deep learning and reinforcement learning techniques, the AI derives patterns and relationships between the data points, which it can then use to make predictions. Eventually, the AI becomes highly adept at these computations and begins generating outputs based on the prompts you give it.
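To caricature that training loop, here is a toy next-word predictor in Python. It is an illustrative sketch only, orders of magnitude simpler than any real LLM, but it shows the same core idea: learn statistical patterns from data, then use them to predict what comes next.

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" (it follows "the" twice in the corpus)
```

A real model replaces these raw counts with billions of learned neural-network weights, but the pattern-then-predict structure is the same.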

But this is where the problems start to manifest.

For one, LLMs are inherently biased. Depending on what data you feed it, the AI will produce responses that skew one way or another. For example, when prompting the image generator Midjourney with “Tech CEO Skydiving in Egypt,” it produces four images of white men skydiving in the desert. Why? Because the majority of the data Midjourney was likely trained on contains “Tech CEOs” who are white. This is just a fun example, but you can see how this could be quite problematic when assessing someone’s credit score, job skills, and the like.

Taking it a step further, LLMs struggle to differentiate between what they know and what they do not know. If you ask a chatbot a question it doesn’t have the answer to, it will produce a response regardless of whether it is accurate or not. That is because the AI assigns a “relationship score” to every data point it analyzes. These scores fall between 0.0 and 1.0 but never reach exactly 0.0 or 1.0. This causes the AI to hallucinate, since every response it generates is a “best guess” rather than the truth. I don’t know about you, but I have a hard time trusting a machine or human that isn’t 100% honest when sharing information.
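The mathematical root of that “never exactly 0.0 or 1.0” behavior can be seen in the softmax function, which models commonly use to turn raw scores into probabilities. A minimal Python sketch (illustrative only; the numbers here are made up):

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Even when one option scores far above the rest, no probability
# reaches exactly 1.0 and none reaches exactly 0.0 -- the model
# always emits a "best guess" rather than a verified fact.
probs = softmax([10.0, 0.0, -5.0])
print(probs)
assert all(0.0 < p < 1.0 for p in probs)
```

Because every exponential is strictly positive, every option keeps some sliver of probability, which is why the model can always generate an answer, confident or not.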

Moving on, another limitation of LLMs is that they do not learn in real time and cannot interoperate with other agents and IoT devices. If you go onto ChatGPT 3.5 right now, you’ll see that its last knowledge update was in January 2022. This means that any information from after January 2022 does not exist within the LLM’s database. To resolve this knowledge gap, OpenAI’s scientists must retrain ChatGPT with new data. This can be quite time-consuming and costly depending on how much data is required; training GPT-4 alone reportedly cost OpenAI over $100 million.

But that’s not all. In addition to costly updates, these LLMs only have access to their personal databases. This means that they are unable to communicate with other LLMs and are privy only to the information they store. If we want to eventually create autonomous AI that operates independently of humans, we must have agents that can adapt in real-time and share knowledge/information as it unfolds. Otherwise, we will be stuck with AI that is impressive but lacks the ingenuity to act decisively and instinctively on its own.

Which brings me to my last point. AI must be explainable. If we are going to trust machines with our intelligence and livelihoods, we must ensure that they act in our best interests at all times. Not only that, but we need to be able to correct them when they do not. Generative AI is not capable of this, nor can we communicate our values to it effectively. LLMs simply sort through trillions of data points, perform millions of calculations, and then produce an output.

The best example of this is the AlphaGo project created by Google DeepMind. AlphaGo was trained to play the ancient game of Go, and it eventually beat top player Lee Sedol four games to one. If you study each game, you will see that AlphaGo played in ways no human has ever played. But if you tried to understand how it executed each move, you would need to sort through millions of calculations each time. Doing this for all five games would take years, making the system’s reasoning nearly impossible to comprehend.

The same goes for Generative AI and deep learning models in general. Correcting even one bad output would be exhaustive. Therefore, it would be highly risky to let autonomous machines act independently without the ability to prevent or correct poor decision-making, and it is unlikely we will allow it. A new solution is required.

Ironically, one such solution exists. But until the teams at OpenAI, Microsoft, Google, Meta, etc., realize that building bigger and faster LLMs is not the answer, Active Inference will be reserved for those who are willing to think outside the box. Once they do figure it out, those early adopters will be years ahead of the competition. Allow me to explain what I mean…

What Makes Active Inference AI So Special?

Imagine that you are a newborn baby once again. You are welcomed to the world for the first time and unsure of what to make of it. Despite this lack of understanding, your mind immediately knows how to pump your heart, fill your lungs, and feel your mother’s touch, among other things. Though an amazing feat already, this is only the beginning of what you are capable of.

Over time, as you grow, you will make more and more sense of this world. As you do, things will become more familiar to you and will require less energy and focus (e.g., learning how to walk). Eventually, many of those early obstacles and uncertainties will become second nature. This will allow you to focus on more important things and explore subjects that spark your curiosity.

But the beautiful thing is that you do not need to learn everything on your own. Instead, many humans before you have spent time studying these subjects and documented their findings along the way. This makes it easier for you to learn faster because you can simply piggyback off them rather than needing to remember it all. Better yet, you can form new connections that unlock discoveries that once seemed impossible. With this combination of distributed knowledge and personal experiences, you and the rest of humanity will create a world that grows exponentially better over time.

That is the essence of Active Inference AI. It learns like a human, only more efficiently and with less emotion. Invented by Dr. Karl J. Friston, Chief Scientist at Verses AI, Active Inference is a cognitive approach to AI that models how living things come to make sense of a highly stimulating and unpredictable world.

At the root of this approach is the Free Energy Principle, which explains how things that exist seek to minimize their free energy by reducing the uncertainty around them. To do so, the entity (e.g., a human or an AI), using minimal training data, makes predictions about the uncertainty it is trying to resolve and updates its beliefs as new information arrives. Once it reaches an understanding, it can focus on new problems again and again, leading to greater improvements along the way.
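This predict-then-update loop can be sketched with plain Bayes’ rule as a stand-in for the full Free Energy Principle. The function names and numbers below are my own illustrative assumptions, not Verses’ implementation; the point is simply that each observation shrinks the agent’s uncertainty.

```python
import math

def entropy(p):
    """Uncertainty, in bits, of believing an event is true with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def update(prior, likelihood_true, likelihood_false):
    """Bayes' rule: revise a belief after one new observation."""
    evidence = prior * likelihood_true + (1 - prior) * likelihood_false
    return prior * likelihood_true / evidence

belief = 0.5  # start maximally unsure (entropy = 1 bit)
for _ in range(3):  # each observation favors the hypothesis 4:1
    belief = update(belief, 0.8, 0.2)

# Belief converges toward certainty, so entropy (uncertainty) falls.
print(round(belief, 3), round(entropy(belief), 3))
```

Minimizing free energy is richer than this (it also covers acting on the world to gather better observations), but the direction of travel is the same: beliefs are continually revised so that surprise, and with it uncertainty, goes down.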

The cool thing is that it does not need to focus on or understand everything at once to execute these tasks or overcome problems. Instead, Active Inference agents attend only to the information they require, compared to LLMs that must tap into their entire database and make millions of calculations every single time. This explains how humans can perform complex tasks like driving a car safely despite hundreds of vehicles, people, signs, and lights crowding their environment; even when their senses are overwhelmed, they manage to home in on the main task at hand.

Now imagine an AI that can do all of this and more, while requiring significantly less computing power and energy. That is what Verses is creating through Genius. It is an Active Inference agent that can formulate predictions and update its beliefs in real time. This allows it to learn about its context as things evolve and achieve optimal outcomes by actively testing multiple approaches at once. It also means that the AI is far less prone to bias, since it doesn’t rely on a static framework, and that it won’t lie or hallucinate, since it acknowledges when things are unknown.

But this is only the beginning. To create truly intelligent agents that encapsulate the best qualities of nature, they must also be able to share knowledge and work together safely and ethically. This is where the Spatial Web Standards come into play: created by Verses, donated to the Spatial Web Foundation, and being implemented by the IEEE (the organization behind the standardization of Wi-Fi and Bluetooth) in 2024. With the HSML, HSTP, and UDG standards, developers can create AI agents that communicate and cooperate simultaneously using a shared database and common language. This language is easily explainable and auditable, making it accessible and comprehensible to humans and machines alike. Most importantly, it enables humans to govern and regulate AI agents by establishing our beliefs and values within the modeling language. With these standards in place, AI and AGI can be made to work with us, not against us.

So, given all of the favorable characteristics Active Inference has to offer, organizations are taking notice of the company behind its creation. Already, Verses has worked, or is working with the European Union; NRI Distribution; Blue Yonder; 686 Apparel; SVT Robotics; Simwell; Dentons; a Top 10 Fortune 100 National US Pharmacy Retailer; Nalantis; Cortical Labs; NASA; and Volvo. These applications span everything from autonomous drones and vehicles to space industry standards, supply chain management, digital twin simulations, and more. And with five additional members participating in the Genius Beta Program, you can expect even more partnership announcements and applications shortly.

The exciting thing is that this all occurred within Genius’ first year of development. As the Beta Program reaches a conclusion and Genius is made available to the broader public in 2024, Active Inference agents will become even more sophisticated with no limitations in sight. If it continues on its current trajectory, and based on Verses’ projections, AGI may be attainable in just a couple of years. Imagine, for the first time in human history something more intelligent than us. Agents, modeled by Active Inference, that are capable of unlocking the universe’s deepest secrets and propelling civilization further than any human can. That is Genius. That is what Verses is developing.

The Bottom Line

There is a lot to digest when it comes to artificial general intelligence. Whether it’s Large Language Models, Deep Learning, Generative AI, Active Inference, or the Free Energy Principle, there is much to understand if you wish to make sense of it all. But don’t be discouraged. AGI is a wonderful creation, and when made to work with humans, not against them, it can help us create the future we always imagined. Take the time to understand these concepts and familiarize yourself with the different approaches, and you will have the knowledge necessary to capitalize on the most incredible technologies humankind has to offer.


We are not brokers, investment, or financial advisers; you should not rely on the information herein as investment advice. If you are seeking personalized investment advice, please contact a qualified and registered broker, investment adviser, or financial adviser. You should not make any investment decisions based on our communications. Our stock profiles are intended to highlight certain companies for YOUR further investigation; they are NOT recommendations. The securities issued by the companies we profile should be considered high risk and, if you do invest, you may lose your entire investment. Edge Investments and its owners currently hold shares in Verses AI stock and are compensated by Verses for Investor Relations Services. Edge Investments and its owners reserve the right to buy and sell shares in Verses AI without further notice, which may impact the share price. Please do your research before investing, including reading the companies’ public filings, press releases, and risk disclosures. The company provided information in this profile, extracted from public filings, company websites, and other publicly available sources. We believe the sources and information are accurate and reliable but we cannot guarantee it. The commentary and opinions in this article are our own, so please do your research.

Copyright © 2023 Edge Investments, All rights reserved.


