VERSES AI Inc. (NEO:VERS)

Status Update

Jul 29, 2025

James Hendrickson
President and COO, VERSES

With me is Gabriel René, our Co-Founder and CEO. We're happy you were able to take time out of your day to be here. We're going to go for about 30 minutes today and cover two major things. We'll start with an update on our uplist plans and the work we've been doing on that front, and then Gabriel will give an update on our corporate strategy, some of the work we're doing on the technology front, and how these pieces come together in a really interesting and coherent story.

We'll start with the plans that I think most of you have been aware of over the last several months. It's no secret that we have been looking to uplist to a major Tier 1 U.S. exchange. The reasons are listed on the screen, and many of you probably know them better than I do. Essentially, it's access to larger pools of capital, generally greater liquidity, more tools available for us to raise capital, and ultimately being on an exchange that understands what we're doing and better aligns with our business structure and business model. You're probably all aware that exchanges such as the Nasdaq or the New York Stock Exchange have several requirements that we continue to work on satisfying.

One of those requirements is often a reverse stock split, which is needed to reach the minimum share price for listing on those exchanges. On the advice of our team, our attorneys, our bankers, and our finance team, we have taken steps over the last several months to satisfy those requirements. Another thing that happened during that time is that we became an SEC reporting company: we moved from being a foreign private issuer in the United States, as a Canadian-listed company, to being a domestic SEC reporting company.

That comes with different requirements for what we're able to share publicly. Many of the things that we wanted to share, or have historically been able to share, we were not sure we could, as we worked through what was permissible at the various times we were attempting financings and this uplist. So we were unable to be as communicative with you as we would normally like to be. That, combined with all of the activities we took on in order to move to a Tier 1 exchange, I know has been very, very hard for you. We know how much our shareholders, our fans, and our investors believe in what we're doing, and we're deeply grateful for that.

These challenging times, combined with a series of difficult macroeconomic events, have been very hard for you and for our shareholders. They've been hard for us as well. We are here to tell you, first, thank you for staying with us through that time. We're grateful that you've had the vision and foresight to continue being our champions. Having such strong backers among our shareholders, investors, and the general fans of the work we're doing at VERSES is encouraging every single day, even when some of these things have been hard.

We're not through with our plans to complete an uplist. There are a number of things we are still working on with our lawyers, our finance team, and our bankers to make sure we are well positioned for the next phases of that process. Everything we've done over the last two quarters has been in support of this move toward the Nasdaq or the New York Stock Exchange, a Tier 1 U.S. exchange. The good news is that out of these challenges come opportunities that make us stronger. While Gabriel, I, and our finance team have been focused outwardly on moving onto these stronger capital markets, the rest of the team has been able to do extraordinary work during that time.

We would like to talk to you about some of that extraordinary work. Some of it we've already shared in press releases and in newsletters; we're grateful to those of you who subscribe, and hopefully they've been helpful. As we move into this next section, we want you to see all of the exciting things we're doing. One of the things I'm looking forward to hearing from Gabriel is how the pieces we have announced, some bigger and some smaller, all fit together and tell a very powerful and coherent story. While these last few months have been challenging, we've never been stronger than we are right now. This work is so exciting, and I cannot wait for you to see it.

With that, I'm going to pass it over to Gabriel. Gabriel, thank you for being here, and we'd love to hear your view on how these things come together. Uh-oh, I think we don't have your audio, Gabriel.

Gabriel René
Co-Founder and CEO, VERSES

How about now?

James Hendrickson
President and COO, VERSES

Perfect.

Gabriel René
Co-Founder and CEO, VERSES

Okay. Good to see you, James. Thank you, everyone, for attending. I'm reminded of a scene in Apollo 13 where Lovell and the crew are reentering Earth's atmosphere on the way back from the moon, and the radio blackout has lasted longer than expected. Ed Harris's character is standing there while everyone counts the seconds, and one of the PR guys to his right says, this is a disaster, because he's concerned about the optics. Ed tightens up his tie and says, I believe this is going to be our finest hour. That's what I believe. I believe this is our finest moment. The markets have a particular set of challenges that are separate from the challenges that the company faces.

We have to take on all of these challenges. What we've demonstrated over the last 60 to 80 days with respect to the technical challenges we've been working through, I think, gives us the right foundation to address the market challenges as well. As the saying goes, every crisis is an opportunity. The opportunity we are perfectly positioned for is that most of the world is investing in AI, but AI is largely failing to deliver what we can think of as real-world intelligence. Current AI has a lot of impressive demos and some amazing capabilities. We've seen this with the large language models, with some of the humanoid robots, and with autonomous vehicles. In every single case, there is what you might call the hallucination problem.

All of these AIs struggle to address real-world challenges, in part because they have to learn all of the data and all of the options for what actions they might take at once. This is the whole notion behind big data, big tech, big chips, big energy: everything is required. The result is chatbots that hallucinate and make things up, that still fail basic math tests while doing exceptionally well on some PhD-level tests, and autonomous vehicles that routinely brake for plastic bags. Just a couple of weeks ago, we saw Anthropic's chatbot Claude given $1,000 and asked to run a simple sort of business inside their shop, buying candy bars to sell. It started buying tungsten metal instead and basically lost all the money.

This ability to reason and deliver real-world intelligence continues to be a challenge. We see robots dancing and doing flips, but they can't wash the dishes or set a table. This is partially why we've seen slow enterprise adoption: the real-world challenges are not just language-based, they're enterprise and industrial. This is where VERSES has decided to focus, zagging where everyone else has zigged, if you will. What I think we have now achieved is a demonstrably much stronger foundation for reliable real-world intelligence. It's designed to be enterprise-ready from day one. It's designed to be domain-applicable, meaning it trains on the data that customers want in order to solve mission-critical applications. It's useful for real-world deployments.

Things we've talked about range from routing multiple taxis across a city more effectively, to sensing and modeling the world in real time, which we're going to touch on briefly, to taking robots to the next level. I believe VERSES is positioned to solve this last-mile problem that we see across the entire spectrum of AI, the Internet of Things, and robotics. Frankly, that's been the plan the whole time. In the last few months, I think we've executed all of the key pieces for that kind of success. That's why I'm so excited about what comes next. You want to go to the next slide there? A little bit of where we've been. 2024 was really a setup year. We started with multiple releases of Genius, working with different beta customers, and with each test we learned something new.

We improved the product. The goal was to get to a version that commercial clients could use by themselves. In 2025, April is when we crossed that threshold. Simultaneously, over the last five years, working with the Spatial Web Foundation and with the IEEE, we've been developing the underlying standards for the Spatial Web. VERSES as a company has always worked at the intersection of cognitive computing and spatial computing. This idea of the Spatial Web, where the web moves into the world, means you have to deal with a world of billions of sensors, robots, drones, and holographic data sets. This is the underpinning that says, yes, and you will need intelligent agents to run on top of all that.

Those standards were approved last summer and formally ratified in June. This sets up an amazing new layer for the internet, for the web in the world, which we've been building into Genius from day one and testing in various use cases. One of the finest moments, to connect the dots to what we were talking about before, is that NASA JPL has been one of the first testers of HSML. We were able to demonstrate cross-platform interoperability in lunar simulations. This is an indication of several things. Number one, it's the ability to have a game engine like Unity and a simulation platform like NVIDIA's Omniverse, with multiple parties interacting with different lunar modules and different rovers together in a sort of multi-agent capability across two separate platforms.

This, in a way, is kind of like the dawn of the Spatial Web. There are some videos out there and some great reports by Denise Holt and others if you're interested. On the R&D front, we had a big breakthrough with the RGM. This was Karl's breakthrough that showed, for the first time, that you could build hierarchical generative models. We're going to talk in a moment about how that starts to apply to robotics. There was also one you probably didn't catch called VBGS, in part because these acronyms are not very useful; I'm going to connect the dots for you on that as well. VBGS is essentially computer vision that can sense the world in real time and update its model, instead of relying on a static one.

We've all looked at Google Earth or Google Maps, seen a picture of our house, and realized, hey, there's my old car; I don't have it anymore. Real-time maps are actually the key to autonomous vehicles, robots, and the rest, and being able to build them on the edge is critical. The next big thing we were able to demonstrate was AXIOM. This was the sort of Atari test that many have been waiting on for some time. AXIOM was actually a bigger breakthrough than we anticipated because it demonstrated this cognitive architecture with different modules for the brain: parts to do sensing, parts to model the world, parts to simulate what it should do and plan, and then parts to act. Finally, at the beginning of this month, we started to tease something called Habitat.

Habitat is a robotic simulation test against some of the top robot sims in a real-world scenario. Habitat is built around a home-assistant benchmark, where you have to get a robot to set a table or clean up a house, practical things beyond breakdancing and doing flips. That is why this is quite a serious test. Now, again, why? Spatial computing and cognitive computing together open up a massive potential market. The reason we tested with so many customers during that early beta period was to probe the boundaries of where we thought traction might emerge for us.

As we've gone into 2025, with the launch of the commercial version of Genius, we now have a significant number of customers across a meaningful set of industries, which indicates the applicability of the technology and its unique value, particularly in enterprise and industrial applications. We're going to touch on that briefly. Finally, the world is starting to take notice. One of the complaints we've gotten from investors the whole time, which obviously we share, is: how come the world hasn't been paying attention? In the last 60 days, it has started to. We're making headlines everywhere from Popular Mechanics to Diginomica to Psychology Today, plus the headline in the Wired AI newsletter. Just last week it was IEEE Spectrum, which I think was a pretty meaningful milestone.

We're going to talk you through how all these ingredients come together into a cohesive, comprehensive product, platform, and go-to-market strategy, which we've already begun implementing this spring and which we expect to build greater and greater traction, because the company is at a key inflection point: we've gone from the research phase into the revenue-generating phase. How does this all fit together? You've got AXIOM, the ability to think in software in ways that are much faster and much more accurate, with much cheaper, much smaller models. You've got these robotics demos, which we're going to get into a little more. You've got the Spatial Web. It's Active Inference, the Spatial Web, physical computing. At the middle of this is that goal.

How do you get real-world operations, not just chatbots, not just content generation? How do you make real-world operations more intelligent, more effective, better, cheaper, faster, and at scale? Let's face it, the knowledge economy is compelling, but it's about 15% of the global economy. Most of the world is enterprise and industrial, roughly 80% of the global economy. That is the total addressable market VERSES is targeting. It's why we think we have a complete edge on most of the competition: no pun intended, we designed these to work at the edge, not in giant data centers that run on gigawatts of power. Next slide, please. We're going to walk you through how these various pieces go together. Genius, in the form of AXIOM, is all about adaptive real-time learning. I've said this before, but I want to make sure that you get it.

Every other commercial AI largely uses one form of training. They train on millions or billions of examples of something, and then, after doing all that pattern matching, they try to search over those examples and apply them. Every time a car brakes because steam is coming out of the ground, every time a chatbot recommends you eat rocks as part of your daily diet, every time a robot, like the one in China just yesterday, tweaks out and starts spasming on the floor, they've thrown an error: they tried to search for the answer, couldn't find it, and failed. They freak out. What we have here is an entirely new architecture. Now, I want you to think about this.

That reasoning problem, the ability to make sense of a situation, decide what you should do, and then learn from what you've done, is what you could think of as active. Everyone else is building static AI. What you want is an AI that can adapt. This solves the hallucination problem and allows AIs to self-improve. What we've done with AXIOM, and the reason we started getting headlines around the world, is we beat Google DeepMind's top model at generalization. That means AXIOM is able to learn, in the case of this very simple two-dimensional world, to do spatial reasoning, interactive reasoning. By the way, this is where ARC-AGI-3 is going next year: the so-called hardest test in the world for AI.

I would argue that AXIOM in Game World is already a demonstration of where everyone else is hoping to get a year from now: 60% better gameplay than DreamerV3, Google's model, 7x faster, 39x more compute-efficient because it doesn't need to crunch everything at once, and 440x smaller. This is not by accident; this is by design. It's more efficient because it learns and adjusts as it goes. Amazingly, the reason it's small is that it self-prunes: it actually shrinks and gets rid of information it doesn't need. This is why you can build expert models that are geniuses instead of trying to build one supermodel for everything, which continues to have the same underlying weaknesses and brittle reasoning that we see in these other systems. Spatial reasoning in two dimensions, but let's keep going.
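Before moving on, here is a concrete flavor of that self-pruning idea: a minimal Python sketch of an online mixture model that spawns a component when an observation looks new and prunes components whose share of the data falls below a floor. The class name, thresholds, and update rule are invented for illustration; this is the general technique, not AXIOM's actual mechanics.

```python
import numpy as np

# Illustrative "self-pruning" online model: grow components for novel
# observations, drop components that stop explaining enough of the data.

class SelfPruningMixture:
    def __init__(self, dim, distance_threshold=2.0, min_weight=0.01):
        self.means = np.empty((0, dim))   # component centers
        self.counts = np.empty(0)         # how much data each explains
        self.distance_threshold = distance_threshold
        self.min_weight = min_weight

    def update(self, x):
        """Assign x to the nearest component, or spawn a new one."""
        if len(self.means) > 0:
            d = np.linalg.norm(self.means - x, axis=1)
            k = int(np.argmin(d))
            if d[k] < self.distance_threshold:
                self.counts[k] += 1
                # running-mean update, weighted by the component's count
                self.means[k] += (x - self.means[k]) / self.counts[k]
                self._prune()
                return
        self.means = np.vstack([self.means, x])
        self.counts = np.append(self.counts, 1.0)
        self._prune()

    def _prune(self):
        """Drop components whose share of the data is below min_weight."""
        keep = self.counts / self.counts.sum() >= self.min_weight
        self.means, self.counts = self.means[keep], self.counts[keep]

m = SelfPruningMixture(dim=2)
for x in np.random.default_rng(0).normal(0.0, 1.0, (200, 2)):
    m.update(x)
print(len(m.means), "components survive")
```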

If we think of this as a new product feature, that was sort of Genius Think, right? Thinking. What we released last year, which you probably didn't catch, was VBGS. VBGS stands for Variational Bayes Gaussian Splats. It's basically a new kind of machine vision that allows cameras powered by Genius agents to map real-world objects and then entire environments, rooms, and so on. Here we demonstrated much greater accuracy, up to 10x more accurate, 60% faster learning, and over 100x more compute efficiency. Significant gains over the state of the art, not by accident, and not just through great engineering, but through better science. James, if you could just refresh the deck, there are a couple of sub-points here that I want to make sure we don't lose. What's that first part? What is AXIOM demonstrating?

The ability to have AI agents that can think and do spatial reasoning much more effectively than the state of the art. With Genius SENSE, we now have real-time machine vision. There's one last thing that's super important here: this real-time machine vision is continually learning. That means that instead of the kind of errors that often happen with current machine learning, called catastrophic forgetting, Genius SENSE allows a robotic system, an autonomous vehicle, whatever, to update its model of the world as things change in the environment. Again, back to that Google Maps scenario: your car was sold three months ago, but the map hasn't been updated. In the world of real-time robotics, this is critical. This is why you get this kind of real-time forward loop, where everyone else is doing backward loops, trying to solve everything at once.
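To give a rough sense of what continually learning vision means in practice, here is a minimal Python sketch of a streaming Gaussian update, the kind of per-element statistic a splat-style map might maintain. It folds in each observation once and stores no history, so there is no replay buffer to forget. This is a simplification for intuition, not the VBGS algorithm itself.

```python
import numpy as np

# Toy continual map element: a running Gaussian over the 3-D points
# observed for it, updated one frame at a time with no stored history.

class StreamingGaussian:
    def __init__(self, dim=3):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # sum of outer products of residuals

    def update(self, point):
        """Welford-style update: fold in one new observation."""
        self.n += 1
        delta = point - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, point - self.mean)

    @property
    def cov(self):
        return self.m2 / max(self.n - 1, 1)

# Feed points in as the camera sees them; the estimate refines in real time.
g = StreamingGaussian()
for p in np.random.default_rng(0).normal([1.0, 2.0, 0.5], 0.1, (100, 3)):
    g.update(p)
print(g.mean, np.diag(g.cov))
```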

You just learn, build a model, predict on that model, and then update your model. That forward loop is something VERSES is pioneering. Now we've got Genius Think. We've got Genius SENSE, with not just real-time reasoning but real-time machine vision. The latest thing, which we started to tease and which we're going to share a little more of with you, is how this applies to real-world actions. James, can you go full screen for a second so we can see this a little bigger? This is a test against one of Meta's top robots. The paper is out now; the blog post is coming. What you have here is a robot that is learning in real time, up to 93% faster. The alternative needed 300 million training samples.

Ours needed zero, because we're able to give it a basic understanding of physics in the real world. The robot then builds a probabilistic model, what we call a Genius model, and learns how to navigate through the environment just by reducing its uncertainty about what it needs to do. Each of the actions you see is something the robot was able to learn and then compose. It has spatial understanding and spatial reasoning capabilities. It's able to work out that its arm won't reach the apple on the other side of the table where it was told to put it, and it decided by itself that it needed to go around the couch.
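To pin down that "learn, build a model, predict, update" loop and the idea of acting to reduce uncertainty, here is a toy Python sketch: an agent on a one-dimensional corridor holds beliefs about a hidden goal, acts on its current best estimate, and updates its beliefs with Bayes' rule after each noisy observation. Every name and number here is invented for illustration; it is not Genius code.

```python
import numpy as np

# Toy forward loop: act on current beliefs, observe, update, repeat.
# The agent only gets noisy warmer/colder signals about the goal.

rng = np.random.default_rng(1)
N, goal = 20, 14            # corridor cells; hidden goal position
belief = np.full(N, 1 / N)  # prior belief over where the goal is
pos = 0

def likelihood(obs, pos):
    """P(obs | goal=g) for every candidate cell g: 'warm' is likelier near g."""
    p_warm = np.exp(-np.abs(np.arange(N) - pos) / 4.0)
    return p_warm if obs == "warm" else 1.0 - p_warm

for step in range(40):
    # act: step toward the expected goal position under current beliefs
    target = int(np.round(belief @ np.arange(N)))
    pos += int(np.sign(target - pos))
    # observe: noisy warmer/colder signal generated by the true goal
    obs = "warm" if rng.random() < np.exp(-abs(goal - pos) / 4.0) else "cold"
    # update: Bayes' rule sharpens the belief, which steers the next action
    belief *= likelihood(obs, pos)
    belief /= belief.sum()

print(f"final position: {pos}, MAP goal estimate: {int(np.argmax(belief))}")
```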

This is a huge milestone, in my opinion, because every other form of robotics has essentially relied on massive reinforcement learning. Ours is so much more performant because it actually doesn't train at all, which might sound a bit weird: we give it some underlying information that lets it understand where objects are and how to move about the environment. This is a breakthrough in robotics because, classically, you either have the large-data reinforcement learning approach, with millions and millions of examples, 300 million in the case of the Meta robot, or you have to program the behavior in as deterministic, logical steps. Now, James, a lot of the VERSES team, yourself, Hari Thiruvengada, our CTO, [Venkat], you guys have backgrounds from very significant robotics companies: Seegrid, Amazon, Andrew Gray, SoftBank.

[crosstalk] Yes, exactly. A big part of the team we've built here has this, I would call it, physical computing and spatial computing background, in the form of robotics. Can you explain why this Genius ACT capability, which we're now transitioning from research into the product stack along with Genius SENSE and the AXIOM breakthrough for our upcoming releases, is important? Why is it meaningful? What does this mean for the world of robotics and autonomous cars?

James Hendrickson
President and COO, VERSES

Yeah, I can start on this. It gets pretty complicated on the robotics side. First, a couple of things. When we show simulations like this for robotics, remember that the robotics space as a whole always uses simulations. When we show a simulation, it is a simulation underpinned by the physics of the underlying hardware.

We can show the hardware, but it's expensive and takes up space and all these other things. The robotics space as a whole uses simulation for that. Just keep that in mind.

Gabriel René
Co-Founder and CEO, VERSES

That's a physics-based simulation. It's not a video game; it's a real-world, physics-based simulation. These kinds of benchmarking tests against competitors treat it as essentially the same physics as the physical world and the environment the robots operate in.

James Hendrickson
President and COO, VERSES

Exactly, exactly. A couple of things here really stand out. The biggest piece is that the overall approach is goal-directed. We give the robot a goal, and we give the robot knowledge of itself in this multi-agent, hierarchical way you're seeing on the screen: the end effector, labeled as the pick-and-place module, the arm control, the vision system. Those things are typically separate systems that do their own thing and then communicate back. That creates a number of problems, because they're not operating as a holistic system and the system has no awareness of itself. When you go back to watch the video on the website, or rewatch this recording, you'll notice a series of things: the robot is aware of itself and of what its limitations are.

The reason the robot in the video goes around the couch is that it knows, given the number of arm segments and degrees of freedom it has, it can't reach its target location to place the item on the table. Instead of trying to go up on tiptoes, which robots do not have, and reach across, it goes around the table, just as we would, because we have knowledge of ourselves. The thing that's most interesting to me is that all of this then becomes applicable to a variety of different types of robotic systems. I'm going to pause this for a second and tell you what you're looking at. We're back to simulation; simulations are totally normal and are what everybody who seriously models the robotic world uses.

What you're seeing here is a green ball and a series of red balls, and the robot is goal-directed to get to the green ball without touching the red balls. As we zoom out, you'll see this across a variety of different robotic setups: basic arms, wheeled robots, cobot-like activity, manufacturing robots, a variety of different robots and different types of things they need to avoid or grasp. In a traditional robotics environment, those behaviors would have to be manually programmed each time you wanted to do something. This scales almost infinitely, and it scales because it is an underlying system that is goal-directed, much like how we are goal-directed when we go do something. There's a fancy name for this in robotics called inverse kinematics.

Robots are really bad at figuring out what their task is and then working backward to all the movements required to do it. We use forward kinematics: here are all the steps that need to be taken to go do this thing. I am not often surprised or blown away, especially by the work of our own teams. When this paper came out, my first thought was, I've got to go show some people, because this is something I have personally never seen done before.
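For readers who want the kinematics distinction pinned down: forward kinematics maps joint angles to an end-effector position, while inverse kinematics works backward from a desired position to the joint angles that achieve it, the classically hard direction. Below is a minimal Python sketch for a two-link planar arm, including the kind of reachability self-check described above; the functions, link lengths, and scenario are illustrative, not VERSES code.

```python
import numpy as np

# Forward kinematics for a planar two-link arm: given joint angles,
# where does the gripper end up? Inverse kinematics asks the reverse.

def forward_kinematics(theta1, theta2, l1=0.4, l2=0.3):
    """Return (x, y) of the end effector for joint angles in radians."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

def can_reach(base, target, l1=0.4, l2=0.3):
    """The self-knowledge check: is the target inside the arm's annulus?"""
    d = np.hypot(target[0] - base[0], target[1] - base[1])
    return abs(l1 - l2) <= d <= l1 + l2

base, apple = (0.0, 0.0), (1.2, 0.3)
if not can_reach(base, apple):
    # like the robot in the demo: reposition the base rather than strain
    print("target out of reach; navigate the base closer first")
```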

Gabriel René
Co-Founder and CEO, VERSES

I know this has been echoed in many of the conversations we've been having around this. We're going to share much more about the Genius ACT capability for robotics, autonomous vehicles, and other real-world intelligence embodiments, the sort of physical AI that you've seen NVIDIA and everyone else talk about. In my opinion, what we've just seen is the most valid approach to the last-mile problem in robotics and autonomous vehicles.

Look at the scale of the opportunity for humanoid robots alone, or consider how every car company in the world is trying to figure out how to make cars that can learn and adapt in real time, so they can solve problems at the edge and then, as you saw in that final example, share what they learned with other vehicles and other robots, even ones with fewer joints, fewer components, or a slightly different organization. That is the goal-based approach; that is Active Inference. Now, let's tie it all together. What you were seeing there was a robot with multiple agents: the base was an agent, the body was an agent, and Genius SENSE was part of that vision.

That was a scan of the world, the environment, the tables, the chairs, the refrigerator, all coming together with the AXIOM-like thinking, reasoning, and planning. You say, great, how do we take this out to the rest of the world, to the hundreds of billions of IoT devices out there? How do we define the goals we want these different AI agents to achieve in physical and digital systems, working together as collectives? How do we set up the rules and policies for the things we don't want them to do?

That's where the Spatial Web comes in. The Spatial Web standards basically give us a way to translate human language and ideas, even about physical and spatial interactions in the world, into a language that computers and machines can understand, letting us encode that information into the physical environment so that robots can understand it. Maybe it's: hey, pick up this apple, not that apple. Maybe it's: don't go around the table, because some policy limits that. Now you can encode that information. Humans can't necessarily see it, but the robots can, the AIs can, and it allows digital twins to be embedded and encoded into the environment.
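As a purely hypothetical illustration of what encoding goals and rules into the environment could look like, here is a small Python sketch of a machine-readable spatial policy and the check an agent might run before acting. Every field name below is invented for this sketch; it is not the actual HSML schema defined in the IEEE standards.

```python
# Hypothetical machine-readable spatial policy for a zone. Invented
# structure for illustration only; not the HSML specification.

kitchen_policy = {
    "zone": "kitchen/dining-table",
    "goal": {
        "action": "pick-and-place",
        "object": {"type": "apple", "instance": "apple-2"},  # this apple, not that one
        "destination": "kitchen/counter",
    },
    "constraints": [
        {"rule": "no-entry", "zone": "kitchen/behind-table"},  # a zone the robot may not cross
        {"rule": "max-speed", "zone": "kitchen/*", "value_mps": 0.5},
    ],
    "authorized_agents": ["robot/rosie"],
}

def permitted(policy, agent, zone):
    """An agent checks the encoded rules before acting in a zone."""
    if agent not in policy["authorized_agents"]:
        return False
    return all(not (c["rule"] == "no-entry" and c["zone"] == zone)
               for c in policy["constraints"])

print(permitted(kitchen_policy, "robot/rosie", "kitchen/behind-table"))  # False
```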

Let's go to the next slide. We've been talking about these various pieces, and I want to bring it all together. You've got AXIOM, a cognitive architecture that can think and learn in real time, designed to perform on edge devices, to adapt as it goes, and to generalize across multiple scenarios: spatial reasoning in two dimensions. You've got Genius SENSE, the ability to sense real-world objects and environments in real time and update as you go. You've got Genius ACT, the ability for robotic systems to adapt as they go, to understand the environment, and to be goal-directed. You're not scripting each step; you're telling them: go set the table, go wash the dishes, go put this away. This is a complete breakthrough. Why?

Because what we're trying to get to is real-world intelligence, and you cannot get there with static models, because the world is messy and it keeps changing. Everyone else's approach is to either hard-program behavior into robots or try to learn it from a billion examples and hope that applies to every scenario. We all know that reality is undefeated. So what are these various pieces telling us? Active Inference gives us AI agents that can think. With that capability, we can apply it to real-world environments, robotics, and autonomous vehicles. With the Spatial Web, we can give agents the ability to share what they learn and to set up not only goals but also rules.

Now you're starting to see this ability to think, to act, and for agents to collaborate at scale. This is the next version of the web, which we believe will be powered by Active Inference-based agents that can think, act, collaborate, and work together. Genius is the only platform pioneering this approach. All of the work we've been doing, including the RGM paper that I told you last year would be game-changing, that hierarchical model, is applied in this robotics example in our little Rosie bot here. Rosie, because she doesn't dance, but she can set the table. I hope this leads to the realization some investors have asked for: hey, can you bring this all together so we understand?

Yes: the real world is messy and it keeps changing. You need AIs that understand the physics of their environment, can adapt in real time, and enable real-world intelligent operations, specifically in the enterprise, industrial, and real-world environments where we think the much bigger slice of the pie is, beyond content generation and knowledge work. That's what we'll be rolling into the releases. And if you look at the traction we've had on the current release, which is a sort of trailer of things to come, we've gotten amazing traction in just the couple of months since the Genius release. James, why don't you walk everybody through that, and then we'll wrap up?

James Hendrickson
President and COO, VERSES

There have been a number of questions, and I'm going to try to address several of them here regarding revenue, customer adoption, and partnerships. We've announced several partnerships and relationships so far this year using Genius Enterprise. We're also working behind the scenes with a variety of enterprise and pro customers that I think are really exciting, not because of any specific named customer, although there are household names on that list, but because of their diversity, starting with where they are geographically: Australia, New Zealand, Asia, Europe, and North America, across a variety of industrial verticals. We have several CRM companies, several banking institutions, several communications consulting companies, law firms, and large media and advertising players.

The fact that we have this diversity of customers and users, all of them acting as both customers and, essentially, future channel partners, is important. What I mean is that they have their own use cases for Genius, but they're also looking to amplify it into broader and broader applications.

Gabriel René
Co-Founder and CEO, VERSES

They want to resell them to their customers, which in many cases are Fortune 500s that they're already servicing.

James Hendrickson
President and COO, VERSES

Exactly, exactly. The adoption from this has been, frankly, very strong. I don't want to say surprising, but from an organic perspective, adoption has been very strong across a diverse group of users globally. We're not at the point where we're going to project revenue, certainly not on this call, or break down enterprise versus pro customers. What we are seeing is that pro is a very viable entry point for enterprise customers: they enter on pro and move to enterprise. I don't have a breakdown for that yet, but those are the trends we're seeing so far. I am very optimistic about the ramp we have on the customer and partner side, and I'm looking forward to showing more use cases as we deploy these things more broadly.

What's exciting about the previous slide, where we bring all these things together, is that we now have a coherent, consistent, overarching story of tools that partners and customers can pull from to solve business problems.

Gabriel René
Co-Founder and CEO, VERSES

You know, James, there's one thing I want to highlight there, which is not obvious: one of the things that I believe is special about VERSES, and that makes me very proud, is that we have a team of people that I don't think exists in any other company in the world, able to take essentially bleeding-edge academic research, things demonstrated in neuroscience with neurons in petri dishes and published in Nature, and build it into a software pipeline that leads all the way to customers seeing double-digit gains, like Analog in smart-city-type deployments, and across multiple other industries. It's not easy to build a team that can take that advanced science and turn it into enterprise-grade software.

I think it's one of the reasons we're starting to see recognition across a whole spectrum of players, from standards bodies, to partnerships with NASA and JPL, to recognition from François Chollet, the head of the ARC Prize, who designs the ARC-AGI benchmarks, to Wired, Popular Mechanics, and Psychology Today. If you squint, you can see that this broad recognition is actually a huge acknowledgment of how well we've hit each of these notes along the way, from advanced research to applied research to product development to early commercial sales to resellers. These are signals the market can't read yet. But I think the people on this call today can read them.

I think that's to your advantage, because in my opinion the company is much more valuable than the market has been able to value it at. What we're very hopeful about, now that we've checked these major boxes and demonstrated this foundational shift into the growth stage of the company, is that the market is going to begin to wake up, because, frankly, this is just the beginning. As you can see from what's to come, there are much bigger dents in the universe we hope to be delivering shortly.

James Hendrickson
President and COO, VERSES

Gabriel, I'm going to ask you one question that has come up repeatedly. Let me see if I can ask it in a way that allows you to answer it carefully. The questions have been around the ARC Prize: whether we have participated in the past, and what our thoughts are on participating in the newly announced ARC Prize 3, or ARC-AGI-3.

Gabriel René
Co-Founder and CEO, VERSES

My first thought is that they're still developing ARC Prize 3, and it will not come out until next year. We're in direct conversations with the team and providing feedback on how we think the test should be designed in order to actually be an adequate test. Ironically, in my opinion, AXIOM is already kind of an ARC-AGI-2.5. Go look at the scores on ARC-AGI-2 now; I think Grok got 15% relative to humans. Interactive, real-time reasoning is a much harder problem than the kinds of little puzzles ARC 1 and ARC 2 test. That's also why we didn't do ARC 1 or 2: we're all about spatial reasoning, and if a test can't address real-world activities, it's not compelling to us. That's not where we've been headed.

ARC 3 is headed to where we already are: can you do interactive, high-efficiency, adaptive, real-time reasoning? What François Chollet talks about is this idea of fluid intelligence versus crystallized intelligence, and that is really the stark difference we're trying to draw here. We are exploring ARC 3. I would also argue that the demonstration we just showed you, with Rosie and Habitat, is something closer to an ARC 5. These benchmark tests are being designed for a very different set of reasons, but they're arriving at many of the same conclusions. VERSES is on its own path, and we're not here just to do performative stuff.

In fact, one of the reasons AXIOM took so long was that it wasn't just about showing up and saying, "Look, we did Atari." It was making sure we could build it into Genius, which required design decisions we had to learn about along the way. We were right, and frankly, we squashed the competition. I think we'll continue to do that. I would be surprised if we don't participate in ARC 3, and at this point we will actually be influential in how difficult ARC 3 will be. Frankly, in my opinion, we're already surpassing it, because at the end of the day, what we're interested in is not academic or industry benchmark tests.

We're interested in real-world deployments and in solving the massive, multi-trillion-dollar last-mile problems for robotics, for autonomous vehicles, and for intelligent data, decision-making, and prediction. I think that's why we're starting to get the kind of pickup and notice that we're getting. Would it be nice to also beat everyone at ARC 3? Yeah, it would be nice.

James Hendrickson
President and COO, VERSES

That is a great answer.

Gabriel René
Co-Founder and CEO, VERSES

Why don't we wrap it up here? There's an article that came out back in 2018, a little before I first became aware of Karl, titled "The Genius Neuroscientist Who Might Hold the Key to True AI." One of the reasons we call the product Genius is the simple idea that the underlying approach here has everything to do with good science and with understanding how the brain actually works, how biological and natural intelligence works in the world. It is adaptive. It is flexible. It is goal-directed. It is ultimately collective. If you look at the features VERSES is working on, forget all the research, forget all the acronyms, forget all the benchmarks. What are we doing? We're building a system to solve these last-mile problems for enterprise and industrial applications, to make everything a little bit smarter.

That means you have to have systems that can model and think, essentially the capabilities of a mind, embodied, that can sense and act. We've just demonstrated model-and-think with AXIOM, sense with VBGS, and act with the Habitat robotics example. Then those systems have to work together at scale. That's how you scale. You don't scale by getting more and more data, more and more chips, more and more energy in larger and larger data centers. You scale naturally with more and more agents. This is why the collective piece, the Spatial Web, is important.

The ability for agents to talk with each other, to share, and to do it in ways that are very trustable, where we can build the rules into these systems, especially if we're giving goal-directed instructions to autonomous agents, and to have models they can share. Everyone here today is trying to build a more accurate model of VERSES as a company: how do I get a good sense of its value? Hopefully, what we've shown you today is that we believe we have now proven and demonstrated the ability to leapfrog the rest of the industry, right at a time when the headline news continues to highlight the hallucination problem and the fundamental errors of those architectures.

We took a different path, and that path is a bit of a zigzag, and it takes longer. Because it's frontier work, sometimes there's a river or a mountain, and you have to figure out how to go over, under, around, or through. That's what we've been able to do consistently the whole time, and that's what we believe we'll do next. I believe this is our finest hour, and we'll demonstrate that in the weeks and months to come. Thank you, everyone, for your time. James, have any final questions come in?

James Hendrickson
President and COO, VERSES

I'm going to try to answer a couple of the questions that came up as major themes, and I really appreciate them. There were a number of questions around our future uplist plans. For what I hope are fairly obvious reasons, we cannot really talk about that. We are still actively pursuing it, and it is still very much part of our strategy. We are, more or less, at the mercy of, but working closely with, the exchanges and the regulators to make sure we're well positioned. More to come, but we really can't share right now. There were also a number of questions around revenue. The customers we showed, and those we didn't, are all, or at least the majority of them are, revenue-producing, and the revenue we're producing is aligned with the value we're providing them.

We expect that to grow. We are not giving specific guidance or revenue forecasts, but there is revenue associated with the customers we're talking about and working with. We're very clear that this is a for-profit business, and we want to make sure the great work the team is building is reflected in the prices we charge and the benefit customers get from it. I think those were the majority of the big questions. We want to be as available to you as possible, so please reach out if you have questions. One of the requests was for a demonstration of Genius, which we will look at doing more of. If we did not answer something, please reach out to us.

You can reach us through our website, at ir@verses.ai, or through a number of other channels, including LinkedIn. We will do our absolute best to get back to you. We really value all of the support you've given us over the last several months, and your kind attention to this webinar, listening and asking great questions. We'll end it there. Thank you very much for participating, and we look forward to seeing you again soon. Thanks.

Gabriel René
Co-Founder and CEO, VERSES

Thank you.

James Hendrickson
President and COO, VERSES

Bye-bye.
