Welcome to the webinar, The Intersection of AI and Spatial Computing: Bringing the Power of AI into the 3D World. At this time, all participants are in a listen-only mode. A question and answer session will follow the formal presentation. As a reminder, this webinar is being recorded. Before we begin the webinar, I'd like to remind everyone that statements made on today's webinar, including those regarding future financial results and industry prospects, if such are made, are forward-looking and may be subject to a number of risks and uncertainties that could cause actual results to differ materially from those described in the webinar. I would now like to hand the webinar over to Lyron Bentovim, President and CEO of the Glimpse Group. Lyron, the floor is yours.
Thank you, everyone, for joining us today. In the last few earnings calls, we have discussed our strategic transition to focus on providing enterprise-scale spatial computing, cloud, and AI-driven immersive recurring software solutions, or SpatialCore, as we refer to it internally, led by our subsidiary company, Brightline Interactive. Today, I want to take the time to give investors a deeper dive into SpatialCore, providing an overview of what it does and showcasing where it fits in our strategic ecosystem. I am joined today by Tyler Gates, Glimpse's Chief Futurist and General Manager of Glimpse's subsidiary company, Brightline Interactive, who will take you through this overview. After Tyler's presentation, we will open the floor for a brief question and answer session. And now I will turn it over to Tyler. Tyler?
Thank you, Lyron. Really appreciate it, and thank you everyone for joining. I'm going to provide an introductory look at the points of value in the spatial computing market, and specifically what Brightline Interactive is doing to pioneer, to lead, and really to advance not just the technologies, but how those technologies can be understood by the market and by our customers. In terms of Brightline's impact in the space, where we are is that we're camped at the intersection of AI and spatial computing. What we're doing is we're building tools that are built on what are called open standards. There's a big difference between open standards and open source.
To be clear, it's not open source. We're building on standards that are open to the general public, which allow us to create and generate interactions and software tool systems that are compatible in a backwards and forwards capacity. What we mean by that is we're able to use software systems that may have been invented twenty years ago and that provide significant value in the spatial computing market, and we're able to integrate them into our platform, which looks forward into future integrations. There are a couple of different ways that we go about doing that, but overall, what we're doing is generating multimodal synthetic data systems that supercharge the output of what are called AI workflows.
Typically, in the modern day, when people talk about artificial intelligence, most people are talking about artificial intelligence that relates to the two-dimensional world. Really, we're talking about AI in text. That's what you see with things like generative pre-trained transformers, or GPTs. You see AI affecting and having an impact on text, large amounts of text, large data lakes of text. What Brightline is focused on is putting AI into 3D. What we mean by that is using artificial intelligence workflows to affect the three-dimensional world. And by the three-dimensional world, we mean the real world, but also a virtual representation of the real world. Most people would refer to that as a digital twin. We generate these digital twins using real-world sensor information.
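[Editor's aside: as an illustration of how real-world sensor information can seed a digital twin, one common technique is back-projecting a depth sensor's readings into 3D points using a pinhole camera model. This is a minimal sketch, not Brightline's actual pipeline; the intrinsics and the tiny depth map are hypothetical.]

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a 2D depth map (meters) into 3D camera-space points.

    depth: rows of depth values; fx, fy, cx, cy: pinhole camera intrinsics.
    Each valid pixel (u, v) with depth d maps to a 3D point via the
    standard inverse projection: x = (u - cx) * d / fx, y = (v - cy) * d / fy.
    """
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:  # skip invalid sensor returns
                continue
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append((x, y, d))
    return points

# A toy 2x2 depth map where every pixel is 2 m away.
pts = depth_to_points([[2.0, 2.0], [2.0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

In practice, many such point clouds from many sensors are registered into one shared world frame to form the twin; this shows only the single-sensor step.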
Spatial computing capabilities really transform the interaction between the physical world and the digital world. In fact, to lots of different machine-led systems, like an autonomous vehicle or a robotic system, there is no difference between the physical world and the virtual world. The virtual world, in many ways, represents an environment that is much faster and much more capable for training sensor systems, autonomous vehicle systems, and various other types of computer-led systems that need to operate in the real world. Just as a brief opening into the world of spatial computing: in terms of growth, the spatial computing market is experiencing significant growth.
In 2022, it was valued at $57.2 billion, and it's expected to reach $150.5 billion by 2028, a CAGR of nearly 18%. In fact, most signs point toward a $620.2 billion market by 2032, with a CAGR of 18.3% from 2024 to 2033. So what we're talking about here is white space in terms of revenue, in terms of the potential for significant value over the next five to ten years, and specifically how Brightline is situated. We are at the intersection of these two major forces in computing technology, the first represented by artificial intelligence and the second represented by spatial computing.
And so with spatial computing, what we're talking about in terms of impact is being able to pull together major forces that exist within the general purpose computing market. We're talking about open standards. We're talking about IoT and smart sensors. We're talking about 5G networks. We're talking about using cloud, not just for storage and transport, but for compute. So instead of computing on a desktop machine, we're computing on a data center that operates, as they say, in the cloud. But that data center is running lots of graphics processing units, lots of GPUs and CPUs, all connected to one another inside a very large, warehouse-style environment.
So we're talking about a computing environment that's very large and that allows us to perform computations that exist at a global scale. And how we do that is using geospatial data. So we create an understanding of the physical world in virtual capacities through the use of digital twins by accessing geospatial data. That's effectively how we sort of create the basis upon which we then provide computation.
A lot of people have referenced these large data lakes. Really, what has been going on for the last twenty or thirty years, maybe even longer, is a deliberate effort by major corporations and public sector customers alike to collect large amounts of data: how their systems operate, how things need to be built in the future, how people need to interact with systems, and how those systems and those human beings interacting together then provide better decision-making in the real world. All of that data has been collected over the last several decades, and the question has been: What do we do with big data? What do we do with all of this data that we have?
And we really believe that we have come up with a very important, very meaningful answer, and that is: we use that data alongside AI workflows, so that it's not just our hands on the keyboard that are generating the awareness, the inferences, the insights. That big data, mixed with artificial intelligence, allows us to perform really meaningful computations on top of real space. And I would simply say that computing on top of space is what spatial computing is. All right, Lyron, we can advance to the next slide. So how does that manifest? For Brightline, it manifests in a product platform called SpatialCore. SpatialCore is a back-end infrastructure for scalable and interoperable spatial computing and AI tools. So what does that mean?
It means that there have been significant advancements in the ways that very large companies, like Microsoft and NVIDIA and several others across the globe, have been advancing the world of AI and spatial computing. And what you see is companies like Microsoft, Amazon, Google, and others providing cloud computing capacities: this ability to not just compute on the computer under your desk, but actually to compute outside of your office, outside of your manufacturing facility, outside of the campus, on a data center, and then to transport that computing over a network, over something like a 5G network, out to the points of need.
And what that allows for is the ability to compute very, very large amounts of data off your actual campus, but in a way that is very secure, for the purposes of being used by government entities, by private industries, and by civil services across the globe. What this enables is the ability to take sensor inputs from the real world. These sensor inputs can come from satellite information, from autonomous vehicles, or from sensor systems that exist in a smart city. We're talking about collecting information off of satellite imagery. So there are lots of different ways to pull information from the real world, and we take that real-world sensor input data and bring it into what we call an immersive, unified command interface.
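[Editor's aside: a unified interface over heterogeneous sensors typically starts by normalizing every raw input into one common, geo-referenced record. This is a minimal sketch under that assumption; the schema and field names are hypothetical, not SpatialCore's actual data model.]

```python
from dataclasses import dataclass, field
import time

@dataclass
class SpatialObservation:
    """One normalized observation in a shared world frame (hypothetical schema)."""
    source: str       # e.g. "satellite", "vehicle_lidar", "city_camera"
    lat: float
    lon: float
    alt_m: float
    payload: dict = field(default_factory=dict)
    timestamp: float = 0.0

def normalize(raw: dict) -> SpatialObservation:
    """Map a raw, source-specific record into the common schema so that
    downstream microservices never have to know which sensor produced it."""
    pos = raw["position"]
    return SpatialObservation(
        source=raw["source"],
        lat=pos["lat"],
        lon=pos["lon"],
        alt_m=pos.get("alt_m", 0.0),          # default when a source omits altitude
        payload=raw.get("data", {}),
        timestamp=raw.get("ts", time.time()),
    )

# Two very different sources flow into the same record type.
obs = normalize({"source": "satellite",
                 "position": {"lat": 40.64, "lon": -73.78},
                 "data": {"band": "infrared"}, "ts": 1700000000.0})
```

The value of the pattern is that command-interface code can consume one stream of `SpatialObservation` records regardless of origin.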
These command interfaces are really different for the different users of our SpatialCore platform. With SpatialCore, the emphasis is on being back-end. It's not deliberately delivered in a front-end capacity for a customer, because what that customer really wants is an application that they own entirely, a system, a platform that they have complete ownership and control over. What they then need is a back-end operating system, a way to operate all of the different functions they're performing in relationship to spatial computing. That unified command interface is really built from what we would call microservices and computing frameworks, using different formats for scene understanding, to understand the physical world in a virtual capacity, and using virtual formats of sensor data, sensor inputs.
And then, beyond all of this, is being able to use all of those systems to generate synthetic data. What the synthetic data generation capacity really allows us to do is generate information, insights, and awareness that enhance AI capabilities for the end customer. So what you have is sensor data coming into the system and a unified command interface, which is the back-end environment that allows the customer to reformat and rearrange their different microservices so that they can provide enhanced AI capabilities on the front end.
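[Editor's aside: the appeal of synthetic data is that a simulation knows its own ground truth, so labeled training samples come for free. A minimal sketch of the idea, with an entirely hypothetical sensor-noise model; real pipelines render full scenes rather than scalar measurements.]

```python
import random

def synthesize_samples(n, seed=0):
    """Generate labeled synthetic 'sensor' samples by randomizing scene parameters.

    Each sample pairs a simulated noisy measurement with the ground-truth value
    that produced it; the label is known exactly because we control the sim.
    """
    rng = random.Random(seed)  # seeded for reproducible datasets
    samples = []
    for _ in range(n):
        true_distance = rng.uniform(1.0, 50.0)        # ground truth from the sim
        noise = rng.gauss(0.0, 0.05 * true_distance)  # toy 5%-of-range noise model
        samples.append({"measured_m": true_distance + noise,
                        "label_m": true_distance})
    return samples

data = synthesize_samples(1000)
```

A model trained on `measured_m` against `label_m` learns to undo the simulated sensor noise; scaling `n` is cheap, which is the whole point of synthetic generation.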
Those enhanced capabilities are things like being able to understand the physical environment at your manufacturing facility: not just having pictures of it, or videos of it, or a digital twin that everyone can look at, but having a digital twin upon which computing is done, so that we can generate understanding, information, and context from that virtual environment. That virtual environment then allows us to train various systems. Lots of manufacturing circumstances in our use cases involve robotic systems, robotic systems operating simultaneously with humans inside a large manufacturing environment. In order for those robotic systems to truly learn how to operate most efficiently, effectively, and safely inside that environment, and to provide the most efficient interaction with humans, those robotic systems need a true understanding of that physical environment.
And those robotic systems gain that understanding by training and learning in a virtual environment. The virtual environment allows for the ability to train all different types of systems, whether they're standalone sensor systems, cameras, and things like that, or full-on robotic systems that are fully autonomous, or robotic systems guided by a certain set of principles, or really a combination of all of those things working together: humans working with robotic systems inside a defined space where it's very important for all involved to be able to make the best decisions possible. So effectively, what we generate through SpatialCore are what are called decision support systems. These decision support systems make it easier for humans to make the most effective, efficient, and meaningful decisions in really critical moments.
Lots of our work falls within the category of public sector, meaning working with the Department of Defense and with civil government customers as well. In lots of those civil government circumstances, it's really valuable to have accurate digital twins of a real-world environment, because a civil government customer has to work with lots of different constituencies. The circumstance may be such that we're at an airport, and there's a government customer who needs to have full command and understanding of not just the physical space, but all of the virtual things that exist within that physical space.
We're talking about not just airplanes and vehicles on a runway, but we're also talking about all of the sensor systems and all the different frequencies of connection and interaction that are going on at the same time. It becomes really valuable for that government customer to be able to have control and total understanding at all times of the interaction between the physical world and the virtual world. It becomes even more valuable because the government's awareness of the totality of that allows them to better interface, better interact with a hotel property, with a restaurant property, with the city planners, with a commission, with a safety commission, with emergency management, and all these different constituencies that basically play really vital roles in how an airport functions.
And so it becomes really valuable to have a spatial computing platform operating with these AI workflows, providing much faster, much more relevant, much more contextually rich decisions, made not just by humans but also, at the same time, by autonomous systems, robotic systems, and the like. SpatialCore is a new way of approaching the challenge of spatial computing. It's an operating system, a way to affect the physical world by first affecting the virtual world. There are lots of different points of value that we generate inside that virtual world, in markets like training and simulation.
That means training, like I was talking about, actual human beings to understand how to more safely operate inside their work environment, while at the same time training non-human systems, sensor systems, how to operate effectively inside the real world. And there's no better way to train all of these humans and systems than by doing that in what we would call an authoritative source of truth, a one-for-one representation. Not a video game understanding of the Earth, not an artist's rendition of the Earth, but an actual geotypical, one-for-one representation of the Earth. It is effectively what NVIDIA and Jensen Huang call Earth-2: a second version of the Earth that exists totally virtually, so that in that virtual space, you can perform all types of functions.
These functions are vastly impactful in the simulation and training space, but well beyond that, in the understanding of how to advance a physical environment by being able to create it first in the virtual space. All right, so next up, we're going to talk a little bit about the market and how SpatialCore fits into what is referenced as an overall tech stack. So Lyron, if you'll just advance to slide four here. What we see on this slide is a high-level explanation of where SpatialCore fits, sandwiched between the points of value of cloud computing on one side and advanced network and the immersive interfaces on the other.
I haven't spoken very much yet about things like virtual and augmented reality and simulations, mostly because Brightline primarily focuses behind the headset, behind the tablet, behind the simulator. We're primarily camped in the spatial computing function area, so we're really what happens before it gets to the headset. We work very closely with major partners like HTC and many others, who exist to create really impactful, really meaningful head-mounted displays, tablets, full-motion simulators, and these sorts of things. And really, what makes those headsets or tablets powerful is the content, the computing that's coming into them. So how do those systems get really valuable computing?
Well, they get that through a platform like SpatialCore, because SpatialCore sits a little bit further behind the headsets and the devices; it's more in the operating system layer of how those devices and headsets will actually be able to experience spatial computing. It's really valuable that Tim Cook can get up on stage and talk about the Apple Vision Pro being a spatial computing device, and it very much is that, but it is also just a device. It requires spatial computing to be delivered to it in some form or fashion. And so where Brightline is camped is really in the area of providing that spatial computing to these really meaningful interface systems. That's why, at the bottom of this chart, you see immersive interfaces.
That's mixed reality, virtual reality, and various simulator systems. Just above that is network, advanced network. We've been partnered for many years with AT&T, and we've been working on millimeter wave 5G applications and technology delivery systems for about five years now. What this allows us to do is to see the future, certainly the now, but also the potential distant future, of being able to leverage millimeter wave 5G and other frequency bands of 5G in order to provide immediate access to spatial computing services, spatial computing services that are operating on top of AI workflows and on top of cloud compute environments, and to be able to actually deliver dimensional, three-dimensional information at speed directly to the point of need.
Lots of people would sort of reference this relationship as cloud systems interacting with edge computing, and how edge computing can get really meaningful information is through advanced network systems like 5G. And then you have SpatialCore sort of sandwiched between the advanced network and the interfaces and cloud computing. And within SpatialCore, what we're interfacing, what we're integrating are these microservices that operate within these digital twins, one-for-one representations of the physical world, that then leverage AI workflows, so that we can provide value not just through systems or applications that we write on the computer, but also through applications and systems that are written by AI systems that generate what we would call emergent qualities or novel intelligence. Information that we, as humans, could have never processed or put together ourselves.
But what we're actually doing is generating that value by using AI systems, AI workflows, to generate value from a digital twin that is operating some sort of microservice, let's say, to measure the distance between two important buildings on a manufacturing facility. To measure not just the distance between them, but all of the other non-visual things that would be meaningful to understand in order to, maybe, build a third building in between those two buildings, or to be able to perform a really meaningful simulation for safety training or something like that.
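[Editor's aside: to make the building-distance microservice concrete, if the twin is geo-referenced, the simplest version of "measure the distance between two buildings" reduces to a great-circle calculation between their coordinates. A minimal sketch with hypothetical coordinates; a real twin would measure against its own 3D geometry, not just lat/lon.]

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points,
    using the haversine formula and the mean Earth radius."""
    r = 6371008.8  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two hypothetical buildings 0.01 degrees of latitude apart (roughly 1.1 km).
d = haversine_m(40.7128, -74.0060, 40.7228, -74.0060)
```

Wrapped behind an API, this is the shape of a microservice: a small, single-purpose computation that any front-end application can call against the twin.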
It would be really valuable to have a true 100%, 360-degree understanding of that physical environment, and the way that we would do that is to have that understanding held in a virtual space and then to perform compute functions on top of it. We would reference those as microservices and AI workflows. Then, at the top of this chart, you have cloud computing. Inside cloud computing, we're talking about accelerated computing, which, again, is what you hear Jensen talk about oftentimes at a GTC or SIGGRAPH presentation from NVIDIA. We're talking about on-demand scale, being able to provide computing services on demand based on the need at the edge level, at the manufacturing facility, or at the government facility.
You may not have the appropriate amount of computing at your actual facility, and you may need to load-manage that scale. You may all of a sudden have a need for thousands of people to access the same simulation environment that's representative of your facility, and that need may arise in less than twenty-four hours. It would be really meaningful to be able to provide that amount of scale on a moment's notice, and the way that you do that is over cloud computing. Cloud computing then offers a third and final piece here that's very important to us, which is a secure computing infrastructure.
And so oftentimes, because we work with public sector customers, when we talk about cloud, we get the question: Well, how do you keep this secure? We do that by providing on-premise computing solutions using cloud resources, and then, if we're off-premise, by doing that in secure cloud instances; our partnerships provide those secure cloud instances so that they're compatible with all of our public sector customers. And so that partnership ecosystem represented on the right is in relationship to these three or four main components of this tech stack. Microsoft is our leading partner on the cloud side. We work with Microsoft specifically on accelerated computing for on-demand scale and, as I was saying before, for secure infrastructure.
NVIDIA is who we leverage for lots of different AI workflows, for microservices, and for support with our digital twins. NVIDIA allows us to build these modular tool sets, where we can use the same tool systems for various different products and microservices for different use cases. It's very, very efficient and allows us to scale our services to our end customers very quickly and very efficiently. And then, as I mentioned, AT&T, they are running one of the world's most powerful and fastest network systems in AT&T 5G, which we have significant experience deploying for our various customer circumstances.
And then at the bottom there, you see HTC. We've been partnered with HTC for several years now, and they provide us head-mounted displays that work really well for our customer use cases, where we can stream directly to those headsets. They're what are called tetherless headsets: they don't have a cable coming off the back anchoring you to a computer. We're able to walk around in rooms, walk around outside, and perform full-on virtual reality and immersive simulations inside these head-mounted displays.
So they really just provide a way for us to deliver to our customers exactly what they need, which is access to spatial computing and AI workflows at their own facilities, inside and outside, in a capacity where they can move all around the space and not be tethered by a cable. And so really, at a high level, in summary, what we're talking about is SpatialCore. What Brightline provides is SpatialCore as a platform out to our public sector and enterprise customers, so that these customers can get the value out of AI. And again, we're not just using AI for two-dimensional things like text.
We're putting AI into 3D, and we're allowing our customers to have a never-before-seen, never-before-understood context for the physical world, because it is a physical world that we can recreate in a digital or virtual capacity so that we can perform computing functions on top of it. And these computing functions are being performed by our incredible team, while at the same time being supported by our really valuable partnerships with Microsoft and NVIDIA, building really important AI workflows inside these environments. So with that, I will pause there, and I'll turn it back over to you, Lyron, and we can certainly open it up for questions.
Thank you, Tyler, and I want to thank every one of you for your interest. And now I'm going to turn the call back to the operator, and we're going to take some audio or written questions if they come up. Operator, the floor is yours.
Thank you, Lyron. If you'd like to submit a question, you can either type it in the chat box below or raise your hand by simply pressing star one on your touch-tone phone. Pressing star two will remove you from the queue. We'll start with any audio questions and follow that with some write-in questions as time allows. Once again, please press star one if you have a question or a comment. Okay, there are currently no questions in queue. At this time, we can turn to some write-in questions. Again, if you'd like to ask a question, please use the chat function. One moment while we poll for questions. Okay, we have no questions in queue. I'd like to turn the call back over to Lyron for his closing remarks.
Thank you. I would like to thank each and every one of you for joining our webinar. I hope you are leaving with a better understanding of the opportunity and the positioning for SpatialCore. If you have any further questions that we have not addressed today, feel free to contact us, and we would be happy to schedule a time to talk.
This does conclude today's webinar. Thank you for your participation, and have a wonderful day.