Nano-X Imaging Ltd. (NNOX)
NASDAQ: NNOX

AI Day 2021

Oct 27, 2021

Ran Poliakine
Founder and CEO, Nano-X Imaging

Good morning, good afternoon, and good evening, everyone, and welcome to Nanox AI Day. My name is Ran Poliakine, and together with me today is Erez Meltzer, my partner and our incoming CEO. Today we're going to try to take you through the journey of AI and tell you more about Nanox AI. We have great speakers, and we're very excited to share with you the thought process we've gone through in arriving at this Nanox AI idea. Before we start, I would like to ask Erez to share a bit of his thoughts on why AI is so important in today's healthcare domain. Please, Erez.

Erez Meltzer
Director and Incoming CEO, Nano-X Imaging

Ran, you know, we spoke about it, I think about a year ago: no matter how good our equipment and technology are, without software and AI capabilities we're not going to achieve the competitive edge that we want in the market. The only way to get the analytical tools we need to develop what we're trying to develop in the future is to get a source of AI and, especially, data, which is the critical factor in the ability to develop these AI solutions.

Ran Poliakine
Founder and CEO, Nano-X Imaging

Okay, Erez. Let's put it to work. Today we're going to have an amazing speaker. Actually, we're going to try and take you through a journey that starts with understanding what AI is all about, then narrows it down a bit into healthcare and AI, narrows it down a bit more into what Zebra Medical is doing, and then wraps it up for you with, you know, Nanox.AI and our story. Now, with great pleasure, let me introduce Michael Zolotov. Michael is the co-founder of Razor Labs. Hi, Michael.

Michael Zolotov
Co-Founder and CTO, Razor Labs

Hi, Ran. Thank you very much.

Erez Meltzer
Director and Incoming CEO, Nano-X Imaging

Hi. Hi, Michael.

Michael Zolotov
Co-Founder and CTO, Razor Labs

Pleasure to be here.

Ran Poliakine
Founder and CEO, Nano-X Imaging

Let me just say we've known each other for many, many years, and Michael is, in my opinion at least, one of the best AI pioneers in the world today. He's also a great storyteller. I will ask Michael to tell us a bit about AI, how it came about, and what it means, in a way that the audience, and we, can really understand. Please, Michael.

Michael Zolotov
Co-Founder and CTO, Razor Labs

Thank you very much, Ran. Pleasure to be here, and thank you very much. My name is Michael Zolotov. I'm the Co-Founder and CTO of Razor Labs. We're a public company that makes industrial machines smarter by assimilating deep learning into them. In these 20 minutes, I'm gonna speak about what deep learning is, as you know the most cutting-edge technology within AI, and how it works. We're gonna speak a bit about applications of deep learning and, eventually, what I think is the future of deep learning. What is deep learning? Our brains, our human brains, are composed of, on average, 100 billion neurons. Each of these neurons has dendrites that receive signals from the previous neurons and an axon that sends the neuron's signal on to the next neurons.

Each neuron gets input signals, performs some decision, and outputs a signal to the next neuron. Today, and actually for almost a decade now, we can actually simulate this process within computers. Essentially, mathematically, each neuron has several inputs that it weighs together, and it sends the result to the next neurons. The cool thing is we can build these neurons into what we call neural networks, and these neural networks can do pretty cool things. In this example, let's say that my goal is to identify faces, to recognize faces. This goal is given to the neural network, and the neural network will learn on its own the relevant features that are needed to perform the task.
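To make that weighted-sum idea concrete, here is a minimal sketch in Python/NumPy of a single layer of artificial neurons and a tiny two-layer network. It is an illustration only, not code from the presentation; the layer sizes, the ReLU activation, and the random weights are assumptions, and in practice training adjusts the weights and biases by gradient descent rather than leaving them random.

```python
import numpy as np

# One layer of artificial neurons: weigh the inputs, add a bias, pass through a nonlinearity.
def neuron_layer(inputs, weights, biases):
    return np.maximum(0.0, inputs @ weights + biases)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=4)                            # 4 input signals (e.g. pixel values)

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)     # hidden layer: 8 neurons
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # output layer: 1 neuron

hidden = neuron_layer(x, W1, b1)                  # each hidden neuron weighs all 4 inputs
score = hidden @ W2 + b2                          # the output neuron weighs the 8 hidden signals
print(score)
```

Stacking many such layers is what makes the network "deep."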

No human being will need to guide the neural network on what to learn; it will do it on its own. The first layers of the neural network will, on their own, learn low-level features such as blobs, edges, textures, and so on. In the middle of the neural network, mid-level features will start to form, for example a nose, a chin, an eye, and so on. Finally, in the last layers of the neural network, it will learn by itself to combine these parts of the face into the concept of a face, allowing it to differentiate between different faces and to recognize a face, which is the task it was given. The cool thing is that it's not only about images.

Essentially, any form of data that has any patterns in it can be put into the neural network, and once we give this neural network a task, it will do the task that we desire. It can be images, sound, videos, text, and obviously also medical images in any medical modality. Once we give it a task, it can classify objects, detect objects, measure some parameters. It will learn automatically the features that are needed to do that in the best possible way. Why is it interesting? I mean, why are we speaking about deep learning? Essentially, it gives better results than any other algorithm known to mankind. This is a really cool graph that I like. What you see here is the top 20 results, the leaderboard of a competition called ImageNet.

ImageNet is the most famous AI competition in the world. I mean, any AI engineer is familiar with it. Essentially, we take three million images of 1,000 different objects, and the goal is to be able to recognize the object. It can be a cup, a train, a dog, a cat, a keyboard, and so on. In the blue dots, you see the top 20 leaderboard contestants. You see that up to roughly 2012 we had hit a glass ceiling of roughly 75% accuracy. It means that the best algorithms in the world, the old-fashioned algorithms before deep learning, this was the glass ceiling they were able to reach. In 2012, you see the start of the revolution.

The worst neural network in 2012, which by today's standards is far worse than what we have now, was by far better than the best traditional, what we call computer vision, algorithm out there. Basically, deep learning was able to break this glass ceiling and also to surpass human competence in this competition. Basically, today, whether it's Google Photos, Google Translate, autonomous cars, your Facebook feed, or Siri, all of them are guided by deep learning because it gives you results that no other algorithm can give you. This is essentially what we call the AI revolution: the entrance of deep learning and its replacement of the older traditional algorithms. Let's speak a bit about the different applications of deep learning. Deep learning, as I said, is practically everywhere today. These are only a few of the examples.

In healthcare, we can use deep learning to optimize the schedule of operating theaters, making sure that the maximum number of patients get treated every day in the operating theaters in the hospitals. In natural resources, we can make sure that the materials we use to construct our world are extracted in the most efficient way, and we maximize the throughput of each of the machines in the pipeline of the extraction process. In utilities and power plants, we can optimize the routing and the resource allocation in these fields. In manufacturing, you see here a gearbox, a gear of Honda and Toyota, and you can actually see that defects can be automatically identified and removed from the production line. Here is a cool example of logistics, right?

Here the goal is what we call turnaround management: to minimize the time that the plane is on the ground. This can be done with any regular CCTV gate camera. What you see here is that deep learning not only recognizes in real time the different objects that interact with the airplane, but it can also identify the processes that are happening to the airplane. For example, it can be cargo unloading, catering, passenger loading, and so on. The goal here is that deep learning can automatically monitor this process, and once one of these processes overruns a predefined amount of time, it can alert the relevant stakeholders, who can address the issue at once. Another example is retail.

This is another sophisticated example of what can be achieved with deep learning, and you can see that with very regular CCTV cameras, not only can we detect the different joints, the key points of a person, but we can analyze their movements and detect places where the person took some item from the shelf and then returned it. Therefore, what we can do is create a heat map of the entire store and pinpoint the places where the shelf might need optimization by the relevant stakeholders. These are just a glimpse of the numerous applications of deep learning that are essentially everywhere today. Let's speak a bit about the future of deep learning and how I personally envision it. Today, deep learning has, in most applications, three main drawbacks.

It needs very large amounts of data to train. For example, in the ImageNet example, it trains on millions of images. The problem is that these images need to be labeled, right? Every image has a label of whether it's a car, a dog, a cat, and so on. In some applications, labeling is either very expensive or we simply don't have it. For example, in the medical field, labeling each example is a very costly process. More than that, it might also introduce human bias. One of the potential futures of deep learning is going from supervised learning, where each example is labeled by a human being, to unsupervised learning, where the task that the neural network is given doesn't need any labeling and the network can learn through trial and error.

I'm gonna give two examples of this. The first example is called reinforcement learning. Reinforcement learning is used today in robotics, in autonomous driving, in the optimization of complex industrial processes. The concept of reinforcement learning is that you don't need to label anything, but you need to define a reward, some KPI that the neural network tries to maximize using trial and error. The goal in this Breakout game, the cool game example, is basically to break as many bricks as possible in minimal time. You see that in the beginning, the neural network is not so good. It basically almost cannot do anything. After two hours of training, it plays really well, really like an expert.

It hits the ball all the time, and it really manages to get good scores. The cool thing is what happens after four hours of training. This is when it reaches superhuman capabilities. It actually understands that the best strategy to break as many bricks as possible in minimal time is basically to dig a tunnel on the side of the screen and send the ball to the other side of the tunnel, breaking many more bricks in minimal time. Imagine this kind of application in autonomous driving, in robotics, in complex industrial optimization tasks, and so on.
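The trial-and-error loop Michael describes can be sketched in a few lines. The toy "walk to the end of a corridor" environment, the reward of 1.0, and the hyperparameters below are illustrative assumptions; the Atari result he mentions used deep Q-networks, where a neural network replaces this simple lookup table.

```python
import random

# Minimal tabular Q-learning sketch of the reward-driven, trial-and-error loop.
N_STATES, ACTIONS = 10, [0, 1]                    # 0 = step left, 1 = step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.2            # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0  # the only reward is at the far end
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):                                # pick the best-known action, break ties at random
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    s = 0
    for _ in range(200):                          # cap episode length
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r, done = step(s, a)
        # Nudge the value of (state, action) toward the reward actually observed.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s = s2
        if done:
            break

print([greedy(s) for s in range(N_STATES)])       # learned policy: mostly "step right"
```

No example is ever labeled; the only signal the agent gets is the reward it discovers by trying actions.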

The cool thing is that if you don't need labeling anymore, you can build much larger models that can perform tasks in a much more sophisticated and creative way. Each of us has roughly 100 billion neurons in our brain. A typical neural network actually has a much smaller number of neurons, roughly tens of millions of parameters. You can see here also other organisms such as ants, worms, frogs, and so on. The cool thing is that once we don't need labeling anymore, we can start building much larger neural networks that are starting to be in the same order of magnitude as what we have in our own brains.

Literally three months ago, we had a real breakthrough when the largest neural network in the world was trained, having roughly one-thousandth of the number of parameters of what we have in our brains. It was trained on a huge corpus: the entire Wikipedia, many encyclopedias, and a very large portion of the Internet. The task was basically just to try and predict the next word in the sentence. A very simple task, yet it doesn't require any labeling. I want to give here a glimpse of two demonstrations that this neural network could perform. The first demonstration was question answering over Wikipedia, okay. What you see here is just regular browsing of the Wikipedia website.

You can just ask the neural network whatever question you want about this page. For example, why is bread so fluffy? The neural network can read the page, understand the question, and actually give you the answer from the page. For example, here it actually says, "The rapid expansion of steam produced during baking," and so on and so on. This is the reason why bread is so fluffy. Despite the fact that this question, in these words, obviously was not in Wikipedia, you can even click, and it will show you the exact location where it learned this new information. These are things that, up until a few months ago, we thought only humans could do.
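The model Michael is describing is not publicly available, but the same next-word-prediction idea can be tried with a much smaller open model. Here is a hedged sketch using the Hugging Face transformers library with GPT-2 as a stand-in; the model choice, the prompt, and the sampling settings are assumptions for illustration, not what was shown in the demo.

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, openly available language model trained to predict the next word.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Bread is fluffy because"
inputs = tokenizer(prompt, return_tensors="pt")

# "Generation" is just predicting the next token repeatedly and appending it.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```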

The last example that I'm gonna show, and I'm gonna stop there, is creative writing. This neural network was given a short text, and its goal was to complete it. The neural network was required to write a short story, so this is the text it was given, only a couple of paragraphs long, from an award-winning short story by Neil Gaiman, and it was asked to complete it from there. The text that I'm gonna show you in English is a text that was generated by this neural network without any intervention from any human being. This is what the neural network wrote: "I come out of the cocoon naked. The chrysalis is lying there empty.

My family and the doctors and the nurses all gasp and say, "You're beautiful." I am, of course. The transformation is complete. I am beautiful. I have perfect golden eyes, six arms and wings, like butterfly wings." and so on and so on. You see, this is beautiful text that was just generated with a neural network that just learned a very large chunk of Wikipedia and encyclopedias and a chunk of the Internet. I'm gonna stop here to let you dream from here on. That's it. Thank you very much, and thank you, Ran and Erez, for the opportunity.

Ran Poliakine
Founder and CEO, Nano-X Imaging

It's a pleasure to get your view on AI and also to learn how much more we can expect in the future. I would say that, in terms of AI, and from the time I've known Michael, this is a very, very fast-moving industry. What we need to do at Nanox.AI, of course, is to stay ahead of the curve, and people like Michael are actually helping us to do so and to stay rapidly updated, because it seems like things are happening every day that are very relevant to what we're trying to do. Thank you very much, Michael, for the time today.

Erez Meltzer
Director and Incoming CEO, Nano-X Imaging

Thank you, Michael.

Michael Zolotov
Co-Founder and CTO, Razor Labs

Thanks for having me.

Ran Poliakine
Founder and CEO, Nano-X Imaging

You know, Erez, what I want to do now is really to take us, you know, to narrow it down. Obviously, Michael's explanation is very, very broad and touches many aspects of our lives. We are in healthcare, and I think everybody's talking about healthcare and AI, and I would like to try and narrow it down a bit. For that, I would like to ask John Nosta to join us. John is, among many other things, a member of Google Health.

John Nosta
Founder, Innovation Theorist, and Digital Health Philosopher, NOSTALAB

You want me to share screen?

Ran Poliakine
Founder and CEO, Nano-X Imaging

Oh, let's see. John, can you hear me? Yeah, okay. John can hear me. John is a member of Google Health and also sits on a WHO tech expert team, and he's also a member of our advisory board at Nanox. He's also a futurist, and I would love to hear his point of view, and also about the human dilemma, because I think part of what we heard from Michael is also how AI ultimately can replace some of the human aspects. It will be great to welcome John now and to ask him to share his point of view. Hello, John. Hi.

John Nosta
Founder, Innovation Theorist, and Digital Health Philosopher, NOSTALAB

Hello, Ran and Erez.

Ran Poliakine
Founder and CEO, Nano-X Imaging

Hi, John.

John Nosta
Founder, Innovation Theorist, and Digital Health Philosopher, NOSTALAB

It's a pleasure. You know, I'm listening to Michael's comments and my mind is swirling about some of the fundamental aspects of both society and also clinical practice. I think that clinical practice today is really defined by that sense of learning, the need to learn, and I'm going to carry that theme along and hopefully close with a comment that connects it back to learning and some of the key insights. Let's talk about what's going on in technology and humanity at a tipping point, if you will. I want to make it clear to everybody that I wanna speak as a techno-optimist today. I know that there are a lot of issues, that there are many dystopian constructs about technology's intrusion into humanity.

In today's world, I wanna look at it from that perspective of a techno-optimist, which I think is a very, very realistic world view. Let's start with this idea, and I want you to take a good, hard look at this cartoon because even though it's a cartoon, it really captures what I believe is the underlying philosophical issue in the world today. We know the caption, "Is it friendly?" The real question here is who is saying it? Is it the robot and the robotic dog? Is it the human who is looking at this robotic creature? I think in the world today, it's probably both in some strange way. Traditionally, we may say that the robot is frightening, but the reality is that the human construct is rather frightening.

When we look at errors, particularly medical errors, the idea of humanity or the clinician driving clinical insights and decisions can be rather frightful. I think that's kind of the tipping point. That's the balance as to where we are. Of course, when people talk about that balance, they talk about it through this long period of human progress where, way at one end, we have the early G word. We have Gutenberg and the printing press and the dissemination of knowledge. Today we are on the other side of that curve where it's rapidly changing. Speed is one of the most important dynamics that is both engaging but also unsettling.

If you look at this from a more basic perspective, when you look at the data, you can actually see what's going on from the printing press up through driverless cars, and we see that this reality is very much practical and very real in the world we live in today. It's the accelerating growth of technology, and that's the path that Nanox is so uniquely on. It's that notion of digitization that allows us to do amazing things, from digitization to dematerialization to demonetization and ultimately democratization. That's throughout life, but also in clinical practice. Now, it's been said that data, particularly data in medicine, is both a blessing and a curse, if you will. Data is coming at us at a variety, a velocity, and a volume that is amazing. It's extraordinary. This data is on the level of something like...

Oh, let's compare it to the third fundamental window into humanity. The first was the telescope, and we all know what happened to Copernicus around that time. The second was the microscope, and the third is the emergence of data. Data is coming at us at a speed that is incredible, but it's not without some of its problems. The data is expressed in all sorts of interesting ways, and I wanted to go off on a tangent here to kinda put this into perspective, because it's not only just data, it's not only things like the genome, but it's also the exposome, the clinome, the proteome. There are all sorts of areas of our physiology that are now coming to life in the context of data and in the context of digitization. Let's talk about the exposome.

What are we exposed to today? Are we exposed to toxins? Are there issues around, let's say, pollution? Or maybe we can develop a means of digitizing and measuring the airborne viral burden that gives us a completely new risk assessment. The point I wanna make here is that almost any aspect of our lives is being digitized, and that's driving the acceleration in growth from a variety of perspectives. Now, that's the good news and that's the bad news. That data is in fact a profound tsunami of information, and today's clinician is not unaware of this. The emergence of things like the electronic medical record promises to take data and corral it, to take data and manage it.

The reality is that what we've seen is that data goes up and up and up, and there's really no end in sight to the amount of data that's coming at the clinician today. But the amount of information that is extracted from that data, and then the amount of knowledge that's extracted from that data, and further, the clinical utility of this data, the clinical utility of genomic testing, of sophisticated blood analysis, of liquid biopsies, is still traditionally low, even as we see the emergence of change. What we have is a fundamental gap. That gap is a problem, and that gap is at the heart of what I think some of Nanox's great thinking will help us manage. To capture that gap in the simplest of terms: it is impossible for the clinician to assimilate all the relevant clinical data into a cognitive workstream.

Let's take a breath and think about that for a moment. The amount of data coming at a cardiologist, an oncologist, a respiratory therapist, a nurse, any of the people involved in clinical care, is almost impossible to absorb. They cannot assimilate that cognitive workstream. The inevitable path forward must include technology as a partner in care. Now, just think about that for a minute, a partner in care. Back five, 10, 15 years ago, we talked about collaborative care, the essence of care as part of a multi-team approach. The doctor, the nurse, the psychiatrist, the spouse, the family. Today, that partner in care must include technology, and this really is the domain of AI. This is extraordinarily important, but what does it do? It shifts the reality.

The smartest person in the room, the smartest person in the hospital, the smartest person in the hospital room, is no longer the physician. It's technology itself. It's the computer, it's AI. This is a complicated psychodynamic which will shift some of the aspects of the way physicians think about their practice and the way they practice medicine. You know, the cognitive heavy lifting of medicine is one of the defining elements of being a physician today. It's a profession that demands that you be smart, that you recognize all these different data sources and resources. In today's world, that cognitive burden can be shifted to allow the clinician to assume a new role that is not necessarily less burdensome in terms of cognitive capacity, but expands the role so that the physician can take on a more interesting path forward, if you will.

I think that's ultimately the most important thing about AI in the world today. Some people say it's AI, some people say it's IA, intelligence augmentation, and that very well may be the clinical path forward. It gets there through stumbles, and those stumbles have to be managed accordingly. Now, beyond imaging and some of the things we're talking about today, AI is everywhere. I think Michael touched on that very, very quickly. He said that these processing technologies are everywhere. The reason I'm putting this up here is that I wanna show you the vast range, from robotics to medicine, to broader visualization, to drug discovery, to genetic-based solutions, to intelligent personal health records. These are going to actually change the game.

From a clinical perspective, what happens here is that the clinician can take on a role of expanded functionality and expanded capability. In the final analysis, it may very well make the clinician more human, because what technology does is allow the clinician to see better. Think about that. To see better. To see things on an X-ray. To touch better in microsurgery. To hear better using technology in the digitized stethoscope. The sensory engagement of a physician is actually enhanced, and that makes the clinician more human. The interesting thing about technology applied to all these areas is that we begin to see not only what's there, but we'll begin to see what's not there. Because all the data that comes into the pipeline, that is streamed through medicine, drug development, and all of the life sciences, is largely wasted.

When we look at a chest X-ray for pneumonia, we're focused on the pathology. When we look at an EKG for a first-degree AV block, we may not be concerned about the ST segment and ischemia. What we will begin to see is that we can use data to see what's not there and find important clinical inferences. This is a fundamental game changer. I think that, fundamentally, what we see is that there are articulated disadvantages to artificial intelligence. That there'll be a loss of jobs, a loss of humanity, if you will. I really disagree with that at face value, because the advantages of efficacy and precision and accuracy are profound and transformative.

The decreased workload, the increased ability to engage with patients, and certainly the improved outcomes and cost savings are what will make AI not only an option, but absolutely a clinical imperative. I wanna wrap things up quickly here with a quote. I wanna go back to Michael, because Michael talked about learning, because at the essence of clinical care is in fact learning. Alvin Toffler, who wrote Future Shock, said something that is very, very resonant today. He said that the illiterate of the 21st century will not be those who cannot read or write, but those who cannot learn, unlearn and relearn. I think that's the essence of medicine today. As we move forward, as we see these technologies become available, physicians are gonna have to become comfortable with taking off their traditional stethoscope, which is 200 years old, and using a digital stethoscope.

They're going to have to be comfortable with a differential diagnosis that is not something they learned in residency, but something that is technology-generated and might give them a more unique path into the clinical course. So that's our challenge today. That challenge is about unlearning and relearning, and that's the promise of tomorrow. Thank you.

Erez Meltzer
Director and Incoming CEO, Nano-X Imaging

Thank you, John. You were actually talking about the tsunami of data, and it seems that this tsunami can be turned around to have a real positive impact on our life and the future of healthcare. Ran, the Toffler quote that John gave us, I think it was said more than 30 years ago.

Ran Poliakine
Founder and CEO, Nano-X Imaging

Yes.

Erez Meltzer
Director and Incoming CEO, Nano-X Imaging

We are now moving into that "now." Yeah, we're living in the future already. Thank you, John. That was very, very interesting, and I think very inspiring. You know, looking at the future, I think it is very interesting, because many of the things that you're talking about, we're trying to turn into reality, specifically with the proposed acquisition of Zebra Medical Vision, where they're actually doing this today, taking this huge amount of data and turning it into something actionable with AI. For that, I would like to invite Orit Wimpfheimer, who is the Chief Medical Officer of Zebra Medical Vision, to come and share with us a little bit of her experience and background.

I think it will narrow things down for us from what you said into practicality. With that, Orit, please join us and share with us your experience at Zebra Medical. Orit.

Orit Wimpfheimer
Chief Medical Officer, Zebra Medical Vision

Hello. I'd like to first introduce myself. I'm Dr. Orit Wimpfheimer. I'm a diagnostic radiologist. I trained at NewYork-Presbyterian Hospital, and I've been a radiologist for 20 years. I founded my own boutique international teleradiology company 20 years ago that still functions today. About three years ago, I decided that I wanted to explore the world of AI, because AI is the future. As teleradiology was the future 20 years ago when I founded my company, now AI is, and I really wanted to understand how I could participate in this revolution and possibly even influence it.

That's how I got into the world of AI, and I'm really excited to be part of this new interaction between Zebra Medical Vision and Nano-X Imaging, because I really believe that the symbiosis between the two companies is gonna be a lot more than one plus one. I'd like to speak today about promoting improved healthcare globally using AI. Zebra Medical Vision is a leading AI company, a startup based here in Israel, with about 70 employees and significant strategic investors. What we're most proud of is our 22 granted patents, our many research publications, and our many FDA clearances. We're the only AI company that already has eight FDA-cleared products, one of which I will highlight in the coming slides. That's a lot. Creating AI for medical imaging is challenging.

We've overcome the challenges over the years, and we know how to do AI really well. We're going to use that knowledge and ability with Nanox now. The reason we're so good is that we are built on a significant amount of data. The more data you have to train the models, the better your models are going to be. Data is also important for the ability to understand what's out there. In order to do that, you need diversity of data. We have data from U.S. institutions, Israel-based institutions, as well as India-based institutions, allowing for very heterogeneous datasets: different ethnicities, different sexes, different ages. We have a 10-year history for most of the patients that we have, including reports.

That is the foundation, that's the bedrock of our company. To train AI, we use that data to make really sophisticated products that can work effectively across multiple different sites. Our AI algorithms have performed really well in multiple different institutions, really throughout the world. We started our AI company with the concept of doing triage algorithms, which is where all of AI in imaging was headed originally. We have those products currently; they're deployed in lots of different hospitals, and they're working well.

However, we have learned over time that the best benefit and best use for AI is really scale: analysis of the data that you can take in to provide additional new insights for the clinician to be able to treat the patients. About a year ago, we redirected the entire company along that front and focused on population health. Currently highlighted on the slide are our bone health solution and our cardiac solutions. Our products are able to identify early biomarkers on CT scans to really highlight chronic conditions and enable patients to be treated. I'm gonna spend more time on that in the coming slides. Now that we've joined Nanox, we're going to go with a two-pronged approach.

We're going to follow the traditional Zebra Medical Vision approach, where we take CT images and allow AI to provide insights, biomarkers available on those CT images, such as bone information for osteoporosis and vertebral body compression fractures, cardiac information for cardiovascular disease, and now fatty liver, in order to predict which patients are going to go on to have nonalcoholic steatohepatitis and cirrhosis. That's the traditional modality that we're currently working on, really highlighting population health. At the same time, we're going to start developing additional algorithms to improve the ability of the Nanox.ARC, and I'll spend more time on that in the later slides. At the end of the day, what are we trying to do in medical imaging? We're trying to help patients.

If AI can do that in a more efficient, effective way, well, then we did our job. There's a lot of information present on a CT scan that any patient gets when they go to the doctor and need a CT scan. Even if you're diagnosed with potential pneumonia, you have a cough, you go get a CT scan. The radiologist is focused on finding your pneumonia, maybe looking for your lung cancer. There's a lot of additional data present on those CT images that often gets ignored by the radiologist. Now, I'm a radiologist, and I don't try to ignore anything, but sometimes you're focused on the initial acute problem, and you don't necessarily always notice the subtle findings of chronic medical conditions.

In addition to that, even if the radiologist is really attuned to every little finding, the information gets into the body of the radiology report and often gets stuck there. The clinicians don't tease through the extensive verbiage of the radiology report to pull out the information about the chronic conditions and understand from that where the patient should go next. What we're trying to do is change that paradigm. Chronic health conditions are really the biggest problem in medical imaging today. We know how to treat appendicitis. We're really good at treating pneumonia. If you have an acute problem and you go to your doctor, the doctor knows what to do.

The problem is, as stated by both the World Health Organization and the CDC in the United States, we're not so good at really highlighting chronic conditions, because they tend to be in the background. They tend to be asymptomatic when they first appear. Really, the majority of the morbidity and mortality throughout the world, both in poor countries and in very sophisticated countries, comes from chronic health conditions. And also the money. People are spending lots and lots of money on chronic health conditions because that's what's affecting healthcare. We're not doing a very good job at treating these conditions, and we wanna help change that. What's really fortuitous for medical imaging and for AI in general is that the chest and abdomen CT scan is a very, very ripe dish for us to work with.

There are a lot of biomarkers present in the chest, abdomen, and pelvic CT scan that, if AI can pull them out of the imaging data and highlight them to the clinicians to get patients on the appropriate treatment path, give us a lot of information with which we can really direct patients to appropriate care. We would like the chronic conditions that are visible on the imaging, data that is often left on the imaging and wasted, or that sits in the body of the radiology report and gets stuck there, to be highlighted, these chronic conditions surfaced, and the patients put on the appropriate treatment path that they need.

We already discussed cardiovascular disease, which I will highlight in this presentation, but we have additional algorithms for osteoporosis to be able to prevent hip fractures, and that could be a topic for a different presentation. We're also working on liver disease as well as pulmonary disease. Now, how does that work? Okay. Any CT scan that's acquired for any reason at all, whether it's trauma, pneumonia, or anything else, is just data that's available for use. That data is then sent to the PACS, the radiology picture archiving and communication system. The radiologist then reads the study. What Zebra Medical Vision, and now Nanox AI, would do is scan those images, run them through all the algorithms that we have, find the chronic conditions visible on those images, and highlight them.

It's all a cloud-based system, easily deployable, easily integrated into the typical standard radiology workflow and radiology modality, and it highlights that information so that it's available both for the radiologist at the radiology viewer and for executive dashboards. Executives of large institutions can really understand the patient population and the patient risk, and try to manage their budgets and their risks according to the information that they have available to them. Now, I'm gonna spend a few slides highlighting cardiovascular disease. It happens to be the most recent FDA approval that we had at Zebra Medical Vision. From my perspective, it's one of the most important ones, because cardiovascular disease is really the leading cause of death worldwide. People often don't know they have cardiovascular disease till they have their first heart attack, but why do we need to wait for that?

There are biomarkers available on the CT scan to highlight patients who are at moderate or high risk for a heart attack in the next few years, and we can pull those people out of the community and get them onto the appropriate treatment path that they need, because we have medication, and we have cardiologists who know how to treat cardiovascular disease in its earlier form in lots of different ways. We just need to find the patients, get them to the appropriate treatment pathway, and take care of them. Cardiovascular disease affects low, middle, and high-income societies. It affects men, it affects women. We're all affected by cardiovascular disease. This is the example of the algorithm that we have at Zebra Medical Vision. What it does is quantify the amount of coronary artery calcium that's present in the heart.

The amount of coronary calcium, based on numerous publications, is a very good biomarker for your risk of cardiovascular disease, in addition to other factors that you can take into account, such as your age, your family history, your social history, and so forth. Coronary calcium happens to be the leading biomarker. Patients who are getting a CT scan for COVID pneumonia or for a rib fracture have coronary calcium that the radiologist either ignores or mentions briefly in the report, but nothing happens to the patient, and the radiologist certainly can't quantify it and stratify patients into low, moderate, and high risk, because radiologists just eyeball the calcifications. They have no real way to measure it. Now that's going to change.

Coronary calcium can be measured using AI, and we've been able to stratify patients into low, medium, and high burden categories, and the burden correlates very, very significantly with your risk of having a heart attack in the next five years. These patients should be pulled out of the community and directed to their primary care physicians or preventive cardiac healthcare units to be able to get them the treatment that they need. At Spectrum Health, we did a study just to show the impact of what our algorithm can do. 549 random non-contrast CT scans were provided. We ran them through our algorithm, and we worked hand in hand with the preventive cardiologists.

Not only did we see that our algorithm was extremely accurate, at 94%, which is extremely accurate for an algorithm, but the cardiologists were really shocked to find that 26% of those 449 patients were at severe high risk for a heart attack in the next five years, and an additional 17% had a moderate coronary calcium risk. That means that nearly half the people identified in this study, just randomly, on random CT scans, were at risk for a heart attack in the next five years. Now, of all the people who are here listening to this program, if you take the number of people here, about half of you have coronary calcium putting you at risk for a cardiovascular event in the next five years, and you don't even know it.

This kind of algorithm is supposed to change that paradigm. If you take all the CT scans on the left-hand side of the slide, you allow Nanox.AI to run on them on a regular, continuous basis, always looking at all the CT scans that are available, highlighting the information, and presenting it to the radiologist as they read the study. The radiologist confirms the finding. We just need to make sure that it doesn't get stuck in the body of the report. What we created is an automatic ability to insert the text into the impression of the radiology report, highlighting it so that the clinician then has a recommendation as to what to do with the patient after that.
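As a rough illustration of those two steps, stratifying a coronary-calcium measurement into a burden category and then phrasing a line for the report impression, here is a hedged sketch. The threshold values, category names, and report wording are assumptions for illustration only, not Nanox.AI's actual cut-offs or report language.

```python
# Stratify an automated coronary-calcium measurement and draft an impression line.
def calcium_category(calcium_score: float) -> str:
    # Illustrative cut-offs only; real products validate thresholds clinically.
    if calcium_score < 100:
        return "low"
    if calcium_score < 400:
        return "moderate"
    return "high"

def impression_line(calcium_score: float) -> str:
    category = calcium_category(calcium_score)
    recommendation = {
        "low": "No additional cardiac follow-up suggested on this basis alone.",
        "moderate": "Consider referral to primary care for cardiovascular risk assessment.",
        "high": "Consider referral to preventive cardiology for risk-factor management.",
    }[category]
    return (f"Incidental coronary artery calcification, {category} burden "
            f"(automated score {calcium_score:.0f}). {recommendation}")

print(impression_line(527))
```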

We're working hand in hand with cardiologists to phrase that recommendation and direct patients either to the primary care physician or to the preventative cardiac units, because ultimately, if we don't get the patient all the way to that prevention box, to that red box of the appropriate medication, the appropriate intervention, we didn't really do our job, and that's our goal. Our goal is to get through the entire workflow to get the patient the care that they need. Who benefits? Well, obviously, firstly, the patient benefits. We wanna try to decrease cardiovascular disease, decrease the rate of heart attacks, and that's the best. Ultimately, it's also a business proposition.

Patients who have coronary artery disease can be very costly to the system, and if we intervene, if we treat patients, even with the simplest medication, cholesterol-modifying agents, there are many, many published papers showing that we significantly decrease the risk of a cardiovascular event, and therefore we decrease the cost to the system, because there's a price tag associated with every heart attack, with every admission for a heart attack, and that's what we're trying to avoid. Who really benefits? Well, the patient, as I said, is number one and at the forefront. I'm a big believer. I'm a radiologist, I'm a physician. But ultimately, from a business perspective, it's the providers who provide the care. They improve patient retention and improve patient care.

The reimbursements for the medical workup are available, because a lot of the patients need a lot of imaging and continued interventions, whether it's angioplasty or stenting or whatnot. We're actively working on creating a CPT code that enables the radiologist to get paid for this additional analysis. Who else benefits? The payer benefits, because the person who's holding the medical risk is the one who's going to be saving money by decreasing the morbidity and the mortality associated with cardiovascular disease. In addition to that, at least in the U.S. market, coding for chronic conditions is a really underappreciated area of financial incentives. There are chronic conditions out there.

Patients are not being coded properly for their chronic conditions, so medical risk holders are actually paying for those patients without getting the reimbursement that they would normally get if it were all coded correctly. We're able to help provide additional coding mechanisms for that as well. For the IDNs, the integrated delivery networks, well, the IDNs are both the providers and the payers, so they get to enjoy both sides of this equation and both sides of the benefit. Because we saw such a need for this improved coding within the medical healthcare system in the United States, we actually have a separate arm within Zebra Medical Vision that scans the images to highlight chronic medical conditions, so that they get coded properly in the healthcare system and there's revenue attached to taking care of these chronic conditions, and you get the money that you deserve for taking care of such patients in a capitated environment, because they do have these chronic conditions.

I'd like to move now to Nanox and how I really believe the symbiosis between Zebra Medical Vision and Nanox could be such a great avenue. Whereas population health before was focused on a modality that already exists and is very commonly in use, population health takes on a different meaning when you talk about the Nanox.ARC, which is currently under development. The benefit of the Nanox.ARC is to allow for efficient, affordable imaging, placed in both the developing and the developed world, to really improve accessibility, primarily for chest and extremity X-rays, which will be tomosynthesis.

Now, why is that important? Because, you know, we live in the United States, and we're used to having a CT scan or X-ray at our disposal, but most of the world is not like that. 2/3 of the world has no significant access to medical imaging, and the ability to enable accessibility to medical imaging is how we're going to improve population health throughout the world. But in order to do that, if you create images, you need to be able to have someone read the images. Now, I'm a radiologist. I know that there are not enough radiologists in the world. Radiologists are in very, very short supply.

The number of people finishing radiology training per year has not increased in the last 20 years, so if you create so many more images but you don't have new radiologists coming in quickly enough, you're going to create a bottleneck, and that's what the AI wants to solve. How are we going to do that? We're going to use our very vast and deep AI technology within Zebra Medical Vision. We're going to train on Nanox images as well as on synthesized images to be able to really highlight the Nanox chest X-rays, tomosyntheses, and extremity bone X-rays, because that's really the bread and butter of radiology.

In most places, if you can get a chest X-ray or an X-ray of your wrist or your arm or your hip, then you've gone a long way toward providing appropriate imaging for medical care. We're going to be able to manage those images. Now, how do we manage the images? We can prioritize and categorize the images appropriately by identifying all the normal cases and separating them out so they don't necessarily need immediate radiology evaluation, and also by highlighting the abnormal findings in the remaining cases, to really make the throughput for the radiologists more efficient. With the help of USARAD, which is also joining Nanox, we hope to make this entire workflow and system a very efficient mechanism.
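A minimal sketch of that prioritization idea follows: score each incoming study with an abnormality classifier, send likely-normal studies to a routine queue, and push likely-abnormal studies to the front of the radiologist's worklist. The classifier, the threshold, and the study identifiers are illustrative assumptions, not the actual Nanox or Zebra workflow.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Study:
    priority: float                               # lower = read sooner
    accession: str = field(compare=False)

def triage(studies, abnormality_model, threshold=0.2):
    urgent, routine = [], []
    for accession, image in studies:
        p_abnormal = abnormality_model(image)     # probability the study is abnormal
        if p_abnormal >= threshold:
            heapq.heappush(urgent, Study(priority=1.0 - p_abnormal, accession=accession))
        else:
            routine.append(accession)             # still read, but not prioritized
    ordered = [heapq.heappop(urgent).accession for _ in range(len(urgent))]
    return ordered, routine

# Toy usage with a stand-in "model" that just looks up a fixed score per image.
fake_studies = [("CXR-001", "img-001"), ("CXR-002", "img-002"), ("CXR-003", "img-003")]
fake_scores = {"img-001": 0.05, "img-002": 0.91, "img-003": 0.40}
ordered, routine = triage(fake_studies, lambda image: fake_scores[image])
print(ordered)   # ['CXR-002', 'CXR-003']  -- most likely abnormal first
print(routine)   # ['CXR-001']
```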

These are just examples; it's still being developed. I enjoy watching the images every day as I get more and more of them from the Nanox, both of the chest X-ray and, in this case, of the hand. The nice benefit of tomography versus a 2D X-ray is that you really have a lot more information within the X-ray, and so radiologists will need to learn how to understand this new modality, which doesn't currently exist in this form in the hospital system, although the technology of tomography has been around for many, many years. I look forward to joining Nanox on this journey to really learn these images, improve their availability and reading using AI, and make population health more effective. Once again, we really believe in population health, both at Nanox and at Nanox.AI and Zebra Medical Vision, in two different ways.

One way is using modalities that currently exist, by highlighting chronic medical conditions and getting patients to the appropriate care that they need, pulling the biomarkers out of the CT scan and getting patients the medication and the treatments that they need. The other direction is by enabling mass deployment of imaging so that every person around the globe can have access to their bread-and-butter medical imaging, and using AI, we can make that effective. Thank you very much.

Ran Poliakine
Founder and CEO, Nano-X Imaging

Wow. That was amazing.

Erez Meltzer
Director and Incoming CEO, Nano-X Imaging

Really amazing.

Ran Poliakine
Founder and CEO, Nano-X Imaging

I mean, right? It's really amazing to see everything come together, from the idea of AI and machine learning all the way to actionable clinical decision-making based on this technology, and I think that's part of what we tried to achieve today. Now I will try, Erez, together with you, to put a frame around all of that. You know, at Nanox, we say together for better health, and that's not a cliché. I mean, we really mean it. So far, we talked about it in the context of a coalition, partnerships, et cetera. I think what we introduced to you today is really gathering the technology to help us do that.

'Cause it's very clear that, together with technology, we can do a better job. Let me just remind everybody of what you all know: we are all about accessibility and affordability. To do that, we need to address the real issue. The real issue is that while X-ray technology has existed for 126 years, today there is not sufficient access to medical imaging throughout the world. Simply not enough machines. They're too expensive to buy, to maintain, to operate. Part of what we need to do is to populate the universe with many more, I would say, sensors, to use John Nosta's word, more sensors that can sense and take images.

You know, even if we have so many of those devices, it's not enough, because the next gating item that Orit just spoke about will be radiologists. There are not enough radiologists. We are going to have a huge amount of data. We need to bring in some technology to make sense out of it. Again, going back to John Nosta and going back to what Orit said, it's impossible to deal with so much. I mean, data is okay, but clinical decisions and knowledge out of this data are something totally different. That's where AI comes into play.

The reality today is that all the buzzwords in radiology, I would say, of big data, AI, cloud platforms, and, you know, health economics and patient outcomes that everybody's talking about, are not really utilized to their fullest because of the lack of accessibility and the ability to use technology there. In other words, early detection, which is a very, very attractive and popular phrase, remains theoretical these days. This is exactly what Nanox is trying to change. Let me just summarize the approach, because I believe Orit and John and Michael did a better job than us really talking about it, so everybody will have a clear idea of our approach and how we're addressing this problem. On the left side, it's deep technology. We have deep technology.

It's nanotechnology on a chip. We call it Nanox.SOURCE and Nanox.ARC, and that will address the lack of medical imaging devices. We have plans to populate the universe basically with not less than 15,000 units by the end of 2024, and we are in very good shape to meet this goal right now. On the right side, however, we have this AI solution that Orit spoke about, and this is where we bring in technology and actually give answers to two issues. The first one is the fact that there are not going to be enough radiologists to deal with this data. We are building what we call the robo-radiologist. These are little radiology robots that will look at the images and determine whether the image is normal or not.

If it's not normal, it will go next to a radiologist, who may be from our network, USARAD, or just an external radiologist. The other side is using all the data that exists today from CT scans and other modalities, utilizing what Zebra Medical Vision has, to identify a problem in a group or in a specific patient. That's something that Orit spent time on. All in all, if we take this approach, we believe that we can take this industry many steps forward. With that in mind, just a little bit of architecture, because so far we have talked about, mainly focused on, the left side of this slide, which is really the Nanox.ARC.

Let me just say the obvious: each one of those connected devices is connected through the internet to the cloud. In the cloud, there are a lot of things happening, from AI to sharing with experts and the EMR, et cetera, that enable us to come to an actionable decision for the specific patient, but also to increase the totality of knowledge. That's what John was talking about in terms of the ability to say something significant about a group of patients. That's very, very important. This is really where I want to take it back to the AI. This is, you know, today is Nanox.AI day. I would say that Erez, my partner here and the incoming CEO, was pushing really hard.

Even one year ago, he understood before me, I must say, that this AI thing is a significant need for the company. I would like to let Erez share his view and take us through a couple of slides here.

Erez Meltzer
Director and Incoming CEO, Nano-X Imaging

Thank you, Ran. I think that in the last 12 months we were searching for the AI company that would become part of our product offering and solution offering. I think it was more than a handful of companies that we were looking at. The question is why Zebra specifically was the chosen one. Other than the fact that we found a great management team as well as great talent, really great people, top-of-the-line developers, algorithm people, and AI and deep learning people, that is one reason. In addition, and not less important, there is the big data, okay?

I think Zebra has probably the biggest, or one of the biggest, data sources for images. So think about the numbers, okay? We're talking about more than 500 million images in the database. Second, they come from more than 30 million patient records. It actually represents about 10 years of history for these patients. Altogether, it was generated across multiple modalities. I think that, in addition to what we are going to generate as part of the distribution of the ARC system around the world, this will enable us to make the real change, not only for population health, but also for the other solutions that we're going to provide.

In addition, Zebra already has eight FDA clearances, and they already have 10 CE marks. Altogether, they are regulated. By the way, I would also mention the HIPAA compliance that their solutions have. This basically gave us confidence in this idea. Last but not least, the third element is the platform. These people have built, over the past few years, a great platform that will be able to take all this data and create the analytical tools that will enable all we are trying to do. Now, the question may be asked, okay, who's going to read all these scans? That's the reason that USARAD was also acquired, and MDW.

These two companies will give us the hundreds of radiologists that will enable us to complete the picture. We do hope that in the next two weeks, we're going to do the closing of these two acquisitions. You see actually the implementation of part of it, but more to come.

Ran Poliakine
Founder and CEO, Nano-X Imaging

Thank you very much, Erez. I think it's really interesting, because for the first time we actually shared a grand vision, which is not only the deep technology, but also combining that with the AI solution, and hopefully we managed to explain today why AI is part of our future. The combined solution, alongside USARAD and MDW, may give a full solution for patients around the world. That's really coming back to what Nanox is all about. You know, we have this phrase that is more and more relevant every day: we would like to scan globally to protect one's health. We scan globally through accessibility, and we protect one's health through the huge amount of data that we narrow down to knowledge and actionable clinical decisions with AI. That's very, very important to us.

With that in mind, Erez, I would like to thank you for joining today. I would like to thank the speakers who joined us today. And I would like to thank the audience who spent time with us to learn more about Nanox AI, and simply to tell you that the next time we're going to meet you is going to be at the RSNA. We have a huge, exciting show planned at the RSNA, and we would like to really invite you officially now to join us, because we have so much to give and show from now on. Thank you very much for joining us today.

Erez Meltzer
Director and Incoming CEO, Nano-X Imaging

Thank you for being part of this journey.
