IP Group Plc (LON:IPO)

Investor Update

Sep 19, 2023

Moderator

Good morning, and welcome to the IP Group PLC Deeptech Presentation. Throughout this recorded presentation, investors will be in listen-only mode. Questions are encouraged and can be submitted at any time via the Q&A tab situated in the right-hand corner of your screen. Simply type in your questions and press Send. The company may not be in a position to answer every question it receives during the meeting itself.

However, the company will review all questions submitted today and publish responses where it is appropriate to do so. Before we begin, I'd like to launch the following poll. I'd now like to hand you over to Mark Reilly, Managing Partner of Technology. Good morning, sir.

Mark Reilly
Managing Partner of Technology, IP Group

Thank you, Lily. Good morning, everyone. Thank you for joining this IP Group webinar. This morning's webinar is an insights session, so this is all about how we're thinking about future technology, the areas in which we're currently investing, and the areas where we're looking to invest in new opportunities.

And today is presented by the IP Group Deeptech team, whose mission is to deliver value through growing innovative companies that enable and secure the digital economy, create new human capability, and generate prosperity for all. Since we formed the partnership back in 2018, the Deeptech team has been doing this very successfully and has delivered a great deal of success for the group.

You hopefully will be familiar with the fact that we sold WaveOptics, an augmented reality company, to Snap for over $500 million a couple of years ago. That was the Deeptech team's exit, and there's a list of others as well. In addition to that, we sold Yoyo Wallet to Teya, we sold Process Systems Enterprise to Siemens, and Re:infer last year to UiPath. And the team has also built a list of companies that we invested in at a very early stage, from seed stage as first investment, that are now 100 million+ companies.

So, a great track record of success in that team. We've honed our focus over the period of the team's existence to a set of core focus areas, which are summarized on this slide. First of all, the team is focused on applied AI, looking for uses where artificial intelligence can be applied to data sets that are well suited to generating insight, insight that can be commercially valuable and can deliver a value proposition to an end customer.

And we have lots of companies in the portfolio now that are doing that and that have a proven value proposition, and are generating revenue. And the second area of focus is on the changing nature of our interaction with machines. We think there's this really interesting area of how we're moving away from using keyboards and using mice, and the interaction with the computer is much more like an interaction with a human being, where we're using gesture and voice and expression to interact with computers.

And those two core evolutions, in the way that we interact with compute and the way that we use computing power, have implications for the underlying infrastructure, our communications networks, and the compute requirement supporting those use cases. And so those are two other areas of investment focus for the Deeptech team, and they're really the focus areas that we're talking about today.

All of this demand that artificial intelligence models and algorithms place on compute power amounts to an ever-growing thirst for deeper and more capable compute, which consumes a lot of energy. And that also places a burden on our communications networks, where we're trying to shift ever larger quantities of data.

And today, we're gonna talk about some of the innovations that are required in order to address that ever more pressing need. And to do that, I will introduce you now to Dr. Lee Thornton, partner in the Deeptech team. Over to you, Lee.

Lee Thornton
Partner of Deeptech, IP Group

Thanks, Mark. Thanks for the introduction there. Yeah, so I work alongside Mark at IP Group, as one of the partners in the Deeptech team. We've organized two speakers today to provide some interesting insights into the different aspects of the challenges and opportunities that Mark outlined around, you know, data transmission, data movement, and data generation. These are some of the biggest challenges as the world moves to greater adoption of AI in terms of our computing processes and computing power.

The first speaker will focus more specifically on data transmission and distribution, with the world, you know, now well into the zettabyte era, so 10^21 bytes of data being generated and stored in data centers, at the edge, and on devices. And data is often used in places where it is not generated. So there's a huge transmission challenge, both in terms of network management and optimizing network infrastructure.

And as Mark says, our thirst for data remains unquenchable, and that's only gonna get more problematic in terms of how we shift data around the internet, to the user and then back to the processor. That will be the focus of the first speaker. The second speaker will share some of the challenges associated with our current computing paradigm, with the way we process shifting from CPUs to GPUs, and then further on to next-generation computing hardware.

Again, as the demand for AI models and different forms of processing continues to increase and change, we need to think of new solutions, especially in light of, you know, silicon and Moore's Law potentially coming to an end, with silicon nodes getting down to 4 nm. There's not much more room to go. We need new paradigms and new ways to process data, generate insights, and, of course, address the energy challenge. There's only so much energy we can consume with all of this while still being responsible custodians.

Anyway, I'll turn straight to our first speaker. I should say we will do a Q&A panel session at the end, so if you have any questions at any time, please put them in the Q&A box, but I will hold the questions until both speakers have presented, and then we'll come back with Mark and the two speakers to answer some of those questions and have a bit of a debate, as I say. So please feel free to add your questions. The first speaker we have up is Seb Cizel. He's the machine learning lead at Deep Render.

He will be giving his view on video compression technology: past, present, and future. Seb is originally from Slovenia and has a PhD in string theory from Oxford. After his PhD, he was looking for a career with more concrete, real-world impact, and was converted into an AI compression enthusiast by the team at Deep Render. Seb now leads a 20-person team in the engineering department at Deep Render. So we're delighted to have him here to provide his insights. Fantastic! Seb, over to you.

Seb Cizel
Head of Engineering, Deep Render

Perfect. Thanks, Lee. So yeah, hi, everyone. I'm Seb, and as Lee said, I'm the machine learning lead here at Deep Render. First of all, I'd like to thank IP Group for organizing this webinar and facilitating this discussion on the future of the internet and how AI can influence that future. We at Deep Render are very excited to be pushing the envelope of video content delivery by developing the world's first entirely AI-based video compression method.

But before we jump into AI, we can ask ourselves the fundamental question: why video? Well, video calls are a staple of our lives. The pandemic really introduced the concept of interacting solely via video, and, you know, personally, having family scattered around the world, the challenges of video content delivery became very clear very quickly through that dreaded "I think you're frozen" message. I think we've all experienced this at various points over the past couple of years.

So we interact a lot via video. And when we zoom out from just video conferencing, video data actually dominates the data transmitted over the internet, by far. Up to 70% of all internet traffic is video, and this is rapidly growing year on year. This underscores the point that dealing with video data effectively at a large scale is essential for the stable growth of the internet. The goal of my talk today is to present the underlying technology that makes this possible, the challenges of past technologies, and what AI has to offer in the future.

Let's move on to underscore some arguments for why compression is necessary. One of the key takeaways from this presentation is that compression is actually everywhere. Compression facilitates most of the data that we, as individual users, consume on a daily basis. You are most likely interacting with something that has a compression algorithm running in the background every single day, and most likely for hours.

This is due to the fact that most data that we consume and generate is video. So anything from streaming Netflix, to video conferencing, to cloud gaming, to this very webinar, is backed by video. So in the background of this webinar, your computer is churning away, running thousands of compression algorithms every single minute to ensure that all of the slides are actually arriving on time to your screen. Right now it seems to be doing a good job, but if it starts failing, we'll all feel the failure very quickly.

On a more global scale, these algorithms are run trillions of times every single day. So compression really is the engine that underlies data transfer over the internet. But why is compression, and specifically video compression, so necessary? The fundamental underlying fact is that video files are very large. To illustrate just how large they are, we can consider a simple example.

Let's take one hour of raw, uncompressed 4K video, by which I mean video that comes straight out of the camera. Suppose you want to stream a Netflix movie that consists of one hour of raw 4K video. How much data is that actually? It turns out to be about 3 TB of data if you just wanted to store it on your computer. 3 TB is a lot of data. It definitely doesn't fit on my laptop; it would take, you know, five or six average laptops. It's very hard to even keep this data in its uncompressed form on disk.

Suppose you wanted to stream it in real time without any buffering or lagging. What would that require? Well, it requires an internet connection of about 7 Gbps to stream reliably. That is a massive amount of bandwidth. It turns out to be about 100x the average broadband connection in the U.K. at this moment. And this pain is not only felt by the end user; it's also felt by content distributors. Every single gigabyte that has to be transmitted incurs a cost.

That cost is the bandwidth cost. Content providers pay a few cents for every gigabyte that leaves their storage servers. So if you multiply that by user bases of 200 million, suddenly it's very easy to reach bandwidth costs running into billions of dollars. That's what underlies the fundamental need for efficient compression: an efficient way to reduce these file sizes so they can be transmitted over the internet.
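[Editor's note: a back-of-the-envelope sketch reproducing the figures above. The 8-bit RGB and 30 fps assumptions are ours, not the speaker's; higher bit depths or frame rates push the bitrate toward the ~7 Gbps he cites. The $0.02/GB and 200M-user figures are illustrative.]

```python
# Back-of-the-envelope figures for one hour of raw (uncompressed) 4K video.
# Assumed: 3840x2160 resolution, 8-bit RGB (3 bytes/pixel), 30 frames/second.

WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 3              # 8-bit R, G, B
FPS = 30
SECONDS = 3600                   # one hour

bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
total_bytes = bytes_per_second * SECONDS

print(f"One hour of raw 4K: {total_bytes / 1e12:.2f} TB")            # ~2.69 TB
print(f"Streaming bitrate:  {bytes_per_second * 8 / 1e9:.2f} Gbps")  # ~5.97 Gbps

# Distributor-side egress cost: a few cents per gigabyte, multiplied
# across a large user base.
cost_per_gb = 0.02
users = 200_000_000
cost_per_stream = (total_bytes / 1e9) * cost_per_gb
print(f"Egress cost per raw stream: ${cost_per_stream:,.0f}")
print(f"Across {users:,} users: ${cost_per_stream * users / 1e9:.1f}B")
```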

Let's see what the past solutions to the problem of reducing file sizes have been and what challenges they face. In the past, a lot of algorithms have been developed to compress videos, generally referred to as traditional compression. Traditional compression has iteratively evolved over the last five decades. The key objective is to reduce the file size while preserving the visual quality of the video. This is common to all forms of compression. You want to preserve the fidelity of the original file while reducing the file size.

However, despite traditional compression being able to keep up with the data demands of the past, it has some key drawbacks that hinder it going into the future. First of all, traditional compression algorithms are very complex and have been hand designed over the past five decades. This means that they're fairly inflexible when it comes to dealing with the new, pervasive forms of data of the present and the future.

So for every new data modality, traditional compression has to be hand specialized to ensure optimal compression, and this takes time. It also needs specialized hardware. There's a high probability that every single device with a screen that you own has a little chip inside whose sole purpose is to run traditional compression algorithms and encode and decode video on device. It doesn't do anything else. It just deals with video compression.

And the problem with that is: suppose you want to update your video compression algorithm. Well, the specialized hardware is tailor-made for one generation. So for a new generation, you first have to design new hardware, and then you have to wait for users to actually get that hardware, which means the process of updating compression algorithms is very slow. And a corollary of that is slow market penetration, a slow response to the key pressing challenges that increased data consumption poses.

So where does this leave us today? Today, we are seeing two concerning trends. The amount of internet traffic per month is increasing exponentially. We're generating more and more data, and we are transmitting more and more data to end users. The reason for that is we are generating higher and higher quality data, and we are introducing new technologies and new industries that are very data intensive.

However, for traditional compression, the relative compression improvement generation over generation has been slowing down, which means it's not able to effectively compress the data being generated right now, and it will be even less equipped to compress the data that will be generated in the future.

So this problem is not hypothetical. You may recall that during the pandemic, Netflix was actually forced to lower the streaming quality of its videos in Europe, because the increased network demands that the pandemic posed threatened the existence of the European networks altogether. So this underlines the message that we're at a very critical juncture right now.

Our infrastructure is very strained, every single terabyte of new data that's generated is adding to the strain, and traditional compression is unable to deal with it. So without a revolutionary solution to this problem, we could see the collapse of the internet's data infrastructure. What could the solution be? We at Deep Render are very confident that it lies in AI, specifically in AI-based compression.

The key advantage of AI-based compression is that we can leverage massive amounts of data to develop rapidly updated, highly specialized algorithms that are responsive to the new data being generated and allow much, much better compression ratios relative to traditional compression. We think that AI-based compression is the future. Why do we think that? Well, first of all, we can already show that it works.

We have developed AI-based compression techniques that result in 80% or better compression ratios compared to the pervasive traditional compression that's widely available today. We developed this over the last five years. So compare five decades of development versus five years of development, and the result is already 80% better compression. And the possibilities for the future are even more promising.

One of the fundamental qualities of AI-based compression, which is much harder to achieve with traditional compression, is that we can optimize quality for the human visual system. For example, in this webinar, we are able to provide more detail to the key features of the video, like the face, and ignore the irrelevant features, like the background. And the ability to train and iterate on the algorithms quickly means the approach is flexible and widely adoptable.
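[Editor's note: a toy illustration of the saliency-weighting idea described above. This is not Deep Render's method or API, just a minimal sketch of a perceptually weighted rate-distortion objective, where errors on faces cost more than errors in the background.]

```python
import numpy as np

def weighted_rd_loss(original, reconstruction, saliency, bits, lam=0.01):
    """Toy rate-distortion objective: rate + lambda * weighted distortion.

    `saliency` is a per-pixel weight map (high on faces, low on background);
    `bits` is the estimated compressed size. All names are illustrative.
    """
    distortion = np.mean(saliency * (original - reconstruction) ** 2)
    return bits + lam * distortion

# Tiny example: an 8x8 "frame" whose top-left quadrant is salient.
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
recon = frame + rng.normal(scale=0.05, size=(8, 8))  # lossy reconstruction
mask = np.ones((8, 8))
mask[:4, :4] = 10.0  # errors on the "face" region count 10x

print(weighted_rd_loss(frame, recon, mask, bits=512.0))
```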

And finally, we also have a rapidly growing hardware and software ecosystem, powered by the AI revolution, which means that our algorithms will get faster and better simply through the growing adoption of neural accelerators by end users. So who can benefit from this? We believe the benefits accrue to both end users and content providers. From the end user's perspective, it removes bandwidth constraints, leads to higher quality content, and delivers a leap in customer experience.

Fundamentally, it unlocks new types of content that are very data intensive, for example, cloud gaming and virtual reality. On the other hand, content providers could benefit from a massive decrease in network bandwidth costs, expanded market access, and accelerated adoption of new technologies, because they are not forced to wait for the rollout of new traditional compression algorithms.

So we at Deep Render have an amazing team that has been pushing the boundaries of AI-based compression and has achieved several world firsts in the area. We have the world's first demo of an end-to-end video chat that's entirely backed by AI-based compression, which just yesterday led us to win the Intel Ignite Startup Competition, and we will be featured in the Intel Innovation keynote today. So we believe that we have the tools to provide a solution for the internet problems of the future.

But the aim is not just technical improvement; it's about reimagining the fundamental engine that drives internet data transfer, and thereby dramatically improving user experience, opening up new markets, and allowing the internet to grow sustainably. So if you're interested in the future of AI-based compression and in the demos of the world's first AI-based compression apps that we've developed at Deep Render, please contact us. Thank you.

Lee Thornton
Partner of Deeptech, IP Group

Thank you for that. Very insightful. As I mentioned, we'll take some questions at the end, after the next talk, so please put those in the Q&A if you didn't already. Next up, we have James Spall, who is a co-founder of Lumai, and he will give his thoughts on the next generation of AI, and whether its future lies in analog processing. Interestingly, James's first experience using AI was helping to identify the creation of top quarks at the Large Hadron Collider at CERN. But his second experience was somewhat more esoteric.

During a data science internship, he used AI to help create custom dog food subscriptions. Probably not something we do at IP Group, but very interesting nonetheless. James is now just finishing his PhD at the University of Oxford, working on optical neural networks, before he joins Lumai full-time in the autumn. So James, over to you for your presentation. Thank you.

James Spall
CTO and Co-Founder, Lumai

Great. Thanks, Lee. And yeah, a big thank you to the whole IP Group team for the invitation to speak today. So as Lee mentioned, I'm one of the co-founders of Lumai. We're a spin-out from Oxford, creating the next generation of hardware for AI using optical computing. So today, I'm gonna be talking about the processing side of AI, because one of the really crucial, fundamental aspects underpinning AI is processing speed.

So I'll be looking at the current trends and challenges, and then at why I believe analog processing is the next step for AI hardware. We are undoubtedly seeing a time when both the pervasiveness and the capability of AI are just exploding, right? It's increasing at some incredible rate. Just one example here: you're now almost obligated to use ChatGPT as the example of the capability and pervasiveness of AI.

But it really is amazing how quickly this has unlocked people's awareness of the capabilities of machine learning. It's, you know, one of the fastest adopted technologies on the planet, with ChatGPT reaching 100 million users in two months. It's just incredible how quickly this is taking off. And there are plenty of other examples besides. As Lee mentioned, you know, I've done everything from-

fundamental particle physics all the way down to: can we create better dog food subscriptions for people? It's incredible, the vast array of areas that this is going to change. What really drives that capability is the raw compute speed, right? By that, I mean the amount of computation, the number of operations that you can perform in some given time. Every big step change in the capability of AI that we've seen has been associated with a few orders of magnitude increase in raw compute speed.

One of the major drivers of that increase in processing speed has been a change in philosophy in the hardware that we use to do AI. So we're no longer running these machine learning models on general-purpose computers, on CPUs, on your laptop and so on, but moving towards more bespoke, specialized hardware. So overwhelmingly, in the data center, we're using GPUs, graphics processing units.

As the name suggests, they're graphical: these were originally designed for high-frame-rate, better-picture-quality rendering in games and graphics. But they're ideally suited to AI, in the sense that they're very good at doing operations in parallel, doing things at the same time in a very efficient, parallel fashion. So they've been adopted for use in AI, but we're going further still. You may have come across the term ASIC, application-specific integrated circuit, the application here specifically being AI.

We're seeing companies like Google, Amazon, Microsoft, and Meta, all the big players, developing their own chips that are optimized just for the specific aspects of the machine learning workloads they need. All right, so we have this trend towards more bespoke, more specialized hardware that's much faster and more efficient for AI. And the growth of AI is putting huge pressure on developing these new types of processors that are better and better suited to AI.

And the issue here is that all of these processors so far use digital computation: digital arithmetic, binary processing, turning everything into zeros and ones. And that was really important. Rewind 50 or 60 years, and that was really important for the growth of computing as a whole, in enabling very high-precision, very flexible, programmable computing. But it turns out that's not particularly what AI needs, right?

AI just needs to be able to do very specific arithmetic operations, which can be relatively low precision, so this idea of precise digital computation doesn't match AI very well. And actually, the need for energy efficiency and ultra-high-speed compute outweighs the need for programmability, flexibility, and precision. So that's why I see the future direction being even more bespoke, even more specialized analog processors that match the physical characteristics of the neural network and the physical needs of AI.

So why has this not happened already? If that's the case, why are we not seeing analog processors everywhere? I'm gonna break this down into three key areas of why now, why the future of AI hardware is analog now. There's a need for it, there's a new market for it, and we have a capability now that we didn't have before. So when I say there's a need, what I mean is that there are real fundamental limits and issues with digital computation, right?

There are some real fundamental issues, and the first one is just physics. The way that we build our processors now, based on silicon chips, is approaching its limits, as Lee mentioned at the start; you may have heard of Moore's Law. The number of transistors that you can fit onto a chip comes down to how small you can make each transistor. The smaller your transistor, the more you can squeeze in, the higher the density, and that's been doubling every two years or so.

That's Moore's Law: transistor density has been doubling every two years. But if our transistors get to a point where they're only a few atoms across, which is exactly where we are now, you reach a hard stop, right? You reach a limit. It's incredible that fabrication technology has allowed us to get to the point where each transistor is only a dozen or so atoms across. But once you get down to a few atoms, there's no way you can push that further, and that's a real issue.

And that ties into the real fundamental issue that data centers and, you know, compute providers are facing now: energy consumption. Each of those operations, passing a current through each transistor, takes a tiny bit of energy. But if we're doing this across trillions of transistors and trillions of operations, it adds up really quickly to kilowatts of power per processor, right?

And we're not just using tens or hundreds of these processors in each data center; we're using tens of thousands. So when you add up all of that energy consumption together, that's the real limiting factor on how much processing power you can squeeze into one data center. It's not footprint; it's not anything like that. It's the energy consumption. And there are two sides to this. One is that it's obviously terrible for the environment, right?

There's a huge carbon footprint associated with these things, and it's simply unsustainable from that point of view. But it's also unsustainable given the need to push compute speed further. The way you get better AI, the way you increase the capability of AI, is to increase the compute speed. But if you're fundamentally limited by the energy consumption and the hardware that you have, you just can't push that any further. So, a few astonishing facts I found about this.

Each data center is using megawatts of power. It's equivalent to, like, the energy needs of a town or a small city, just for one data center, just for AI. And somewhere between 10% and 20% of global electricity is gonna be going towards AI data centers by the end of this decade. There's a great example from the Republic of Ireland, which has many data centers; they're a large part of its economy, and already 14% of its electricity just goes to running AI.
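[Editor's note: a rough sketch of how per-chip consumption compounds at data-center scale. The 700 W per accelerator and the overhead factor are assumed, illustrative values, not figures from the talk.]

```python
# Rough data-center energy arithmetic with assumed, illustrative inputs.

watts_per_accelerator = 700      # assumed draw of one modern AI accelerator
accelerators = 20_000            # "tens of thousands" per data center
pue = 1.5                        # assumed cooling/facility overhead factor

it_load_mw = watts_per_accelerator * accelerators / 1e6
print(f"Accelerator load alone: {it_load_mw:.0f} MW")        # 14 MW
print(f"With facility overhead: {it_load_mw * pue:.0f} MW")  # 21 MW
# Megawatts per site, i.e. roughly the energy demand of a small town.
```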

Those really are, you know, incredible numbers. So on the one hand, digital electronics is putting huge strain on our infrastructure, and there's a real push for alternatives. But then at the same time, is there a market? Is there a space for specialized, bespoke hardware like analog processing to actually play a role in AI? And the answer is yes, because of two factors.

One is that the needs of AI are really simple, right? The underlying operations being performed are just really simple arithmetic: it's multiplication and addition, and that's about it, in the form of matrix multiplication. So having a niche processor that only does a few operations is fine for AI, because you only need to do a few operations. And it turns out that analog processors can do arithmetic fantastically well, in a very efficient, very parallel fashion, so it maps really nicely onto performing machine learning and AI.
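[Editor's note: a minimal illustrative sketch, in plain NumPy, of the point above: a fully connected neural-network layer reduces to one matrix multiplication and one addition.]

```python
import numpy as np

# One fully connected layer is just y = W @ x + b: a matrix multiply
# followed by an element-wise add. Almost all of the compute in a large
# model is spent on operations of exactly this shape.

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # weights
b = rng.standard_normal(1024)          # bias
x = rng.standard_normal(1024)          # input activations

y = W @ x + b

# Each output element needs 1024 multiplies and 1024 adds, so even this
# single small layer is ~2 * 1024 * 1024 ≈ 2.1 million operations.
print(f"~{2 * W.size / 1e6:.1f} million multiply-adds")
```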

The other aspect is whether you actually have enough demand, enough total processing power required, to justify such a niche, bespoke processor. If you just had general, independent devices, as I've touched on, it doesn't make sense, or it's very challenging, to have a bespoke processor in individual devices, in individual consumers' hands. But we're seeing more and more of this growth of cloud compute, right?

This is the idea that you don't do your processing locally on your device, but send it via the internet, via the connectivity that we know all of our devices now have, and it all gets co-located in a single data center. And that growth of cloud compute has provided the perfect market, the perfect opportunity, to house these thousands of co-processors and this specialized hardware for AI, which we haven't had previously over the past few decades.

And that's exactly why the likes of the names I mentioned, Amazon, Microsoft, and so on, are developing their own chips: because there's such a huge need for all of this processing, all located in one place, that it unlocks the ability to fully utilize bespoke and specialized hardware. So we've seen there's a need, with the decline of digital electronics.

There's a huge market for it in AI in the cloud. But do we have the capability? Do we have the technology to really introduce analog processors? And this is where I'm gonna introduce optical computing. This is what Lumai is working on: the idea of using optics to perform computation rather than digital electronics. And it brings exactly the same kinds of benefits and ideas that we have in the communications industry, where you no longer transfer and move data around on copper cables.

You use fiber optics, right? And exactly the same benefits you get in that domain, you get with processing as well. So on the one hand, it's analog computation, and as we've seen, that maps perfectly onto the processing and the arithmetic needed for AI. And at the same time, you get huge advantages in terms of throughput and latency. The clock speed of optics is much faster, and you can use things like multiple wavelengths, which you just don't have in the digital domain, to get much higher throughput and much lower latency.

And what's really critical is the power consumption, right? Once you're in the optical domain, you don't have the heating issues that you do in digital electronics. You know, there's no passing a current through a resistor and getting heating effects like that. You don't use energy when you pass a beam through a lens or reflect a laser off a mirror. And so what's really important is that the energy efficiency of these analog optical processors can be orders of magnitude better.

So to summarize: the limits of the current hardware have created a great opportunity, right? There's this brick wall that Moore's Law is about to hit, in terms of transistor density and energy consumption. And there's a real need, moving forward and looking over the next few decades, to ask what our computational architecture is gonna look like. And I don't think it's just digital electronics.

Then at the same time, the growth of AI, the pervasiveness of AI, and the move towards the cloud and the data center have provided this perfect market for bespoke, specialized hardware that can do certain operations extremely well. Optical computing and other analog devices, but especially optical, are gonna be a really important aspect of the next generation of computing and of filling that opportunity in the market. But there are other challenges, right?

It's not all just about processing. There are other considerations that we need to address. One of them, as we've mentioned, is networking: getting data to and from the cloud. Seb touched on this; just moving that data around is a huge challenge. And as we move towards these ideas of cloud compute and data centers more and more, that networking becomes a really important aspect.

The other aspect is within the data center. As I mentioned, we're not using a handful of processors; we're using tens of thousands of these devices. So scaling that number of devices in a sustainable way, and working out how you interconnect those different devices, is very important. And finally, as we move to these concepts of bespoke, specialized hardware, analog and digital devices, optical and electronic, efficiently connecting and converting between those different processors is really important as well.

And that's also part of what we're working on at Lumai: how do you integrate these new devices seamlessly with the rest of the compute stack? So I'll finish there. I think that's all we've got time for. Thank you very much for your attention. If you have any questions, please do reach out, drop me an email, or visit our website.

Lee Thornton
Partner of Deeptech, IP Group

Fantastic. Thanks, James. Could I just ask Mark and Seb to come back in? Fantastic. We'll take some questions and have a bit of a discussion now. So again, you can add questions; I see some have already come in via the Q&A box on your screen to the right, and I will take the relevant questions and pose them to the group.

So I think the first one there is probably quite specific to you, Seb. Are there any solutions in the pipeline to filter only the video image of a person in the foreground of a video call, while effectively filtering out other people in the background? Do you have any comments on that? Does that make sense?

Seb Cizel
Head of Engineering, Deep Render

Sure, yeah. So there are definitely solutions like that that we're thinking about. The advantage of AI is that you can do this end to end. We can basically learn a lot of this just by showing the model the data on which you want it to perform well, and showing the model what it needs to focus on. So if you're able to convey that information to the model, focus on faces, ignore details, you can actually do this end to end. So this is definitely an approach that we're thinking about.

Lee Thornton
Partner of Deeptech, IP Group

Okay, thank you. A question that often comes up is about AI regulation: do you think AI needs to be regulated? I mean, people have views on this, but are there any particular points anybody wants to raise? Can I ask you first, Mark, anything to add to that debate?

Mark Reilly
Managing Partner of Technology, IP Group

Yeah, I mean, at the high level, the answer is yes, right? It does have the potential to be very dangerous, and that's something we ought to do something about. I mean, it's a big question, and it's a bit like asking, should the internet be regulated? Yes, it has the potential to be dangerous, but it's not an easy thing to do, and there are dark corners of the internet that are still very, very dangerous today. It's good to see the U.K. making proactive efforts on this.

We've got the UK AI Safety Summit coming up in November, and a chap called Matthew Clifford has been appointed as the Prime Minister's envoy to that summit, which is a good move. I think he's a very smart guy with very good thinking on this topic. But yeah, it's a big challenge, and it will continue to present both threat and opportunity no matter what regulation we apply.

Lee Thornton
Partner of Deeptech, IP Group

Thanks. James, do you have any view on...?

James Spall
CTO and Co-Founder, Lumai

I mean, yeah, well, yeah, I think it's kind of... It's quite hard to think about AI safety as just kind of one topic, because there are so many different aspects to it, right? There's the kind of misinformation side of it, ChatGPT telling you things that aren't true, and can that be used to kind of, you know, do harm in various different ways.

Lee Thornton
Partner of Deeptech, IP Group

Mm-hmm.

James Spall
CTO and Co-Founder, Lumai

Then there's also the existential threat from AI that quite a lot of people talk about, and that's a very different thing, right? There's the immediate issue of misinformation and so on, and then there's the much longer term question: if we keep pushing AI as we are, does it at some point get to a level of human intelligence, with all the conflicts that might bring?

So there's a huge range of things, and they all need to be considered in different ways, right? You wouldn't need to regulate AI from the point of view of existential threat right now-

Lee Thornton
Partner of Deeptech, IP Group

Mm-hmm.

James Spall
CTO and Co-Founder, Lumai

Whereas maybe we should, more and more, with the misinformation.

Lee Thornton
Partner of Deeptech, IP Group

Yeah.

James Spall
CTO and Co-Founder, Lumai

Um-

Lee Thornton
Partner of Deeptech, IP Group

Yeah, interesting. Seb, any view on that?

Seb Cizel
Head of Engineering, Deep Render

Yeah, I think I agree with James here. I think there's a spectrum between more hypothetical threats and more concrete threats from AI. I think the regulation of AI will inevitably have to take some shape or form, but it seems to me that it's more fruitful to focus on the concrete issues.

Lee Thornton
Partner of Deeptech, IP Group

Yeah. Yeah.

Mark Reilly
Managing Partner of Technology, IP Group

Can I ask Seb and James what your view is on the existential threat? I mean, clearly, we're a long way from it; it's only the illusion of intelligence that we're seeing in AI today. But at the same time, I'm not able to identify any kind of fundamental physical barrier to a computer becoming sufficiently intelligent that it could outsmart us and pose that existential threat. How do you feel about that?

James Spall
CTO and Co-Founder, Lumai

I think that's people's concern: that at some point, it just falls off a cliff, right? You don't see it until it's too late, broadly. I think that's the-

Mark Reilly
Managing Partner of Technology, IP Group

Yeah.

James Spall
CTO and Co-Founder, Lumai

Scary thing. That's what people are concerned about, right? It doesn't just incrementally get towards human-level intelligence; it will suddenly appear.

Lee Thornton
Partner of Deeptech, IP Group

Yeah.

James Spall
CTO and Co-Founder, Lumai

And so it's preempting that, getting the regulation and the safety nets in ahead of time, before that happens, that's really important. And that's, I think, a big and growing field of research: AI safety. How do you develop these neural networks and these AI models in a safe manner from the fundamentals, rather than trying to bolt on safety features after the fact? So it's a very interesting area of research. Yeah.

Seb Cizel
Head of Engineering, Deep Render

Yeah, exactly. I think especially with the advent of large language models, we are all fascinated by the fact that we're able to interact with what is essentially a computer in a conversational manner. And, you know, language was one of the big frontiers of intelligence. Just thinking back to the Turing Test, language is something so intrinsically human that when a machine seemingly started being able to use language, it was a massive paradigm shift for most of us.

So I do think that research in that area is important. I also think that sometimes conversations can get overly focused on hypotheticals and not focused enough on the concrete issues that are already a problem. That's basically why we need to do a lot of research. People working on large language models need to research both safety right now and safety in the future.

Lee Thornton
Partner of Deeptech, IP Group

Thank you. Just changing tack slightly: you've both talked a bit about the technologies in your companies. Do you have any view on what else needs to be developed outside of your specific areas? What technologies need to be developed to see really widespread adoption of AI models, both centrally and at the edge? What else needs to happen, do you think, for what you guys do to become reality? I'll put that one to James first.

James Spall
CTO and Co-Founder, Lumai

Yes. Well, so kind of as I mentioned, a huge part of it is the kind of the interconnection of devices, right?

Lee Thornton
Partner of Deeptech, IP Group

Mm.

James Spall
CTO and Co-Founder, Lumai

So the development of the infrastructure and the protocols for how you move data around efficiently. As I said, Seb touched on a lot of the compression side. I think that's a large part of it, moving the data, but there's also the compression of the AI models themselves.

Lee Thornton
Partner of Deeptech, IP Group

Hmm.

James Spall
CTO and Co-Founder, Lumai

Right? So, you know, ChatGPT, GPT-3, uses 175 billion parameters or so.

Lee Thornton
Partner of Deeptech, IP Group

Wow! Yeah.

James Spall
CTO and Co-Founder, Lumai

By making the number of parameters bigger, the model is bigger and the AI gets better, but then you have all these constraints on your hardware. So getting the same level of AI ability shrunk down into smaller, compressed models will be really important. So those are the two things for me: the networking and interconnection of devices, and the-

Lee Thornton
Partner of Deeptech, IP Group

Interesting. Yeah.

James Spall
CTO and Co-Founder, Lumai

Kind of the compression of the actual models and the actual algorithms themselves.
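[Editor's note: illustrative arithmetic on what 175 billion parameters means for memory. The byte-per-parameter figures are standard precision sizes, not numbers from the discussion.]

```python
# Memory footprint of 175B parameters at common numeric precisions.
# Illustrative arithmetic only.

params = 175_000_000_000

for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{name}: {params * bytes_per_param / 1e9:,.0f} GB")
# fp32: 700 GB, fp16: 350 GB, int8: 175 GB -- even the smallest far
# exceeds a single consumer GPU, which is why model compression matters.
```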

Lee Thornton
Partner of Deeptech, IP Group

Seb, do you have any comments on that? I mean, is there any incompatibility issue between compression and AI, or do the two things just go hand in hand, without one negatively impacting the other?

Seb Cizel
Head of Engineering, Deep Render

Yeah, I think in terms of compression and AI, this is a very clear application of AI technologies, a very clear way of benefiting from them. And much as large language models were able to succeed based on the large amount of data available, compression also gets better when you have larger amounts of data to model.

So I think from that perspective, compression really is ripe for the picking right now. As with most AI at this point in time, there is a question of the hardware that will allow end users to run this on their devices.

Lee Thornton
Partner of Deeptech, IP Group

Hmm.

Seb Cizel
Head of Engineering, Deep Render

Much like what James was talking about in his presentation, the hardware is getting constrained. However, there are solutions in the pipeline that will become more and more prominent over the next year or couple of years. I mean, Apple devices already have NPUs that are pretty much specialized for executing these neural network models.

Lee Thornton
Partner of Deeptech, IP Group

Okay.

Seb Cizel
Head of Engineering, Deep Render

It's a rapidly growing ecosystem, and there's a lot of innovation happening in the hardware space right now as well.

Lee Thornton
Partner of Deeptech, IP Group

Yeah. That's an interesting one. I'm just reading the questions, and there are a few around quantum computing. Maybe this one's for you, Mark. It may be too broad a question, but where do you think quantum computing fits into this? It's part of the puzzle, no doubt. But do you have any view on quantum computing and AI, or quantum computing in the future landscape? Where does it fit in, do you think?

Mark Reilly
Managing Partner of Technology, IP Group

Yeah, I mean, I think there have been some really interesting breakthroughs in quantum computing over the last several years, including at companies in the IP Group portfolio, and so we're getting closer to having a useful quantum computer, but we're still some distance from that point.

And quantum computers will solve a subset of problems, a subset of specialist problems, and so I think, you know, they will form part of our future compute in 10 or 20 years' time. But it'll be horses for courses, and there will be other computing technologies that sit alongside that. I think it'd be great for James to comment on that and on how-

Lee Thornton
Partner of Deeptech, IP Group

Yeah

Mark Reilly
Managing Partner of Technology, IP Group

Lumai sits alongside it.

Lee Thornton
Partner of Deeptech, IP Group

Yeah, somebody's asked specifically about optical neurons and the link with quantum. Is that a—

James Spall
CTO and Co-Founder, Lumai

Yeah, yeah

Lee Thornton
Partner of Deeptech, IP Group

to make?

James Spall
CTO and Co-Founder, Lumai

Well, yeah, so kind of, to be completely clear and clarify, Lumai is not quantum, right?

Lee Thornton
Partner of Deeptech, IP Group

Mm.

James Spall
CTO and Co-Founder, Lumai

So what Lumai is doing with our optical systems is not quantum in any sense. Where the overlap lies is that quantum will require new forms of hardware that digital electronics doesn't provide, right?

Lee Thornton
Partner of Deeptech, IP Group

Mm.

James Spall
CTO and Co-Founder, Lumai

So, there are many optical quantum computers out there that use light and photons as the quantum domain. But you also have trapped ions, you have superconducting rings. There are many different hardware implementations of quantum, of which optics-

Lee Thornton
Partner of Deeptech, IP Group

Mm

James Spall
CTO and Co-Founder, Lumai

is one. But optics itself gives so many benefits, as I mentioned; the communications industry is the best example of that, in terms of energy efficiency and throughput. So just using optics for classical computing, for the arithmetic needed in AI, you still get huge benefits. So it's very much classical computing for classical AI-

Lee Thornton
Partner of Deeptech, IP Group

Mm

James Spall
CTO and Co-Founder, Lumai

As opposed to quantum computing, for kind of new quantum algorithms and those sorts of ideas.

Lee Thornton
Partner of Deeptech, IP Group

Yeah.

James Spall
CTO and Co-Founder, Lumai

But I completely agree that quantum will definitely play a role in, you know, as kind of a subset of many different things. You'll have AI, you'll have supercomputing-

Lee Thornton
Partner of Deeptech, IP Group

Yeah, exactly

James Spall
CTO and Co-Founder, Lumai

ultra-high-precision HPC, and you'll have quantum, all working together, definitely.

Lee Thornton
Partner of Deeptech, IP Group

Hmm. Do you have any comments on that, Seb, in particular? I mean, maybe it's a bit far off to apply compression technologies to quantum, but I don't know if there's a link there or anything to comment on.

Seb Cizel
Head of Engineering, Deep Render

So I think we're very excited about any potential gains, potential opportunities, that could arise from quantum computing. Right now, we're not really focused on that sphere. It's gonna be a while before end users have quantum computers. When that happens, it will unlock a lot of different possibilities. But at this point in time, we're mostly interested in productionizing, in really getting our methods to work on existing infrastructure.

Lee Thornton
Partner of Deeptech, IP Group

Mm-hmm. Mm. There's an interesting question coming in here, I think more for you, Seb: if the compression delivers a human-centric visual image, how does this compromise AI algorithms that rely on a bigger data set to improve recognition, or to bring insights that human observers may find more difficult?

I guess that question is about the narrowing of the data when you compress it, and what that then means for the input to an AI model. I don't know if you have any comments on whether that's true of your methodology or not.

Seb Cizel
Head of Engineering, Deep Render

Yeah, this is a very interesting question. I don't think we actually know the answer to that. The point about compression is that, depending on what your target application is, you can always tailor the quality. You can definitely change the models based on what makes sense for the particular application, so not all compression algorithms would be tailored to one specific mode of data. For instance, for this webinar, it makes a lot of sense-

Lee Thornton
Partner of Deeptech, IP Group

Mm-hmm

Seb Cizel
Head of Engineering, Deep Render

to really focus on our faces because, you know, we have the blurred background here; we don't really care what it's doing. Generally speaking, backgrounds are not really that important, and what's more important is for us to be able to really detect, you know, facial features and be able to-

Lee Thornton
Partner of Deeptech, IP Group

Mm, mm

Seb Cizel
Head of Engineering, Deep Render

Detect micro expressions. So that's the key to the quality, and that's also a property of our visual system. But for other applications of data, you might want a different set of features. So if you want to generate data for machine processing, you can tailor your algorithm to maintain a different set of features, and pick a different trade-off in compression.

Lee Thornton
Partner of Deeptech, IP Group

Mm.

Seb Cizel
Head of Engineering, Deep Render

It's definitely an important question, because whenever you start messing with the data distribution, you are messing with the future generations of your models.

Lee Thornton
Partner of Deeptech, IP Group

Yeah.

Seb Cizel
Head of Engineering, Deep Render

But we don't see this as a big issue at this point.

Lee Thornton
Partner of Deeptech, IP Group

Hmm. Okay, that's really insightful. Thank you. An interesting one: there's a comment around analog computing, with somebody mentioning they came across it in 1972, and that it's great to see a return. I mean, is history repeating itself a bit here? James, you're nodding. Is it the right time for analog computing to make a comeback?

James Spall
CTO and Co-Founder, Lumai

Yeah, I think so. Like I mentioned in the talk, people were very keen to move away from analog because it has its issues; for general-purpose computing, it just doesn't make sense, right?

Lee Thornton
Partner of Deeptech, IP Group

Mm-hmm.

James Spall
CTO and Co-Founder, Lumai

You run into issues of how you do logic, how you do control, all of these things. And the trend was always towards more and more precise arithmetic, more precise modeling, more-

Lee Thornton
Partner of Deeptech, IP Group

Mm

James Spall
CTO and Co-Founder, Lumai

More precision. That's what digital unlocks, right? You can, if you can turn any number into as long a string of zeros and ones as you like, you can effectively have unlimited precision.

Lee Thornton
Partner of Deeptech, IP Group

Mm.

James Spall
CTO and Co-Founder, Lumai

The cost of that is that you need longer and longer strings of zeros and ones. It takes more and more time to process. It takes more energy to process. And so what we're seeing with AI is the complete reversal of that. People are realizing that, for AI, you don't need that level of precision. You don't need 64-bit or 32-bit. We're down now to basically 8-bit precision being used in AI models.

And if you don't need that level of precision, that really precise, flexible control to do whatever you need, if you only need low-precision arithmetic, it doesn't make sense to use a technology, a platform, that's really slow and energy intensive.
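[Editor's note: a toy symmetric int8 quantizer illustrating how little precision neural-network-style values need. This is a sketch, not any production quantization scheme.]

```python
import numpy as np

def quantize_int8(x):
    """Toy symmetric quantization: map floats to int8 and back."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(10_000).astype(np.float32)

q, scale = quantize_int8(weights)
restored = q.astype(np.float32) * scale

# Round-trip error is tiny relative to unit-scale weights, while storage
# drops 4x versus float32.
print(f"Mean absolute error: {np.mean(np.abs(weights - restored)):.4f}")
```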

Lee Thornton
Partner of Deeptech, IP Group

Yeah. Does the same argument apply to quantum? That's a question that's come in. You know, quantum systems don't need perfect precision, perhaps, and so this methodology might help aid their implementation more broadly.

James Spall
CTO and Co-Founder, Lumai

Yeah, I mean, it's interesting. The interplay between quantum and AI is really interesting, and it goes both ways, right? There's AI for quantum: applying AI ideas to make better quantum machines.

Lee Thornton
Partner of Deeptech, IP Group

Mm.

James Spall
CTO and Co-Founder, Lumai

And then there's: can we use quantum machines for AI? Those are two very different areas that require different thought processes. The more exciting one, using quantum machines for AI, is still really new. It's a very young area of research, because it's not clear how to directly map the advantages you get from quantum machines onto the ideas of AI.

So it's a tricky field and a tricky comparison. But broadly, yes, I would say the low noise or the... Sorry, the low precision, or the noisy nature, of quantum would lend itself more to AI than to other problems. Yeah.

Lee Thornton
Partner of Deeptech, IP Group

Great. Any other comments there? Do you have any comment on that, Seb, or?

Seb Cizel
Head of Engineering, Deep Render

Yeah, no, I think James spelled it out very well there. Being able to use quantum computing in AI seems quite far away at this point, at least in any general capacity.

Lee Thornton
Partner of Deeptech, IP Group

Mm.

Seb Cizel
Head of Engineering, Deep Render

It's a very exciting area of research, but,

Lee Thornton
Partner of Deeptech, IP Group

Thank you

Seb Cizel
Head of Engineering, Deep Render

... a bit in the future.

Lee Thornton
Partner of Deeptech, IP Group

Is that anything you've come across, Mark? Is that an area you've touched on yet, or seen any opportunities in?

Mark Reilly
Managing Partner of Technology, IP Group

No, not really. No, I agree. It still feels some way off. Yeah.

Lee Thornton
Partner of Deeptech, IP Group

Maybe that's the next thing to look out for. Now, some slightly easier questions, perhaps: who do you think are the big winners and losers in the AI transition? I know that's asked a lot. Who are the winners and losers as we move towards more AI? Go on, Mark, I'll give you the first crack at that one.

Mark Reilly
Managing Partner of Technology, IP Group

Thank you. I'll take the low-hanging fruit, and then the others can follow. I mean, there's a pretty accepted truth that in Big Tech there's only a relatively small number of players who can build these big large language models, and they're gonna own that space to a large degree, and that's the five or six companies. And clearly, the hardware makers are making hay, NVIDIA being the standout example of that right now.

But that's just the tip of the iceberg, and I think the implication for James and for Lumai is that hardware plays a key role now. We are not hardware agnostic in this new generation of use cases, and it's gonna be a really key element. And then the losers, I mean, they're the people who fail to innovate, aren't they? The people who fall behind because they're trying to do by human power what can be done by artificial intelligence, and aren't adapting quickly enough to the efficiencies of that shift.

Lee Thornton
Partner of Deeptech, IP Group

Mm-hmm. Seb, do you have any comments on that?

Seb Cizel
Head of Engineering, Deep Render

Yeah, I think the key advantage in the AI world is first-mover advantage: being able to innovate rapidly and deliver products quickly. You know, the key to the success of large language models was placing a lot of research behind a friendly user interface and pushing it out to people who were able to interact with it.

What I would add is that it's also important for consumer hardware to have the relevant accelerators, because all of these models, as James alluded to before, are very, very big and very hard to run in their native form on consumer devices. But eventually, you'll wanna have your personal assistant on your phone. You'll wanna have video compression running on your phone.

So we see that vendor companies that have thought about placing AI chips in consumer devices definitely have an advantage in this AI-intense atmosphere that we're in right now.

Lee Thornton
Partner of Deeptech, IP Group

Hmm. James, any views on the winners and losers in the AI race?

James Spall
CTO and Co-Founder, Lumai

Well, along a similar trend, yeah, this idea of the democratization of compute is really important. Like Seb mentioned, if you can only run these things in a warehouse filled with 10,000 GPUs that costs GBP 1 million to build, obviously the winners are just gonna be, you know, the five companies you can list on one hand. Everyone needs that kind of access to the compute in order for everyone to win, right? And that will only come from making the individual component, the single processor, quicker.

Lee Thornton
Partner of Deeptech, IP Group

Mm-hmm.

James Spall
CTO and Co-Founder, Lumai

If you could run ChatGPT, kind of, on a single GPU, everyone could go out and buy a GPU and run it on their computer at home.

Lee Thornton
Partner of Deeptech, IP Group

Yeah.

James Spall
CTO and Co-Founder, Lumai

Who knows what you could come up with, right? So it's not just about filling warehouses with more and more GPUs, it's about making the GPU itself quicker.

Lee Thornton
Partner of Deeptech, IP Group

Mm-hmm.

James Spall
CTO and Co-Founder, Lumai

It therefore won't look like a GPU, it will look like whatever the next generation of hardware looks like, be that optical or something else. So-

Lee Thornton
Partner of Deeptech, IP Group

Hmm.

James Spall
CTO and Co-Founder, Lumai

Yeah, that's that.

Lee Thornton
Partner of Deeptech, IP Group

No, no, that's good. That's good insight. Yeah, I like that idea. One last question before we call it a day: where next? People are gonna be asking this again and again, I think. What's the next ChatGPT? Where next for generative AI, the next field, the next application? Where should we be looking? What's exciting? I'm gonna put that to you, Mark, first of all.

Mark Reilly
Managing Partner of Technology, IP Group

No, somebody else go first on that one.

Lee Thornton
Partner of Deeptech, IP Group

James. Go, James, go.

James Spall
CTO and Co-Founder, Lumai

Yeah.

Lee Thornton
Partner of Deeptech, IP Group

After you.

James Spall
CTO and Co-Founder, Lumai

So for me, it's multimedia. I think lots of people are saying this. So, ChatGPT is just text, right?

Lee Thornton
Partner of Deeptech, IP Group

Yeah.

James Spall
CTO and Co-Founder, Lumai

You put text in, you get text out, but it's gonna be applying that to images and to video as well. As Seb said, everything is now so reliant on video. That's the next big thing, is this generative video and generative imaging. So yeah, for sure.

Lee Thornton
Partner of Deeptech, IP Group

Yeah. Seb, any other comments?

Seb Cizel
Head of Engineering, Deep Render

You know, from Deep Render's perspective, I'd say AI compression, of course. But yeah, I second that: multimodal. In terms of capturing the imagination, I think being able to combine audio, video, and text is the next big challenge, and companies are already working on that. So in terms of capturing the imagination, being as awed as people were by ChatGPT, I think multimodal is gonna be very interesting.

Lee Thornton
Partner of Deeptech, IP Group

No, it's great. Thank you. Mark, any final thoughts on that?

Mark Reilly
Managing Partner of Technology, IP Group

Yeah, I'm really excited about the evolution of spatial computing. The Apple Vision Pro device that was announced this year is, I think, the beginning of a different mode of interaction with computers. And then when you think about the combination of AI with that, and the computer-generated content that comes with it, we start existing in worlds that weren't invented by a human but by an AI, and that's a really interesting area of evolution.

Lee Thornton
Partner of Deeptech, IP Group

That's great. Thank you. Well, I'm gonna finish there. We're bang on 10 o'clock. Thank you, everybody. Thank you, Seb, James, and Mark. We really appreciated your participation today. We had some great questions and some really good insights. So thanks again, and thank you all for tuning in and listening to our webinar. And yeah, you can keep up to date on IP Group news and activity on our website, and hopefully I'll see some of you around. Thank you.

Moderator

Mark, Lee, Sebastian, James, thank you for updating investors today. Can I please ask investors not to close this session, as you'll now be automatically redirected to provide your feedback in order that the management team can better understand your views and expectations? This may only take a few moments to complete, and I'm sure will be greatly valued by the company. On behalf of the management team of IP Group PLC, we'd like to thank you for attending today's presentation, and good morning to you all.
