Good morning and welcome to our Q1 2023 presentation from Terranet. My name is Magnus Andersson, and I'm the CEO of Terranet. Joining me today is our CTO, Nihat, who will give some updates on product development later on. We're very thrilled to present our progress during the first quarter and to highlight some more things coming this year. On today's agenda, I first want to briefly describe Terranet, then move into the financials for the first quarter, give you some highlights on the activities achieved during the quarter, including updates on product development, and, last but not least, give you some insight into our focus areas going forward.
This webcast will also be available on Terranet's website and on YouTube afterwards. Who is Terranet, and what are we doing? We believe that we can really change the future of urban road safety and save more lives, especially in the cities. Our product, BlincVision, actively scans the road and uses computer vision and AI to identify and detect objects and how they move in traffic. What we see is that every meter matters and every split second counts. A bit more about Terranet: we have our headquarters in Lund, Sweden, and we also have an office in the heart of an automotive center in Germany, down in Stuttgart, close to companies like Mercedes, Porsche and Bosch.
We believe that we have a very unique, groundbreaking sensor technology that will really make a difference compared to what exists on the market today. Our target customers are OEMs, that is, automotive manufacturers, and also the main suppliers to the industry, the Tier 1 suppliers. We are listed on Nasdaq First North Premier Growth Market. Today we are 13 employees and growing; we actually just signed a few more people, which I'll come back to later. We're also signing quite a few strategic partners around the world to help us with, and also speed up, the commercialization process. Let's move on to the financials for the first quarter and look at some key numbers.
In the first quarter, the revenue and the profit and loss are more or less the same as in the previous year. We have some small revenue from a Vinnova research project. In terms of cash burn, we had SEK 8.8 million in the first quarter, which is also similar to the previous year. Our cash balance at the end of the quarter was SEK 17.8 million. Looking at our financing and investment activities, we had our TO5 warrant program in the first quarter, with a quite good subscription rate of 70%, which gave us SEK 3.6 million.
We also amortized a bit on our loan with the proceeds from the TO5, and the outstanding loan after the quarter was SEK 29.7 million. As you may have noticed this morning, we had a press release about a rights issue we're planning. Let me give you some highlights of this rights issue. It is a rights issue of SEK 75 million plus warrants in two series, TO6 and TO7, with a set share price of SEK 0.18. The proceeds will also be used to refinance the loan, after we have repaid SEK 15 million of it.
The basis of the rights issue is that four existing shares give you the opportunity to subscribe for five new shares, plus 3 + 3 options in the TO6 and TO7 series. The subscription period for the rights issue is between the 29th of May and the 13th of June. What is already very good is that around 70% of this rights issue is underwritten. We have also got confirmation from our largest owner, Madeby Capital, to commit to their pro rata share of SEK 13 million, and then potentially also in guarantees. This makes us very comfortable that we will get the financing needed to proceed with our plans and launch the product BlincVision.
The prospectus for the rights issue will be released next week. As I said, if you want more information, you can also read our press release from this morning on the rights issue. Let me now talk about some achievements during the first quarter. If we start with product development, we're very happy that we received the first prototype of our laser scanner. As I have mentioned before, we have a very good lab here in Lund, a 30-meter lab, so we can test the prototype very well. We put it up in the lab and tested it, and it looks very promising.
We use our existing sensors with it as well, and of course our software. Everything looks very promising on the scanner side. On the sensor side, we have conducted some feasibility studies, as mentioned before, and we have concluded some of them; we want to do a bit more there as well, looking at what the best options for the sensor are. The good thing is that we have several alternatives. Last but not least, during the first quarter we also made progress on the software side. Nihat will tell you more about that later.
I should also mention that our partner company, Holoride, who launched their product in Germany in November, also launched in the U.S. in January, during the first quarter, with a retrofit product that fits all vehicles. That's also very exciting news. The other area I would like to talk a little bit about is our organization and the team. We have some positive news on recruitment; actually, this happened after the quarter. We got a new chairman, Torgny Hellström, and I believe he will be a great help in the communication process, with his great experience from other tech companies.
More good news is that our previous Chair of the Board is staying on the board and will continue working with the company. We also made some good recruitments in development: we managed to hire a lead engineer here in Lund who comes with automotive experience, which is very positive news. We also added another AI engineer to our team; we already have some, but adding more staff there is really good. Last but not least, we're extending our sales network of partners around the world, and we assigned a person in Asia to manage that network, while we also have a lot of network globally, so that's very positive.
Last but not least, if we talk about how we interact with the industry: during the first quarter, we did some investor presentations, one here in Lund for Aktiespararna, which was very appreciated. We also attended some conference events, such as Tech.AD in Berlin, and it was very nice to see there that our product is still very unique, that we are on the right path, and that it is very much appreciated among OEMs and Tier 1s. What they're mostly saying is, "Hey, can we come to Lund, to your lab, and see your concept?" That's what we are all planning for.
We also gave some presentations to industry networks and startup accelerator programs, for example MobilityXlab, which is very good to be part of in order to reach many OEMs and many Tier 1 suppliers. As part of that, we also progressed with individual meetings with selected Tier 1 suppliers to discuss partnerships moving forward. Very good progress during Q1 in this area. Now, let me hand over to our CTO, Nihat, who will give you some updates on product development. Nihat?
Thank you, Magnus. Welcome and good morning everyone from my side as well. My name is Nihat, I'm the CTO of Terranet, responsible for technology and product development. I'm going to give you an update on the status of our product development over four slides. Let me start by asking the question: what makes BlincVision different from other vision sensors out on the market today? What are our unique selling points? We are very fast, meaning low latency when it comes to image acquisition, that is, capturing the objects in front of the vehicle: their location, their shape, and in the end also the classification of those objects, whether it's a car, a pedestrian, et cetera. We also claim to be very accurate.
Our scanning concept allows us to have sub-pixel resolution in the image acquisition itself. Our system is robust when it comes to interference from other systems, or even our own system, which makes it very reliable operating out on the roads. The third point is smart. Yes, smart, because we are able to reduce the data that needs to be processed by the onboard unit. We send out processed data, smart data, rather than chunks of raw data. That results in small data sets and a lower bandwidth requirement in contrast to other sensors, so we can have very high data update rates. On the next slide, I'm just going to repeat the logical components of our BlincVision system.
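To give a rough feel for the data-reduction argument, here is a back-of-the-envelope sketch comparing a conventional frame camera's raw data rate with a sparse event stream. All resolutions, rates, and byte sizes below are illustrative assumptions for the sketch, not actual BlincVision or Prophesee specifications.

```python
# Back-of-the-envelope comparison: frame-based vs. event-based data volume.
# All numbers are illustrative assumptions, not real product figures.

def frame_bandwidth_mb_s(width, height, fps, bytes_per_pixel=1):
    """Raw data rate of a conventional frame camera, in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

def event_bandwidth_mb_s(events_per_second, bytes_per_event=8):
    """Data rate of a sparse event stream of (x, y, timestamp, polarity) tuples."""
    return events_per_second * bytes_per_event / 1e6

frames = frame_bandwidth_mb_s(1280, 720, 60)   # a 720p60 mono camera
events = event_bandwidth_mb_s(1_000_000)       # an assumed 1M events/s scene
print(f"frames: {frames:.1f} MB/s, events: {events:.1f} MB/s")
```

Under these assumed numbers the event stream needs roughly an order of magnitude less bandwidth, which is the kind of reduction the "smart data" point is driving at.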
On the upper right side, you see the emitter. As Magnus mentioned, we have finished our scanner prototype, which we operate in our Lund lab, with five ultra-short-pulsed infrared laser beams sweeping ahead of the vehicle. For our sensor, together with our technology partner Prophesee, we've been able to increase the sensitivity and also the responsiveness of the event cameras. We can capture 3D objects using our new laser scanner, which we have been able to verify and prove, while still projecting the sweeping patterns using those five infrared beams. On the lower left side is the processor, which sits right in the camera logic. We have improved the performance of our noise filtering.
We are prepared to take all those algorithms and put them on logic, which means on hardware, in order to parallelize, accelerate, and achieve even lower latencies there, for example by porting our algorithms to FPGA devices. Last but not least, of course, the recognizer. We have designed our new scanner version and also the sensor, modeled them, and embedded them in a visual simulation environment that is tailored and designed for ADAS and autonomous driving use cases. I will talk about that on my last slide; let's first go back to the processor. This third logical module, which we call the processor, is responsible for noise filtering, which means filtering out any light that does not come from our emitter.
It is also responsible for finding and reconstructing the trajectories that we send out, and for doing the triangulation, which means doing the actual 3D vision. Although we of course apply optical filters in front of our event sensors, there's a lot of light that still comes in, especially sunlight. By applying some software logic, we can filter out the noise, because we know exactly what we send out and can therefore do spatio-temporal filtering of the events that we don't want to see. The software runs in real time; today it runs on CPUs and standard GPUs, but we're going to port it, as I said, to logic devices, which are much cheaper and easier to integrate than, for example, a GPU.
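As a toy sketch of that spatio-temporal filtering idea: if you know where your own beam should be at each instant, you can keep only the events consistent with it and drop everything else as ambient noise. The single beam, constant sweep speed, and tolerance values below are deliberate simplifications invented for illustration; real event data and sweep geometry are far richer.

```python
# Toy spatio-temporal filter: keep only events consistent with our own emitted
# sweep, reject ambient light (e.g. sunlight). A single beam is assumed to
# sweep left-to-right across the sensor at a known, constant pixel speed.

SWEEP_SPEED_PX_PER_US = 2.0   # assumed beam speed across the sensor
TOLERANCE_PX = 3.0            # spatial window around the expected beam position

def expected_x(t_us, sweep_start_us=0.0):
    """Where our beam should be at time t, given the known sweep pattern."""
    return (t_us - sweep_start_us) * SWEEP_SPEED_PX_PER_US

def filter_events(events):
    """events: list of (x, y, t_us) tuples. Keep events near the expected beam."""
    return [e for e in events
            if abs(e[0] - expected_x(e[2])) <= TOLERANCE_PX]

# At t=10us the beam should be near x=20; stray events elsewhere are dropped.
events = [(20.5, 100, 10.0), (21.8, 101, 10.0), (300.0, 50, 10.0), (5.0, 80, 10.0)]
print(filter_events(events))   # keeps only the first two events
```

The point of the sketch is the asymmetry of knowledge: the receiver does not need to identify its own signal by round-trip timing, only to check whether an event fits the known emission pattern in space and time.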
In the coming weeks, we plan to port those algorithms to these hardware-accelerated platforms and take full advantage of so-called parallel edge computing on embedded devices. For the product to be well accepted in the automotive domain, we need to complete these optimization and integration tasks as soon as possible. These are the next steps when it comes to the processor. On my last slide, as I mentioned, I'll show you some captures of a virtual car drive with BlincVision on board in simulated traffic scenarios. These snapshots reflect the real behavior we expect from our system in terms of object detection, classification, latency measurements and collision detection in this simulated environment.
We will continue doing these simulations and improve our models, which means having a model for our real scanner and a model for our event-based cameras, switching to an even more professional platform later, until we have a test vehicle driving on a proving ground. There's a lot to expect in this domain over the next weeks and couple of months. Thanks for your attention. I'm handing back to Magnus.
Thank you very much, Nihat. Let us now talk about the focus areas going forward and what we are working on at the moment. The first part is the partnerships and what we need in order to develop this product. As mentioned before, we work closely with Prevas to create the scanner prototype, and we continuously look for other partners for the full product. On the sensor part, as Nihat mentioned, we're working very closely with Prophesee, but in the feasibility studies we are also looking at other options. Continuing to work on these partnerships to secure our supply chain is important. The other part is, of course, to accelerate the product development.
As I mentioned before, we have our lab upstairs, which is very good. Getting some kind of demo there is key for us, to show the Tier 1s and OEMs coming to visit us. That's the focus as well. Our aim is still to put something on a vehicle by the end of this year, to also try it outdoors on a moving vehicle. Lastly, I should mention that we are also looking at other options for how we can bring a product to market faster, because of course we want to get it out there, and we are evaluating different options for that.
Moving on to the sales and commercial side, we are very active at events and conferences for many different reasons: not only to network with Tier 1 and OEM customers, but also to confirm that we are still unique and still doing the right things, and to see what potential competitors are doing. I must say we see that we are very well positioned and still very unique, and the interest is high. In the coming quarter we are attending more events and conferences, and we have also joined the Business Sweden Autotech Program in the U.S., a Swedish government initiative, together with several small and medium-sized tech companies from Sweden.
In May, we will attend VECS in Gothenburg, a big conference which is important. We will also attend, and Nihat will speak at, an ADAS conference in Stuttgart in June. It is very nice to see that we are not only participating but are also an appreciated speaker at different conferences. Last but not least, we continue to have individual meetings with Tier 1 suppliers and OEMs to secure partnerships and, at the end of that, to move to an A-sample together with the Tier 1s. We also continue to grow our team, to ensure we have the competence we need, not only for today, but also for the future.
We have quite a few open positions on our website for the developer team, but we will also look at strengthening the commercial side of the team. Those are our focus areas moving forward. Last, let me talk a bit about the industry. I must say, I'm amazed to see what's happening in the automotive industry at the moment. There is really a transformation going on. The amount of electronics being added to vehicles, and the talk about software on wheels, is really happening. Also, if we look at the cities, they are getting more and more complex, which is really forcing the automotive industry to move forward. I hear several OEMs really feel that they need to push the boundaries.
Of course, this is also about safety systems. We believe that many different sets of sensors will be needed to actually make the car safe, and BlincVision will be needed. The EU has also set Vision Zero, zero fatalities by 2050, which is also driving the change in the industry. Stay in contact with us. As you see, we do press releases, but please also follow us on LinkedIn; I think yesterday we posted something about the panel that Nihat attended in Frankfurt. Please follow us on LinkedIn and other media, and if you have any questions, just contact us.
Now, let's move over to the Q&A of this webcast and see if we have any questions from the audience.
Hello, Magnus. Thomas here, moderating the questions. The audience wants to know a little bit more about when they can expect any results or communication about the laser scanner.
Yes. That's a very good question, Thomas. Thank you. As I mentioned, we have a prototype up in the lab, and we have seen some promising results, and we hope to publish that very soon.
Okay, back here again. Could you also elaborate a little bit on the senior advisor that you brought on to the business network? How will that affect the business development? What do you see? What is it that this person can contribute in Asia?
Yeah, that's good, thank you for that. This person has great experience from working at OEMs for many years, and also with Tier 1 suppliers. Helping us find the right way to these partnerships is key. By the right way, I also mean that we speak to the right people: very senior persons who are ambassadors and can sponsor initiatives, but also the right engineers and the right departments, to get our product in as fast as possible. I believe that this person in Asia and this type of network will help us get to market faster.
Good. Nihat, a double question for you here. You have said you're resilient to interference; also, what in your technology is it that really makes you faster than all the other LiDAR systems currently out on the market? It's a double question: the interference and the speed.
Yes, let me try to give a quick answer to both questions. First, the robustness. Our unique scanning and object perception concept is based on two separate units, the emitter and the receiver, that are not dependent on each other. That means even if you have hundreds of vehicles emitting the laser pattern and receiving it in the car again, they don't interfere, because we don't distinguish where the light is emitted from. Even better: the more cars that send out our laser pattern, the more we would see. This is different from, for example, LiDAR, where you send out something, you measure and build your histograms, and then you come to a conclusion.
There, you actually have to know: this is my signal that I sent out. That's the basic difference, and it's more or less embedded in our concept; it's not something we do by technology itself, it's the concept that is unique. Why are we faster? I would say the best way to explain it is that we can get rid of frames. The other vision systems out there are based on frames, which means you collect data, you wait until it's collected, you send it out, and then you do it again: the iteration of acquiring images frame by frame. That gives you a certain barrier to what you can achieve in terms of speed. We don't wait for frames.
Our concept is to get events as they arrive, pixel by pixel, independently and asynchronously. You take that stream and process it immediately. On the processing and object detection side, we don't wait for frames either. We keep that advantage from capturing until detecting the objects and coming to a conclusion in terms of object classification, position, speed, et cetera. That is an inherent characteristic of our system, and this is why we can be faster than comparable systems. I hope that wasn't too long an answer.
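The frame-versus-event latency argument can be caricatured with a tiny calculation. The timings below are made-up illustrative numbers, not measurements of any real pipeline: the point is only that a frame pipeline pays a waiting cost up to one frame period before it can even start processing, while an event pipeline does not.

```python
# Conceptual latency comparison: a frame-based pipeline must wait for the
# current frame to complete before processing; an event-based pipeline
# handles each event on arrival. All timings are invented for illustration.

FRAME_PERIOD_MS = 33.3   # ~30 fps frame camera

def frame_based_latency_ms(arrival_offset_ms, processing_ms=5.0):
    """Time from an object appearing (arrival_offset_ms into the current
    frame) until a frame pipeline has finished processing it."""
    wait_for_frame = FRAME_PERIOD_MS - arrival_offset_ms
    return wait_for_frame + processing_ms

def event_based_latency_ms(processing_per_event_ms=0.1):
    """An event stream is processed immediately, event by event."""
    return processing_per_event_ms

# Worst case for frames: the object appears just after a frame has started.
print(frame_based_latency_ms(arrival_offset_ms=1.0))  # close to a full frame period
print(event_based_latency_ms())
```

Under these assumptions the frame pipeline's latency is dominated by the frame period itself, which is the "barrier" the answer above describes, while the event pipeline's latency is set only by per-event processing time.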
It was a long answer, and a good answer as well. Thank you, Nihat. A question back to you, Magnus. When can we expect commercial progress and break-even?
Yeah, that's two questions again, I guess. What we see is that having a product out in a car in volume production will take a couple of years; that's the quick answer. The other answer is that, of course, when we start showing the market that we have this very unique product, some transactions may happen before it's out in volume production. I think the interest out there is great, but it's hard to predict when on that side.
Thank you. Another question to you, I think, Magnus. Has there been any progress on the work with the regulations?
Yeah, that's a very good question, Thomas, thank you. As I said before, the industry is in a transformation; the cities' environmental goals and the aim of zero fatalities in cities and in traffic are happening, right? Regulations are following that process, and you can see more and more regulations coming when it comes to traffic safety. Very interestingly, we had a meeting with Euro NCAP just recently. What we hear from them is that the most accidents and the most deaths in traffic take place in urban areas, in situations where speed, the technical speed of the cars' systems, has been the limitation.
If we can lower the latency, if we can make the systems much faster, that's of huge interest to Euro NCAP. Euro NCAP, as you may know, is a consumer index on which all OEMs want to achieve five stars. For vulnerable road users, the systems today achieve that, but the new Euro NCAP protocol coming in 2026 will be tougher, and BlincVision will be needed. We are working very closely with Euro NCAP and of course with the industry to show what can be done technically, so they can have requirements coming in line with that.
Interesting. Okay, the last question then, back to Nihat. On data simulation compared to the real world: what are the downsides? Does it really work to simulate reality in the data world?
Yes. Any simulation, as the word says, is a model of the real world; it never comes as close to the real world as you would want. But it accelerates our product development: we can refine components like the scanner and the sensor without spending days and weeks putting them on a vehicle and doing test drives. By the way, before we can do test drives, we have to get the system certified and then integrated in the vehicle. We do simulation in order to win time, gain more experience, and keep the development cycles shorter. Let me give you a few examples of where simulation can't help.
A simulation environment doesn't reflect the physics of the real world 100%, for example when it comes to reflections of our signals on a wet surface, or when we simulate crashes and collision scenarios, where the vehicle doesn't behave like a real car; it's very close, but not real. What we expect from our simulation is that we come as close as possible, to speed up development and, in the end, be faster to market. This is something done by many of those ADAS and especially autonomous driving companies. As I mentioned in my talk, we are currently using more entry-level simulation environments, because they're totally okay for us.
In a few months from now, we're going to switch to more professional platforms that are offered by a few companies out there. I would say simulation is good until you are able to go on the road, and that will be our trigger to switch systems.
Thank you, Nihat. That was the last question and last answer. Over to you, Magnus, to round up. Thanks.
Thank you very much, and thank you also for all the good questions. Thank you for tuning in today. We at Terranet are very proud of what we achieved over the last year and the last quarter, and we are really excited about what's going to happen in the coming months. Please stay in contact; we look forward to speaking to you again soon. Thank you.