Good morning, good afternoon, good evening, depending on where you're joining us from. I'm Ira Feldman, the tinyML Foundation Managing Director, and I'd like to welcome you to today's webinar, part of the tinyML Hackathon on Pedestrian Detection that we're running. Today we're going to talk about the BrainChip Akida Development Kit. Thank you for joining us. Before we start, I'd like to thank all the tinyML strategic partners who make this and all our events possible; I'll tell you a little more about them at the end of today's webinar. For those planning ahead, our next in-person event is the tinyML Europe, Middle East, Africa Innovation Forum, being held June 26th-28th in Amsterdam. Please register now.
Advance registration is open until May 31st. We welcome everybody in person there; it will be a fabulous venue with a great technical program and great networking opportunities. Please check it out and make plans to join us in Amsterdam. In terms of this tinyML hackathon, we have resources in terms of dev kits. We've done two of these webinars already, and they are now available on our YouTube channel, youtube.com/tinyml. Sony talked about the AITRIOS intelligent vision system, and Infineon talked about their 60 GHz radar sensor. Today we have BrainChip talking about the Akida platform. We will also have office hours and ask-the-experts sessions, which we will announce via the contest website. In terms of schedule, we do have optional checkpoints, including proposals due tomorrow for people who would like some feedback from the judges.
We will have datasets in June, models in July, and devices in August. Just a reminder, the final submissions are due on September 15th. Please keep the schedule in mind and please follow along. In terms of the contest platform, you can go to this webpage. It's being hosted by the United Nations International Telecommunication Union (ITU) as part of their AI for Good initiative. You can go here, and this is where you put your teams together and have all the detailed information about the contest. Today it is my pleasure to introduce Nikunj Kotecha. He is a machine learning solutions architect at BrainChip.
He's spent the past few years immersed in the Akida Neuromorphic Technology, and he's focused on bringing the power of their technology to assist implementing cutting-edge AI solutions. Prior to BrainChip, he worked at Oracle, where he was using new AI-driven processes to streamline clinical trials. With that, I welcome Nikunj. Actually, before Nikunj starts, just a reminder, please use the Q&A window for your questions. You can put them in as he's presenting, and at the end, I will, you know, work through the questions. Go ahead and use the Q&A window for your questions. Nikunj?
Are you able to see my screen?
Yep.
Great. Thank you, Ira, and thank you, tinyML Foundation, for organizing this hackathon challenge. It's been a really exciting way of driving the machine learning community forward and encouraging both new and existing developers. Welcome, everyone. I'm Nikunj, and welcome to this webinar. We're going to talk about our development kits and how we can help you complete this challenge. Before that, let me tell you a little bit about me. I'm a solutions architect at BrainChip, and I've been with BrainChip for over two years.
I've worked with the Akida technology, developing new models for the neuromorphic world and working with industry leaders to integrate their cutting-edge solutions alongside Akida. For today's agenda, I'll first highlight the problem statement and what this challenge is all about. I'll give a brief introduction to our company, who we are and where we are heading. I'll introduce the Akida platform and talk about our development kit so you have an understanding of what our platform is all about. We'll talk about how to send in proposals so you can get access to these development kits and get started on the challenge.
We'll cover our external resources and support platforms, and where you can reach us if you need any help. We'll also have some sessions for the selected teams who receive our development kit; I'll talk a little more about that. Towards the end we'll have a question-and-answer session. All right, this slide is taken from the kickoff deck from tinyML. The problem statement is that there have been a lot of pedestrian deaths throughout the country and throughout the world, and we're trying to solve this in a way where AI can assist by detecting moving people, moving bicycles, or anything else that can help avoid these types of accidents.
You can have different viewpoints, maybe from a vehicle, maybe from a pole; there are a variety of different solutions that can be implemented. Two examples were given in the kickoff deck: accidents at major intersections with multiple crossings, and fatal injuries that also happen between intersections. We try to address all of these scenarios by highlighting the problems and, at the end, trying to help or assist with AI models. For that, we have some judging criteria for this challenge.
We definitely want to see how many pedestrians you are able to correctly recognize, in different conditions, different lighting, different weather; depending on the scenario, you still want to be able to recognize these pedestrians. We also want to understand the flexibility of your solution. Where are you trying to install this? Is it on some sort of a pole? Is it at a different viewing angle? With that, it also comes down to the cost factor: how much are you reducing the cost with your innovative solution, not only in terms of your hardware silicon area, which is the BOM cost, but also in terms of installation and maintenance?
That's one of the key aspects: we want to do this task, but also want to understand your cost ratios. What's going to be the response time? How quickly can you react? Typically these have to be real-time because you have to take action immediately, so we also want to understand the response time of your solution. With that, let me introduce BrainChip. BrainChip is focused on AI solutions, and we're trying to take the AI solutions that today run on the cloud and bring them to the edge. We do that with our neuromorphic platform. Essentially, we are an IP-based company, which means we license our technology to customers who then integrate our solution with their own silicon. We are the first to commercialize this neuromorphic technology.
We are worldwide leaders in the development of AI chips for event-based processing and learning at the edge. We have 15 years of AI research amongst our co-founders, who are mainly focused on neuromorphic work. We have centers of excellence in the U.S., Australia, France, and India, and a global presence, spread out worldwide and working with many different solutions and partners. We are trusted by companies like MegaChips and Renesas, and you must have heard about Mercedes and NASA; all of them have used our technology in some fashion. We also partner with external AI solutions to help drive this industry forward.
We've partnered with a lot of different companies, some of them big names such as Arm and Intel. We have partnered with SiFive as well, because we are really independent of your host system: we can work with Intel, with Arm-based processors, and with RISC-V-based processors. There are other AI solution partners as well, such as Prophesee and Edge Impulse, and we've also worked with solutions partners like emotion3D, Viso, Texon, and AI Labs. As for our key markets, we're really targeting edge-based devices, and they can be segmented into different areas: industrial, automotive, health and wellness, and home consumer.
Because it's IP, we can scale to the performance criteria and constraints that each of these different applications might have. Let me talk a little more about Akida and the neuromorphic advantage that we see today in the industry. With Akida, we mainly do event-based processing on your neural networks. What that means is we only look at activity within your neural networks and do not do any wasteful computation that isn't required. We also have advanced spatiotemporal capability, so we can account for data where time is most valuable; in many scenarios this can be used for video analytics and predictive analysis, and all of these capabilities are handled inside our computation.
To support event-based processing, we also have event-based communication between the different layers inside the network, so we make sure we're only sending events through the neural network, not any wasteful information. We also have at-memory computation, which means compute is very localized to every core within our technology; that way you reduce data movement and reduce power consumption at the edge. Then, something that has long been missing at the edge: on-chip learning. We again do event-based learning, which supports learning at the edge, so you don't have to retrain any model in the cloud. If there is a new class that you want
and have identified, we have the ability to do that training at the edge, and it can be done in one shot or a couple of shots. The Akida technology is really meant as a self-managed neural processing unit. When you look at different sensors, whether vision, audio, olfaction, gustation, tactile, or any other sensor, at the end there is a pre-processing stage, and then a neural network that gives you a response out. It does not matter to us which sensor you're using, because at the end of the day we will be activating these neural networks inside Akida.
We'll be able to support any kind of sensor that you're trying to run. For this challenge, you might focus more on vision, or on other sensors like lidar or radar, or different sensors that can help you solve your problem. We can also handle complex networks. If they can run completely on Akida, we'll do that. If there are some custom layers that Akida does not support, we can offload those to the CPU, get the assist from your host, and still run the supported layers on Akida. That way you're still using it as an accelerator or co-processor to accelerate your neural networks, maybe not entirely, but at least most of the time.
We can operate standalone when we process the neural network: while the computation is running on Akida, we don't involve the host or CPU, purely because we want to limit the bandwidth requirements, limit the power consumption, and give you the best level of performance. When you hear about neuromorphic, you typically hear SNN, Spiking Neural Networks. I want to show you one classic demo, which is in-cabin monitoring. Here's our partner, NVISO, using five different models to do in-cabin monitoring. Let me actually start the video for you. They're identifying the face, different body points, and hand gestures; it's very useful.
If you are familiar with this solution, it uses at least four to five different AI models, and they're all CNN-based at the end of the day. What's unique here is that they were able to port all of their solutions onto our development board, which you can see here, using just a regular CMOS-based camera to run inference through these different models. They were initially using an NVIDIA Jetson device. If you compare the performance from an FPS standpoint, we were outperforming it on their best model, and also on the average across all of their models, by a lot.
And if you look, the scale between the different devices is not even equal: Akida is running at the lowest frequency, whereas the GPU is running at the maximum frequency, and the CPU even higher; that one is on an Arm-based device. This is a good comparison to show that, from a performance standpoint, yes, we can get you the performance you need. If you need real-time FPS, we can definitely get you real-time FPS with our neuromorphic technology. Why is that happening? A key differentiation between traditional deep learning accelerators and Akida is that when we perform our operations, for example these convolutions, we do them in the event-based domain. Compared to a DLA, which typically does matrix multiplication, we only focus on event-based processing.
When we do that, we are able to utilize sparsity, not just the weight sparsity of your layers but also the activation map sparsity, completely on the neural mesh. I have an example coming up in a later slide that shows how that is done. We're able to utilize sparsity throughout, so wherever it's present, it helps reduce the overall math computation that runs on the technology. We are able to run a full network without any CPU intervention, which is a bonus advantage when you're talking about power and bandwidth constraints. We also have a self-configuring DMA, which means it can configure a network by itself.
Again, we don't require any host CPU to help or assist us in configuring these networks. If you have a large model and want to configure it on a chunk-by-chunk basis, our self-configuring DMA will manage that completely by itself, without any assistance from the CPU. And as I said, the at-memory computation really optimizes for memory size and power, and then there's on-chip learning: we don't require any cloud retraining and can learn new classes on the fly at the edge. Here are some examples of how event-based processing works. This is an example of a convolution operation.
The top part shows traditional frame-based convolution, which is how a DLA computes. You have an activation map, in this case a 5x5 matrix, and a kernel, in this case a 3x3 kernel. To do the convolution you take the kernel, place it at each location, and accumulate the result, row by row and column by column, until you get the resulting matrix. When we move to Akida, we only focus on the events in the activation map. In this case there are 3 events, shown here, and we move the kernel only along those 3 events, so we only do 3 computations.
At the end, we still obtain the same result as the traditional frame-based convolution. The major difference is the amount of computation: in this case, a 90% reduction. If you think about an average model, you typically use batch normalization and ReLUs, all of which push a lot of the activation values to zero. A typical model may have about 50%-60% activation sparsity on average, which is the activation sparsity I'm talking about, so you cut roughly half the compute just by converting these models to Akida. Now, if you want to go a step further, you can penalize the model to be even more sparse, say 80% sparse, which reduces the computation further and also reduces your power.
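To make the idea concrete, here is a minimal NumPy sketch of the same kind of 5x5 / 3x3 example. The activation values are made up for illustration; the point is that the event-based form only visits the non-zero activations, yet produces exactly the same output as the dense pass, with roughly 90% fewer multiply-accumulates in this toy case (3 active positions instead of 25).

```python
import numpy as np

# Toy example mirroring the slide: a 5x5 activation map with only 3 non-zero
# "events", and a 3x3 kernel. Values are made up for illustration.
act = np.zeros((5, 5))
act[1, 1], act[2, 3], act[4, 0] = 3.0, 1.0, 2.0
kernel = np.arange(1.0, 10.0).reshape(3, 3)

def dense_conv(x, k):
    """Frame-based convolution (cross-correlation, zero padding): visits every
    output position regardless of how many activations are zero."""
    pad = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out

def event_conv(x, k):
    """Event-based view: only non-zero activations contribute. Each event
    scatters (its value * kernel) into the output, so zeros cost nothing."""
    out = np.zeros_like(x)
    for r, c in zip(*np.nonzero(x)):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < x.shape[0] and 0 <= cc < x.shape[1]:
                    out[rr, cc] += x[r, c] * k[1 - dr, 1 - dc]
    return out

# Same result, but ~3*9 multiply-accumulates instead of ~25*9.
assert np.allclose(dense_conv(act, kernel), event_conv(act, kernel))
```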
That sparsity naturally also helps with latency, because you're not doing as many computations as a traditional frame-based convolution. This is how we implement our neuromorphic design principles: take today's CNN-based solutions, which are the most advanced and give the best levels of accuracy, and apply them to our neuromorphic design. As for how this reduces CPU and memory pressure when we run neural networks on Akida: if you look at the orange line here, you have data acquisition, you have preprocessing, and then this is the portion where the neural network runs. When it's offloaded onto Akida, the CPU is not doing anything at that point in time.
You can really have a low-power-state CPU just doing your pre- and post-processing, while the major computation, which happens millions and billions of times, is offloaded onto Akida. The blue line here is an example of the same computation happening entirely on a CPU, without Akida: you're consuming a lot of CPU power, there's a lot of memory pressure, and developers have to write a lot of control logic, make sure the bandwidth is maintained, and so on. Here's a little example of efficiency. This is a model called AkidaNet/FOMO, taken from another partner, Edge Impulse.
This is a lightweight object detection model that does real-time object detection and gives you the localization of the objects you're aiming for. Here we're trying to identify the Skittles, the circular candies in this frame, and it's able to correctly localize where the Skittles are against all the different kinds of objects and candies you see here. It's very useful as a pre-screening algorithm, or in industrial use cases where objects are going over a conveyor belt. In your solutions, maybe you want to use this kind of pre-screening algorithm to detect whether there's a pedestrian in the frame at all, get the location where it would be, crop out that region, and then extract more information with a more complex algorithm.
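As a rough illustration of that two-stage idea, here is a hypothetical sketch. The names (prescreen, refine, detector, classifier) are placeholders introduced for illustration, not BrainChip or Edge Impulse APIs; the detector is assumed to output a coarse per-cell probability map, the way FOMO-style models do.

```python
import numpy as np

def prescreen(frame, detector, grid=8, threshold=0.5):
    """Run the lightweight detector over the whole frame and return the
    (row, col) grid cells likely to contain a pedestrian. `detector` is
    assumed to return a (grid, grid) probability map."""
    heatmap = detector(frame)
    return list(zip(*np.where(heatmap > threshold)))

def refine(frame, cells, classifier, grid=8):
    """Crop each flagged cell from the full-resolution frame and run the
    heavier classifier only on those crops, not on the whole image."""
    h, w = frame.shape[:2]
    ch, cw = h // grid, w // grid
    results = []
    for r, c in cells:
        crop = frame[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
        results.append(((r, c), classifier(crop)))
    return results
```

The expensive model then only sees a handful of small crops per frame instead of the full image, which is what makes this kind of cascade attractive for real-time, low-power pedestrian detection.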
The benefit is that we're able to run these kinds of models in real time with extreme efficiency, and that holds across many different varieties of models. In this case, it's less than 1 mJ per inference. We typically see microjoules to millijoules for many of these models, even larger-scale models such as MobileNets and YOLOs. Here are some examples of working with other types of sensors. Here we are working with point-cloud-based data; you can think of this as coming from a LiDAR frame, and you just have an RGB reference to see what the actual sensor is looking at. If I play this video, you can see that
we're working off this point cloud, which, if you're familiar with it, already introduces a lot of activation sparsity: unlike RGB, where you have values on every pixel, you only have points on the things that are actually in focus. That again helps with sparsity. Typically, when you build neural networks for these kinds of sensors, you'll have about 80%-90% activation sparsity, which is really big for us, because you'll be able to reduce power by a lot and also gain a lot on latency. On the right is event-based sensor data, as you would get from a Prophesee or DVS-based camera. Here's a street view from one, and we are applying object detection models to identify different objects:
this is a truck, this is a car, this is a bike. Maybe it's not coming through very well over Zoom, but event-based data, just like point clouds, doesn't give you pixels for everything; it gives you event intensities and different polarities for things that change from one moment to the next. It's a little different from a neural network perspective, focusing on the difference between timestamp 1, timestamp 2, and so on. So we've talked about and introduced the Akida technology. How can you use Akida today? We have MetaTF, which is our Akida machine learning framework. It is publicly available; you can visit doc.brainchip.com today to go through our documentation.
We have a lot of examples to help and guide you on how to convert your CNN models and port them onto our Akida platform. If you're an advanced user, you can look at the advanced tutorials; we have tutorials on edge learning as well. We have installation guidelines, user guides, and the constraints that go with our hardware, so you can take a look at those. Within MetaTF there are three Python packages. One is akida, which contains our runtime engine and does the model inferencing. The nice part is that alongside the hardware backend it also has a software backend. Let's say you don't have the kit right now, or you're still in development.
You're doing iterative development. Without even deploying the model on the development hardware, you can use the software simulator to convert the model to Akida, run inference in software, and check the accuracy, the losses, or how many hardware resources it would require. You can use all of these tricks before you ever deploy the model on the hardware. Once you do deploy on hardware, you can use the hardware backend, and it will give you hardware-level performance figures such as power and latency. The other Python package is akida_models, which has various APIs. One of them is the model zoo, which has a lot of different models for different solutions; you can use these for transfer learning.
If you are developing a solution and like a particular model that's there, use it for transfer learning; it helps you with faster training, faster conversion, and a solution that's well optimized for our hardware. There are other APIs in this package, such as knowledge distillation and model pruning. If you are focused on the later criteria of the challenge, where you want to minimize your BOM cost or your real-time latency, you can use those APIs as well. The third package, cnn2snn, is really a conversion tool: it converts your TensorFlow Keras models, which are CNNs today, into a spiking, Akida-based format so they can run on our hardware.
It also supports quantization, so you can work with 4-bit, 2-bit, or 1-bit quantization for both activations and weights, and it has extended support for quantization-aware training. All of this is extended from the TensorFlow library: whatever TensorFlow supports today goes down to 8 bits, and we have extended that to support 4 bits, 2 bits, and 1 bit as well. It's a really helpful tool for doing the right development for your application.
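Putting those pieces together, here is a minimal sketch of the MetaTF flow described above: quantize a trained Keras model with cnn2snn, convert it to an Akida model, and exercise it in the software backend before any hardware is available. The model path, input shape, and bit-width arguments are illustrative assumptions, and exact function signatures can vary between MetaTF releases, so treat this as an outline to check against the examples at doc.brainchip.com rather than a verified recipe.

```python
import numpy as np
from tensorflow import keras
from cnn2snn import quantize, convert  # MetaTF conversion/quantization package

# Load a trained CNN (placeholder path/shape for illustration).
keras_model = keras.models.load_model("pedestrian_cnn.h5")

# Quantize weights and activations (4-bit here) to fit Akida's constraints.
# Quantization-aware fine-tuning with the usual Keras fit() can follow this step.
quantized_model = quantize(keras_model,
                           weight_quantization=4,
                           activ_quantization=4)

# Convert to an Akida model. With no hardware device mapped, predict() runs in
# the software simulator, so accuracy and resource usage can be checked early.
akida_model = convert(quantized_model)
akida_model.summary()

dummy_input = np.random.randint(0, 255, size=(1, 96, 96, 3), dtype=np.uint8)
print(akida_model.predict(dummy_input))
```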
Now, about the development kit. Again, we are an IP-based company, but we do have silicon, which we use as a reference design. It's called AKD1000, and we use it for prototyping and demonstrating solutions, for exactly these kinds of problems where you want to show something working on neuromorphic hardware. The silicon contains several different blocks: an M-class CPU, the Akida mesh, which is our neural network accelerator, and other I/O ports for getting inputs from your host. Around this reference chip we have multiple development environments. One of them is a PCIe board with our chip on it, which gets its input from your host. Right now we have a development kit that pairs this board with a Raspberry Pi running Raspberry Pi OS.
It uses an Arm-based processor. When you submit your proposals, the best three will receive our complete development kit, which is the Raspberry Pi together with this board. That gives you a setup with the entire Akida platform: we have some sample demos so you can see how we do things and play with them, your environment will be set up, and your drivers and everything will be in place on the system, so you can take your solution and deploy it straight onto the development kit. For the remaining proposals that do not get a development kit, we will ship just this PCIe card with our chip on it.
This is the AKD1000 mini PCIe development board. You can use it on any Linux-based system that has PCIe support. We will give you guidelines on how to install the drivers; you'll be able to install them and work with the board, and you can even use the same Raspberry Pi or other kits, but you will have to do all the setup basically on your own.
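Once the drivers are installed, moving from the software backend to the PCIe card is mostly a matter of mapping the converted model onto a device. A minimal sketch, assuming the akida package's devices() and Model.map() entry points and a previously saved .fbz model; the file name and the exact contents of the statistics output are assumptions that may differ with the MetaTF version on your kit.

```python
import numpy as np
import akida

# Enumerate Akida devices visible to the host (e.g. the AKD1000 PCIe card).
devices = akida.devices()
print("Available Akida devices:", devices)

# Load a previously converted/saved Akida model (placeholder file name).
akida_model = akida.Model("pedestrian_model.fbz")

# Map the network onto the hardware mesh; inference now runs on the chip
# instead of the software simulator, using the same predict() call.
akida_model.map(devices[0])

dummy_input = np.random.randint(0, 255, size=(1, 96, 96, 3), dtype=np.uint8)
outputs = akida_model.predict(dummy_input)

# After mapped inference, per-run statistics (throughput, energy) can be read back.
print(akida_model.statistics)
```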
Now, how do we go from a concept to a delivered AI solution with these kits? You can evaluate your models using MetaTF, or you can use our partner, Edge Impulse; we have integrated our solutions onto the Edge Impulse platform as well, including the development kits, so you can evaluate how your solution looks end to end. You can then design your solution around the development kit you're targeting and develop it specifically for our AKD1000 chip, deploying it with MetaTF or using Edge Impulse as the development platform. If you're focusing on the IP, this is where you can scale your solution: since this is a reference design, you can use it to understand what the scaled version would look like. You may have a model that does not use all of the cores, only a portion of them.
We have a way to shut down the unused portions and only use what you need, so you can scale up and scale down based on your models and your application requirements. We can work with different kinds of sensors. I already showed you camera-based, point-cloud-based, and event-based sensors; there are others, such as radar, time-of-flight, and IR sensors. All of these can work with us, and no matter the solution, they will all work with event-based processing. Some might work better than others, depending on the sparsity levels and on your power constraints as well.
Some might require more accuracy, so you might even want to fuse multiple sensors into one sample image and run inference on that. For external resources, we have a YouTube page; search for BrainChip Inc. It has a lot of workshops and a lot of the webinars we have done in the past, plus demos that can help you understand what else you can do with Akida. If you visit our website, we have a variety of blogs, some with our benchmarking performance, some explaining neuromorphic computing in more detail from one of our co-founders.
We have a variety of blogs covering different topics that might interest you. Check out Edge Impulse as well: we have integrated our Akida platform onto Edge Impulse, so you can build a solution with their no-code ML design studio and deploy it onto one of these development kits. Even before deploying, you can just get the model out of the studio, so you can use that platform as well. In terms of schedule, today we did the development kit webinar, and the link for sending your proposals goes live on the tinyML website.
We'll probably post the link in the chat as well. If you go to the tinyML website at tinyml.org/event/tinyml-hackathon-2023 under pedestrian detection, you'll see the link live. There'll be a Google Form where you can submit your proposal, and we'll go through all the proposals. The last day for submission is June 2nd. After that we'll review all the proposals and select which teams receive the development kits with the Raspberry Pi and PCIe board, and which receive just the PCIe boards. We'll be able to ship them by June 8th; we'll contact the individual teams to get the right details, and we aim to ship everything in that first week of June. After the shipments, we'll have training sessions.
We'll have a two-hour session, a detailed session explaining MetaTF, our engine library, and how to work with it, with an AI expert on the call. We'll run these training sessions specifically for the teams we've given development kits to, so if you have any questions, you can get them answered by the AI expert right away, and we'll make sure you have the right level of support. These are the optional deadlines from tinyML for the challenge. After the deadlines pass, you have the shortlisted winners and presentations happening in the October timeframe.
For any other support after the training session, starting today this email will be active: bc-tinymlhackathon@brainchip.com. If you need any help with our development kits or have any questions for us, please email us at this address. One of our AI experts will get back to you, help resolve your questions, and explain concepts as well. We'll aim to support you around the clock, but depending on time zones and where the AI experts are, expect a response from us within 24 to 48 hours. With that, thank you so much for attending. If you need more help, you can visit our website, brainchip.com; our MetaTF documentation is at doc.brainchip.com.
Please submit your proposals if you're interested in getting one of our development kits for this challenge. If you need any support, there's the support email you can always write to, and we'll get back to you. Ira, over to you.
Great, Nikunj. Thank you. We've been answering some of the questions via written answers, but let's talk about some of the more detailed ones. One attendee asks whether you can couple Akida with a Jetson TX1, and how one can benefit from a hybrid system, with Akida on PCIe plus the GPU. Could you talk about that a little bit?
Yeah, that's a great question. Yes, you can couple them. Because it's a PCIe board, it still requires a host that stores your models and provides the inputs. If the Jetson board has a PCIe slot, which I believe the one you described has, you can hook up the PCIe board. Our drivers support Linux, and the Jetson runs a Linux-based platform, so we can install those drivers, and we can help you if there are any problems with installation. In terms of model inference, a lot of networks run entirely on Akida.
In case you write custom layers that need GPU help, or if you need the GPU for your pre-processing or post-processing, you can couple them and build a nice application where a portion runs or is accelerated on the GPU for pre- and post-processing, while the neural network is accelerated on Akida. That way you use the GPU only when it's really needed and Akida when you're accelerating the neural networks, and you reduce your overall system power consumption from an application standpoint. That's a good question.
Good. Good. Okay. That answered that one. Okay. You talked about people submitting proposals for getting free dev kits. One person in the audience asked whether they could just purchase a dev kit. Is there a link or some place, or should they just use the support email to get a link if they just want to buy one?
Yeah, that's a good question. Yes, we do have development kits available for purchase. If you visit our website, brainchip.com, there's a section where you can look at our enablement platforms and the portfolio of development kits for purchase. In addition to the Raspberry Pi, we also have an Intel-based development kit built around an Intel Core host. That's useful for applications that might require a more powerful CPU. You can purchase them, and if you have any difficulties, email us and we can send you the purchase link as well.
Okay, perfect. All right. Then there was a question, if you want to comment: Sony talked about their vision-based system, and Infineon talked about their radar system.
Mm-hmm.
You know, the question, how well could you support those on the platform, and what level of integration is there? I mean, we haven't done a close integration, but maybe you can talk about generalities there.
Definitely, we've not done a close integration with the other partners for this challenge. But with Akida, we support Linux-based OSes, so whatever host those sensors work with, the Infineon ones and the Sony ones, we can plug in our board for the neural network acceleration portion. You can take the inputs from those sensors on the provided host platforms and then run the neural network inference on Akida. So yes, you can combine both sensors from Infineon and Sony and create a nice fused sample before you submit it to your neural network for acceleration. That may even give you a better level of accuracy, if that's what you're looking for.
Yes, you can combine them and use them with Akida. We don't restrict inputs coming from a variety of different sources. At the end of the day, wherever the raw image or sample comes from, there's going to be some pre-processing, maybe downsizing, maybe normalization, maybe other pre-processing that you use for your application. Good question.
Good. Okay. This one may be a little off topic, but they asked whether there's an example in the samples and in the docs for doing speech recognition with the Akida platform. Probably not directly relevant to the contest, but is there a place people should go to see examples and-
Mm-hmm.
-demos?
Good question. On the MetaTF platform, we do have a lot of examples. We have the vision and object detection examples that will help you with this challenge, but we have others as well, such as speech recognition and audio classification, and we have examples for vibration recognition. In our model zoo, we also have models for all of these different tasks. For this challenge you'll probably be looking at vision and object detection models, and we have a zoo for that; if you're looking at other use cases outside this challenge, we have models in the zoo for those as well. That's the right place to look for examples and for what you can do with different kinds of solutions.
Good. Okay. Rosina just posted the corrected link, forms.gle, the Google Form for dev kits, or you can access it through the tinyML website. I think that's all the open questions. Obviously, people can ask directly or through the forum. Thank you, Nikunj. Let me take the screen back. Let's see. Okay. We will be posting this information on the contest platform, and we will also be posting the video on our YouTube channel in the next day or so at youtube.com/tinyml; this will be archived there. In general, if you have questions about the hackathon or you want to discuss with other participants, please use the tinyML forum at forums.tinyml.org. This is the specific topic thread, but you can find it from the homepage.
If you have general inquiries about tinyML or about the organization of the challenge, please reach out to Rosina. Once again, I would like to thank all the tinyML strategic partners who make this and all our other activities possible. In particular, I'd like to thank the executive strategic partners, including Edge Impulse, the leading development platform for edge ML; Qualcomm AI Research, advancing AI research to make efficient AI ubiquitous; and Syntiant, accelerate your edge compute, making edge AI a reality. Our platinum strategic partners include Renesas and Sony AITRIOS. Our gold strategic partners are Analog Devices, Arduino, Arm, Infineon, Innatera, Microsoft, SensiML, STMicroelectronics, and Synaptics. I'd also like to thank our silver strategic partners, Aizip, BrainChip, GreenWaves, Graviti, IBM, Imagimob, Nota AI, NXP, Polyan, Kioxia, Schneider Electric, and Silicon Labs.
Most importantly, I'd like to thank you all for joining us today and look forward to having you participate in a future tinyML program or event. I wish everybody a good remainder of your day, and thank you again.