Hello, everyone. Welcome to our virtual launch event today of our groundbreaking new hearing aid technology. I'm sure you've all seen our announcement this morning, where we introduced a new flagship hearing aid across all four brands in our hearing aid business. We're very excited to be here today. I have President and CEO Mr. Søren Nielsen with me right now at the table. We also have Finn Möhring, our Head of R&D, and Thomas Behrens, Chief of Audiology, with us, and they'll also be going through the new technology with us. Normally, we would meet you face to face for a session like this. 2020 is very different, so it will be virtual. I'm sure that will work, but we also look forward to meeting you again on the road, hopefully sometime next year. So the agenda for today looks like this.
Søren will give you a brief introduction to what it's all about in a minute. After that, we'll have Finn Möhring coming on to tell you about the hardware side, the chip platform. Then Thomas Behrens will explain the audiological thinking behind the new products, and Søren will take it back to discuss the actual launch and rollout of the products. It's important to mention that we'll be using a standard dial-in setup, so you will be using your phone to dial in. The phone lines are open, and you can start queuing up. You can find the dial-in information in the investor news from this morning and, of course, on our website.
So please stay on the audio signal from this webcast to begin with and throughout the presentation, until we actually switch to Q&A, because there may be a slight delay between the webcast audio and the phone line, and therefore an alignment issue with the video. So stay on the webcast for as long as possible. We have set aside a maximum of 90 minutes for the entire session, including Q&A, so you should have ample time to go through it all. That's all from me. So Søren, please take it away.
Thank you very much, Mathias, and a warm welcome, everybody, to this great and fantastic day, where we can shed light on the new product groups, the new generation of hearing aid technology that we start launching across the world today. These are things we are, of course, very excited about, and today we will try to share with you more of the flavor of what we believe makes these products and these many product families so unique and groundbreaking. Once again, we have taken advantage of new, very advanced technology in order to deliver better outcomes for the hearing impaired all around the world, which is the core mission of the company. Before we do so, just a few words, and it will be very few, on the continued corona situation.
We have, as you're all aware, seen a growing number of cases here in the fall, not least in the last two or three months, in Europe and also in North America. The industry as such has found a way to deal with COVID. We have been allowed to stay open, and senior citizens have not been told anywhere to stay completely at home. So far, we have not seen any material impact from the latest infections. However, it is worth noting that a market like the U.S. is still recovering more slowly than the rest of the world, is still dealing with the pandemic, and has not yet reached last year's level. But all in all, the recent developments have not led to any significant changes in the way we operate the business. That's it for COVID for now.
Now we move on to the new hearing aid technologies, and I'm going to share a few highlights before I pass the word on to the experts, who can take you into further detail. What we present today is truly groundbreaking and very innovative. We have once again looked deep into tomorrow's technologies and asked: how can we, as early as possible and with the greatest possible impact, apply them to hearing aid technology and make sure they can be utilized to make better products for people who are hard of hearing? And the core of what we try to achieve is the audiology. It is helping people hear better with less effort in the most difficult listening situations. We, of course, also join the continued progress in rechargeability, in connectivity, and in how you do fitting and service systems, including remote care.
These are all things that are improved in the new technology platform, and we'll also talk a bit about them. But the core is the audiological progress. That is where we move the bar and where we see the true difference between different products. As I've said before, most of the things around it will, over time, quickly become commodity. We basically build connectivity, rechargeability and, to some extent, cloud utilization to enable remote care out of exactly the same technologies and tools as everyone else. So the real industry insights come out when we talk about the audiological solutions, and that's also what we will try to shed light on today. Why, when designing hearing aids, is hearing aid technology not just generically available technology that you, for instance, could also use to build a headset?
It is because there are some very unique and important characteristics of the problem we try to solve that move way beyond what an ordinary headset would require. There is, of course, audibility, being able to hear things. There is comfort for the user, so that it does not become disturbing, with some things amplified too much while others are too weak. There is orientation: where are things coming from? Am I confused because, all of a sudden, everything sounds like it comes from the middle of my head? There is clarity, so that it doesn't mumble, so that sounds are not meshed together when different technologies try to compress things so you can actually hear them. The smaller details, the nuances, the fine details of the sounds are the ones that end up making the difference in whether I can get meaning out of things.
And then, of course, there are side effects such as feedback, occlusion, and a number of other things that we try to eliminate, so that the potential negative side effects of wearing a hearing aid don't undermine the core mission. Today there is also a lot of focus on listening effort. One thing is to hear things, another is to recognize a word, but remembering what's being said and making meaning and context has to do with the brain load. It is ultimately the brain that makes sense of things; meaning is not something that comes directly from the sound. And why is the technology difficult? It's a lot about very low distortion. Things have to be absolutely quiet when they're not doing anything, not disturbing the signal. A lot of the other sounds that we stream and download today are not pretty.
If you take a careful look, we are working in a totally different class of fidelity. And there can be no signal delay: as a hearing-impaired user, you will still hear part of the sound coming naturally, so if the amplified signal is delayed, you will hear an echo. Very low power consumption is needed in order to support a full-paced life without the instruments becoming too big. High resolution, precision, non-linear amplification: all these things together are what it takes in this industry to build a platform that moves the bar from what we did in the last generation, and this is an ongoing recipe. There's very little change around that. What we do is bring in new technologies in order to move the bar.
What we have stepped into this time, and which is a really big step of progress, is the utilization of a modern deep neural network: a learning, pre-trained network that can help and support the device in knowing what things are. Until now, sound has just been zeros and ones, and we have tried to build in algorithms to ask: is this speech? Is it non-speech? And so on. Self-driving cars are a fantastic example of the same challenge. When are things moving? When can they not move? What are they? Getting meaning out of things is different from just having a picture of what things are.
Deep neural networks are a new door opener into a world of systems that can become intuitive, that actually know what is around you: what is a speaker, what is a car, what is a fan, what is background noise. And why is that important? It is because that's part of making context. That's part of helping the hearing aid not be too slow, not make mistakes, and present a fuller and richer picture to the user without it all becoming blurred so that nothing meaningful comes out of it. Some things are highlighted and made bigger, some are made smaller, and all this comes from the deep neural network's ability to know much more precisely what we are actually listening to. It's not trying to predict what you want to listen to.
It's not trying to make a choice for you, but takes a step further in making sure your brain accesses the signals it needs to make the best possible meaning out of things, with the least possible effort. This is something we started with Opn, something we have seen great progress on, and where we now, through this unique, first-in-the-industry technology, take another very big and important step forward. With that, I'll pass the word on to you, Finn, Senior Vice President of R&D, Hearing Aids. You have worked with us for a number of years, and I'm sure you can shed more light on what it actually is we do.
I will do my best. Thank you, and good morning from me as well. It's a super exciting day for me, for our team, and also for me personally, to finally be able to talk about what we've been working on for quite a while. Today, we are announcing our completely new hearing aid platform, called Polaris, which has been engineered from the ground up and basically gives us opportunities for signal processing that have not been possible before and that we could only envision some years back. If you take a look at what we are actually doing here, we are employing a flexible hearing aid chip architecture consisting of seven cores, something we pioneered in Velox with the same type of architecture, but now taken to the next level.
Having these multiple cores allows for optimized power consumption and also flexible signal processing. That is, as the hearing environment gets tougher and tougher to deal with, we can employ more cores, and in easier environments, we can employ fewer cores. In total, we have seven processing cores in our architecture. Besides being power-efficient depending on the sound scene, the other thing this allows us to do is to allocate cores specifically to the deep neural network processing that Søren was talking about. Hence, we can actually have a mixed architecture on our platform, which is quite unique in the industry. Let's take a look at some of the performance parameters of the platform.
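As an illustration of the scheduling idea described above, a core-allocation policy that scales with scene difficulty can be sketched in a few lines. This is a toy model only: the function name, the difficulty scale, and the linear mapping are all hypothetical, not Demant's actual scheduler.

```python
def cores_to_activate(scene_difficulty: float, total_cores: int = 7) -> int:
    """Map a difficulty estimate in [0, 1] to a number of active cores.

    Easy scenes keep most cores idle to save power; tough scenes
    engage more of them (hypothetical linear policy).
    """
    if not 0.0 <= scene_difficulty <= 1.0:
        raise ValueError("difficulty must be in [0, 1]")
    # Always keep at least one core running; scale the rest linearly.
    return 1 + round(scene_difficulty * (total_cores - 1))

print(cores_to_activate(0.1))  # quiet room: few cores
print(cores_to_activate(0.9))  # busy restaurant: most cores
```

A real scheduler would of course weigh many more signals than a single difficulty number, but the power-versus-processing trade-off it manages is the one described in the talk.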
So basically, on board the platform, we have a pre-trained deep neural network that is capable of identifying sound scenes and applying the signal processing based on that. Compared to Velox S, the platform has 2x the processing power, i.e., 100% more. We have 8x the amount of memory, which basically enables us to work with this huge number of sound scenes. And we have employed 28-nanometer chip technology; the technology we have selected here, we think, gives us the best trade-off between power consumption and performance, and the ability to employ a really high level of signal processing when needed.
The platform has also been updated in terms of security, which means that a larger share of our encryption now happens in hardware compared with the previous platform. Compared to Velox S, we are now at a level of 154 million transistors, which is between 2.5x and 3x more than what we had on the previous platform. In terms of the sound processing itself on the neural network, my colleague Thomas Behrens will go into further detail about how we do it, but we can say this much at this point: we have trained the network with 12 million individual sound scenes to make it capable of handling basically all the sound scenes that you, as a user, will meet in your daily life.
Based on that, we can then apply the processing. We scan the sound environment 500x per second and use the deep neural network to identify the sound scene in real time. Additionally, on the signal path we have 64 signal processing channels, which is still the most in the industry. These are some key figures for the platform itself. If we then take a look at the performance, how good is Polaris compared to Velox and Velox S? When we benchmark them on our performance index, if Velox and Velox S are around 100, then Polaris is at 1,600. These are some of the rough figures on what we are bringing out today.
I can say that we in R&D are super proud of what we have achieved here. It has taken really long hours and long nights to get to this point. And improvements in the signal processing alone wouldn't do the job. On the radio side, we also have a completely new radio processor inside the hearing aid, engineered from the ground up. If you are a radio nerd: this processor has a sensitivity that is close to twice as good as on the previous platform. We are also able to increase the output power significantly beyond what was possible on the previous platform, and beyond what is basically known in the industry today, which allows for a much better link budget.
From the beginning, the Polaris platform supports both MFi, as we know it today, and ASHA streaming, the audio streaming for hearing aids that supports Android devices. However, it is also fully prepared, as I'll show on the next slide, for the coming Bluetooth LE Audio standard, and it can be made capable of supporting LE Audio by software update. The standard is expected to be finalized in 2021, after which we will start to see all sorts of devices jumping onto it. And just to reiterate what we have communicated before on why LE Audio is the future of connectivity: basically, it is engineered for wearables and for low-power devices.
We have the new audio codec called LC3, which, as has been verified through sound and listening tests, is capable of providing better sound quality than the classic Bluetooth audio codec can today, and that even at lower data rates than what is possible on classic Bluetooth. With lower data rates, we also get much lower power consumption, again optimized for our applications. The other great thing about LE Audio is that it supports multi-stream. That is, it's engineered for several devices receiving synchronized but independent audio streams, which is exactly the situation when you're wearing two hearing aids. This is a significant advancement compared to what we know from classic Bluetooth, and it goes without saying that this makes it well suited for hearing aids.
And then the final thing is that LE Audio also supports broadcasting, which means that we are going to see a multitude of applications where devices with LE Audio implemented are capable of hooking up to announcement systems, in theaters, and similar. That is built into the standard from the beginning. And again, let me emphasize that our platform is prepared for the standard and can be updated through software when the standard is available. With those words, I will give the word to you, Thomas, to explain how we then employ the platform in terms of the audiology.
Thank you very much. I'm very proud to be able to explain to you today how we've created a new perspective in hearing care, using the newest knowledge and insights from neuroscience about how the brain makes sense of sound, to create new benefits for people with hearing loss. As some of you may know, we've been on a 25-year journey at Oticon developing these benefits. If we look at what we've achieved over the last decade, we started out looking at conventional benefits such as speech understanding, but then took that to a new level, documenting objectively how we can reduce listening effort in the brain by means of the technologies in the Oticon Opn. And we even documented improved recall.
Then with the launch we did last year of the Oticon Opn S and those innovations that came along, we were able to document more speech cues to the brain, which enhanced one of the strongest abilities in the brain, selective attention, to become even better. And now we're ready to take the next step. We've been using cutting-edge auditory neuroscience technologies to document that the innovation in Oticon More can provide more and clearer sound to the brain. So the brain is even further empowered to make sense of sound with even less effort. And we have two technologies that are driving this: the MoreSound Intelligence that embeds the Deep Neural Network, and then the MoreSound Amplifier, which amplifies it all in very high resolution so it becomes available to the individual person with hearing loss.
But above all, what we are aiming for here is to get as close as possible to natural hearing. And we've created a little video to try to capture what experiences we want to create for people with hearing loss.
For too long, conventional hearing aids have limited people's sound experience. To change this, we need to give you access to more of the meaningful sounds around you, sounds that make you want to socialize, more of the beautiful moments that sounds bring to life, more of every little detail in perfect balance and clarity, more freedom to explore the world without worrying about running out of power, more possibilities to connect with your favorite devices and all the things you love listening to. Jump into life's amazing complexity with Oticon More. It's time to get more out of life.
This is what it's all about: helping people get more out of life. Therefore, we are happy to introduce Oticon More, the world's first hearing aid to give the brain the full perspective. The insights we have from neuroscience, telling us how the brain works, give us the ability to use the new capabilities that Finn just talked about in a whole new way. The brain can get much more detail from the surroundings, and at the same time, we make it easier for the brain to make sense of the sound. What has enabled us to do this has been very, very rapid technology development. Traditional hearing aids, with conventional technologies such as directionality, feedback management, noise reduction, and compression, were mainly designed with the function of the ear in mind.
But over the past six years or so, we've fully reinvented all the core technologies that those four represent. With the Oticon Opn, we took a significant step forward by replacing directionality with the OpenSound paradigm. With Oticon Opn S, we replaced feedback management with feedback prevention, thereby making all the dynamic sound environments of life available, not only the static ones where people are sitting still at home. This brought a huge step forward in the experiences people had with that hearing aid. And now we are ready to take the last step with Oticon More, which completes the whole cycle of reinventing all the core technologies that hearing aids come with. We have now replaced noise reduction and compression with a deep neural network and high-resolution amplification, and this is what enables us to deliver signal processing on the brain's terms.
So it becomes much easier for people with hearing loss to make sense of sound. The insights that I've been alluding to a couple of times are gathered in this slide, where you see how sound travels from the ears via the peripheral hearing system, via the auditory nerve and the brain stem, which creates a neural code, and then that signal enters the brain. Until now, the hearing system in the brain was really a black box, but very recent neuroscience has unfolded that black box. We now know that the brain is doing two very separate activities in its auditory system. First, it is orienting to form a full sound scene, and that then allows the brain to focus in on whatever is of most interest in that situation.
As that process happens, it increases the separation, so that other sounds in the environment are not confusing or competing for attention. And that's very important, because only what is in focus gets into the language system in the brain, where recognition can start. So by ensuring good orientation and good focus, it becomes as easy as possible for the brain to recognize sound and make sense of it, and to further strengthen those abilities, as the white arrows here indicate: once you've got control of something, you tune in and sharpen your senses. The technology this has led to is a fundamentally new approach to sound processing. Instead of starting at the desk designing new algorithms, we completely revolutionized the process.
We started out recording sounds from real life, using a very special microphone that can capture all the details from all directions. We used those sounds to create a well-curated library of 12 million sound scenes. We then trained the deep neural network that Finn was talking about, using a dedicated learning algorithm designed to take into account all the knowledge we have accumulated about how hearing aids should sound. The learning algorithm teaches the deep neural network what sound we want to get out of the hearing aid by looking at all 12 million sound scenes and optimizing the detailed structure inside the network: strengthening some connections, weakening others, and removing those connections that are superfluous.
The result of the training is then taken into the hearing aid in step three, where it does the analysis and balancing of the sound scene so it becomes as clear and precise as possible, before we amplify it in the final step. That final step is also a new innovation, which allows us to amplify sounds with up to 6x more precision than we were able to with the Oticon Opn. There are a lot of details in this new innovation, but again, we've prepared a video that explains the purpose and how we've trained the deep neural network.
When you or I are in an environment like this, we immediately recognize the sounds. We know which ones to focus on and which ones to ignore. But how do I get a hearing aid to do this? It would be easy if I just wanted to talk to Peter and didn't care about the rest. The hearing aids would just shut everything else down. But that would confuse me because I can see things happening but struggle to hear them. I want to be able to hear what's happening around me, like my friend shouting that my barbecue is ready, the guy playing the tuba on a unicycle, my daughter when she wants my attention, the bird in the sky, and everything else. It's a very complex task for a small hearing aid. Remember, sounds are constantly moving.
After years of research, we have found that the best way to solve this problem is to train a hearing aid with millions of real-life sound samples. This way, it can learn to recognize each type of sound and their relative importance.
To do this, we use something called a deep neural network. It's a method based on the way your brain works, which learns not by simple rules but through repeated experiences. And it looks something like this. A deep neural network consists of different layers with thousands of connections. Each layer feeds information in one direction, from input to output. The neural network does not see the soundscape shown here; it only receives a complicated mixture of sounds. In the case of a sound scene, the first layer extracts simple sound elements and patterns from the input. The next layer then builds these elements together to recognize and make sense of what is happening. Finally, based on all of this, the output layer chooses how to balance the sound scene. This is then compared to how it should ideally sound.
But here's the twist, and this is what's so groundbreaking about deep neural networks. At first, the neural network has no idea what it's listening to. It only hears a complicated mixture of sounds, so it just responds randomly. But every time it balances the sound scene the wrong way, information flows back through the network, saying the balancing is wrong. Anything that was supporting that decision gets its connections weakened a bit, and anything that was supporting a more ideal balancing gets its connections strengthened. It does this over and over again until, millions of sound scenes later, the neural network has taught itself to process any environment presented to it. Until now, hearing aids have been designed using simple man-made rules that categorize all sounds as either speech or noise. Based on how clear speech was in relation to noise, the hearing aid would then choose to either let all sounds in or shut all sounds out, except the one in front of it.
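The strengthen-and-weaken loop described above is, in standard machine learning terms, gradient-descent training with backpropagation. A minimal sketch follows, using a toy two-layer network and an invented two-feature "sound scene" classification task; nothing here reflects Oticon's actual 12-million-scene training setup.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy "sound scenes": two features each; target 1.0 for speech-like
# scenes, 0.0 for noise-like ones (an entirely hypothetical rule).
scenes = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
targets = [1.0 if a + 0.5 * b > 0 else 0.0 for a, b in scenes]

H = 4  # hidden units
# Random initial weights: at first the network "has no idea" and
# responds essentially at random.
w1 = [[random.gauss(0, 0.5) for _ in range(H)] for _ in range(2)]
w2 = [random.gauss(0, 0.5) for _ in range(H)]
lr = 0.3

for epoch in range(100):
    for (a, b), t in zip(scenes, targets):
        # Forward pass: each layer feeds information toward the output.
        h = [math.tanh(a * w1[0][j] + b * w1[1][j]) for j in range(H)]
        out = sigmoid(sum(h[j] * w2[j] for j in range(H)))
        # Backward pass: the error signal flows back through the network.
        err = out - t  # nonzero whenever the "balancing" was wrong
        for j in range(H):
            dh = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]   # weaken/strengthen output links
            w1[0][j] -= lr * dh * a    # ...and the links feeding them
            w1[1][j] -= lr * dh * b

# After many repetitions, the network has taught itself the rule.
correct = 0
for (a, b), t in zip(scenes, targets):
    h = [math.tanh(a * w1[0][j] + b * w1[1][j]) for j in range(H)]
    out = sigmoid(sum(h[j] * w2[j] for j in range(H)))
    correct += (out > 0.5) == (t > 0.5)
print(f"accuracy after training: {correct / len(scenes):.2f}")
```

The real system learns to rebalance entire sound scenes rather than emit a single label, but the mechanism, random responses gradually shaped by a backward-flowing error signal, is the same.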
We took a new perspective and broke out of the lab to collect sounds from real-life situations with a highly specialized 360-degree microphone, and after training our deep neural network with all these sound scenes, we built it into our new hearing aid, Oticon More.
So when a person with hearing loss uses Oticon More to enter this sound scene, the hearing aid then utilizes its deep neural network to recognize the sound scene and make sure the sounds are clear and precisely balanced, all while taking the user's hearing loss into account. With access to the full sound scene, it's much easier for people with hearing loss to hear more and to enjoy life's amazing complexity. That's the deep neural network in Oticon More.
It's really the deep learning abilities we've created that allow us to take the experience to the next level with Oticon More. But let's take a look at how we really use the deep neural network inside the four features of Oticon More. What you have here is our new feature, MoreSound Intelligence, which replaces the OpenSound Navigator that some of you may know from Oticon Opn and Oticon Opn S. It is what ensures access to the full sound scene, with clear contrast and balance. Thereby, it helps people with hearing loss with the number one complaint they still have, namely being able to hear well in noisy and complex environments. First, the hearing aid scans the sound environment to figure out whether that environment is considered easy or difficult for the individual person.
That classification is pre-programmed into the hearing aid based on the fitting that was done with our fitting software, Genie 2. Then, in the easy and difficult signal paths, we treat the sounds differently. If the scene is easy, we first create spatial clarity using a technology we call Virtual Outer Ear: a very precise model of the pinna, the outer ear, which ensures that people can hear where sounds are coming from. Once that spatial clarity has been established, we use the deep neural network in what we call Neural Clarity Processing to remove any remaining noise that may be disturbing to the user. If the hearing aid determines that we are in a difficult sound scene, we use different technologies.
First, we use the Spatial Balancer, which localizes disturbing sounds and turns down their level. After that, the neural network reanalyzes the sound scene, and if there are further sounds that need balancing or turning down, the neural network does that. The combination of these two steps is what creates the clarity, even in difficult scenes. And it's in these scenes that we see the deep neural network really taking the experience to the next level. In terms of noise reduction, it's 2 dB more precise, which provides a lot more information to the brain because we can suppress the noise more. After we've created the clarity, we then amplify the sounds by means of the MoreSound Amplifier.
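As a rough illustration of the two signal paths just described, the dispatch logic might be sketched as follows. All function names (virtual_outer_ear, spatial_balancer, dnn_rebalance) and their stub bodies are hypothetical stand-ins for the features named in the talk, not Oticon's API.

```python
def virtual_outer_ear(audio):
    # Stub: would apply a precise pinna (outer ear) model for spatial cues.
    return audio

def spatial_balancer(audio):
    # Stub: would localize disturbing sounds and turn their level down
    # (shown here as a simple attenuation).
    return [0.8 * s for s in audio]

def dnn_rebalance(audio):
    # Stub: would run the onboard deep neural network to reanalyze and
    # rebalance whatever remains in the scene.
    return audio

def process_scene(audio, difficulty, threshold=0.5):
    """Dispatch between the 'easy' and 'difficult' paths by scene difficulty."""
    if difficulty < threshold:
        # Easy path: spatial clarity first, then neural cleanup.
        return dnn_rebalance(virtual_outer_ear(audio))
    # Difficult path: attenuate localized disturbances, then let the
    # network reanalyze and rebalance what remains.
    return dnn_rebalance(spatial_balancer(audio))

print(process_scene([1.0, -0.5], 0.2))  # easy scene
print(process_scene([1.0, -0.5], 0.9))  # difficult scene
```

The point of the sketch is only the control flow: both paths end at the neural network, but the difficult path does a spatial clean-up pass first.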
This is the new high-precision amplification that we use to make the full sound scene audible, with all the important details. On the left-hand side, you see how conventional compression algorithms elevate the level of sounds, but do so in a way that squeezes together the dynamics of the sound, making it hard for the brain to access. On the right-hand side, you see the result of our new MoreSound Amplifier, which also elevates the sounds, but does it in a way that preserves the dynamics and the important details. The sounds are clearer and more robust to what is happening around the user, and it's easier for the brain to separate the sounds from each other and focus in on them, as we were talking about earlier.
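The left-hand panel's point, that compression lifts soft sounds more than loud ones and thereby shrinks the level contrasts the brain relies on, can be illustrated with a toy compressor. The knee, ratio, and gain values below are arbitrary examples, not a fitting rationale.

```python
def compress(level_db, knee_db=50.0, ratio=3.0, gain_db=20.0):
    """Output level (dB SPL) of a simple compressor: linear gain below
    the knee, 3:1 compression above it. All values are toy examples."""
    if level_db <= knee_db:
        return level_db + gain_db
    return knee_db + gain_db + (level_db - knee_db) / ratio

soft, loud = 55.0, 75.0              # e.g. soft speech vs a raised voice
in_range = loud - soft               # 20 dB of input dynamics
out_range = compress(loud) - compress(soft)
print(f"input dynamics: {in_range:.1f} dB, after compression: {out_range:.1f} dB")
```

Above the knee, a 20 dB input contrast survives as only about 6.7 dB at the output. The claim made for the MoreSound Amplifier is that its higher-resolution gain handling preserves more of these contrasts while still making everything audible.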
This is what allows the brain to get much more out of sound than was ever possible before. The result you see here, when we look at how we create those sound scenes in better balance. When we compare with Opn S, that was really only open to speech, but with Oticon More, we are opening up to all meaningful sounds. The hearing aid is looking all the time for all the sounds around it that carry information. The result is these heat maps, or activation maps, that you see here. If we look at Oticon Opn S and its noise reduction on the left-hand side, then along the bottom, the X-axis, you have time evolving as the person is saying "left, right" in Danish. And upwards in the image, you have the frequencies.
Low or deep sounds are at the bottom and high-pitched sounds, such as consonants, are towards the top. What you see with Oticon Opn S is a rather squarish activation: the red is where sound is suppressed, the blue is what is emphasized, and white is neutral. So you see simple activation of the noise reduction in these small squarish patches as the speech unfolds. If you move to the right, you see a much more colorful image, suggesting that we are now able to treat the sound much more individually. We can do more suppression when the sound does not carry meaning, and we can preserve and enhance sound much better when it carries information. Again, red is what we suppress and blue is what we enhance. You can also see that the shape is no longer squarish.
It's become much more organic, and that's really characteristic of deep learning: it can capture information patterns that we were never able to capture before. This allows us to capture the natural way that speech is produced by the vocal tract, and therefore we can see these natural patterns in the output, reflecting and preserving the important details in the sound that matter to the brain in making sense of it. So let's look at the research we've done with Oticon More to document how people are getting improved and new benefits. We've developed our own technique, in collaboration with independent researchers, that shows, by means of electroencephalography or EEG, how the brain handles complex sound scenes. In this specific study, we included 31 people with hearing loss of moderate degree.
They were listening in a complex sound scene, with babble noise added and presented at 70 dB SPL, representing a busy restaurant. We made it difficult by playing back the speech only slightly louder, 3 dB louder, than the babble. We recorded the brain activity from the scalp using 64 electrodes, so a really high-density recording, captured that EEG, and performed denoising to remove all the artifacts that can occur due to eye movements, muscle activity, or interference from lighting. This gave us the pure auditory activity in the brain. We were then able to compare the brain activity with the sounds from real life, and see what the brain is focusing on, how it enhances certain things, and how it suppresses others.
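The analysis idea, comparing the denoised EEG with the speech signal to quantify how strongly the brain tracks it, can be sketched with synthetic data. Real studies of this kind use 64-channel recordings and regularized regression; this toy version uses one simulated channel and a plain Pearson correlation, and every signal in it is invented.

```python
import math
import random

random.seed(0)
n = 2000

# Synthetic speech envelope: slow random fluctuations (an AR(1) process).
envelope, x = [], 0.0
for _ in range(n):
    x = 0.95 * x + random.gauss(0, 1)
    envelope.append(x)

# Synthetic "EEG channel": an attenuated copy of the envelope buried in
# noise, standing in for a neural response that partly tracks the speech.
eeg = [0.3 * e + random.gauss(0, 1) for e in envelope]

def pearson(a, b):
    """Plain Pearson correlation coefficient between two equal-length series."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

r = pearson(envelope, eeg)
print(f"neural tracking score: r = {r:.2f}")
```

A higher correlation here would be read as the brain "getting more of the sound"; the reported 30% figure comes from a far more elaborate multichannel analysis, but this is the shape of the comparison.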
When we're doing this, we can see that with Oticon More, we're delivering 30% more sound to the brain than we were with Oticon Opn. In this case, more sound actually means more information: clarity, which is what people are looking for. This really enables people to hear all the details in the sound scene. It's important to notice that this improvement was created in the orientation stage of the brain, as I was talking about before, where the brain is forming the full sound scene. This is what allows us to say that we've now created the first hearing aid to provide the brain with all the meaningful sounds around.
But not only have we really opened up to all sounds, at the same time, because we're playing on the brain's terms, we've provided even better speech understanding with less effort and actually also an even better ability to remember what you have heard. Specifically, we've increased speech understanding by 15% over Oticon Opn. So really, we are very, very proud of what we're delivering with Oticon More. This is a new perspective in hearing care because of the really, really strong innovation that we've provided with the deep neural network and how we've trained it on a library of 12 million sound scenes to make it very, very powerful, to create the clarity in the noisy environments with the onboard DNN, as we are showing with the heat map that I was talking about before.
And that is then providing these dual benefits that we've documented with well-thought-through and well-executed clinical evidence from auditory neuroscience research, opening up the sound scenes so the brain can hear much more, much more detail, and at the same time, make it easier for the brain to make sense of sound, so lowering listening effort so people can understand more and remember more. That was very quickly the essence of the audiology in Oticon More.
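As a loose illustration of the suppress-versus-enhance behavior Thomas describes with the heat map, noise reduction of this general kind can be pictured as a per-bin gain map over a time-frequency spectrogram: bins near the noise floor are attenuated ("red"), bins carrying information are preserved or enhanced ("blue"). This toy sketch is not Oticon's algorithm; the array sizes, thresholds, and gains are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy magnitude spectrogram: rows = frequency bins (low frequencies at
# index 0), columns = time frames. All values are invented.
spec = rng.uniform(0.8, 1.2, size=(8, 20))    # diffuse babble floor
spec[5, 8:12] += 4.0                          # a consonant-like speech burst

# Estimate the noise floor per frequency bin (median over time).
floor = np.median(spec, axis=1, keepdims=True)

# Per-bin gain map: bins near the floor are suppressed ("red" regions),
# bins well above it are mildly enhanced ("blue"). Thresholds invented.
snr = spec / floor
gain = np.where(snr > 2.0, 1.2, 0.5)

out = spec * gain
```

The organic, non-squarish shapes in the real heat map come from a trained network estimating that gain map far more finely than this simple threshold does.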
Thank you, Thomas. Thank you, Finn. Very, very impressive. I get all fired up here. So let me try to round this off before we go to the Q&A. It is a multi-brand launch. We highlighted a lot of things for the Oticon brand, but our three other brands also bring out very exciting technology based on the same core platform and the same core capabilities you have just heard about. And why do we have a multi-brand strategy? We are dealing business to business and have many different channels out there. To put it short: to achieve the highest possible market share, to handle potential collateral damage issues and channel conflicts, and to make sure that there is something for every need, because not everybody needs the same.
Also in business support, to cater for independents, government systems, buying group networks, large chains, dedicated chains, and chains where hearing aid business is part of a broader business, we see great success in our multi-brand strategy. I would even say the markets where we have the highest share are typically where we have succeeded best in bringing all brands into play. So this is still an ongoing effort from our side. And, as you know, years back we added Philips, which today is building a stronger and stronger position around the world as another leading brand in the group. With Oticon, we today launched Oticon More. It comes in the industry's high runner today, a miniRITE, a small, discreet, nice-looking RITE system. It's, of course, rechargeable, as Finn said, with full connectivity.
It comes in three price points, which gives a lot of pricing flexibility for dispensers while still allowing them to offer the technology. With today's RITE systems, you can change output levels and therefore cater for both mild and severe hearing losses. We have developed a number of new dome techniques that further make the instant-fit opportunity available, whether you need a more closed or more open fitting, without, again, as I talked about before, side effects such as occlusion, feedback, etc., and with very high wearing comfort. The products cater for both the Apple protocol and the ASHA protocol, while at the same time being prepared for the very soon coming Bluetooth Low Energy audio protocol that the industry has been deeply engaged in for long. And I'm sure this is the future model for connectivity in hearing technology.
We also introduced Philips HearLink with a different way of applying some of these new smart technologies. It applies them to more traditional signal processing features such as directionality, noise reduction, etc., but does it in a learning way, where it becomes better and better at doing what's right for the user. This unique setup under the theme of SoundMap is something that was started in the first version of Philips, but also here significant new horsepower and new technologies are brought into play to further enhance the end-user experience, again including connectivity. And the same goes for the two other brands, Sonic and Bernafon, where we with Bernafon Alpha and Sonic Radiant also add very strong extensions to their current product portfolios. And then, talking a little bit about how we roll it out, today is obviously the kickoff.
We have, as some of you may have noted, our first larger customer event in Germany today. A number of other European markets follow soon, and then a number of overseas markets and the rest of the markets follow just after New Year. We will start shipping mid-December, but already now start to build for activation, so we are ready when things come out. It is obviously late in this financial year, and we don't expect any material impact this year. This is the plan we have had; things have come as expected. But of course, it is a key growth driver for 2021 and 2022. A platform like this is something we will build on in the next few years. We will, of course, see a continuous expansion of the portfolio based on this technology to more styles, other price points over time, etc.
This is the group's very strong new technology platform to drive growth in the years to come. As for key takeaways from today's event, I really think it was fantastic to listen to both of you, but in particular this heat map that you showed, Thomas. I really think that is the best illustration of the intuition of this neural network, which can now better understand what is actually going on. It is not a simple yes/no algorithm for speech or noise. It is a very, very detailed, pre-trained, intuitive system that makes sure we can apply much more detail and much more sophistication, and forever leave the paradigm of trying to zoom in on a given speaker with relatively slow algorithms in case there is a competing speaker. These systems are extremely fast.
They are well-trained and will very quickly make sure the instrument adapts to what is a very dynamic sound environment. So very powerful. Future-proof connectivity: today servicing Apple as well as Android phones, and built for the future Bluetooth Low Energy audio protocol, which is just around the corner and will be a new industry standard. And then new flagship products as well in Philips, Bernafon, and Sonic. So a very strong portfolio that we kick off the launch of today. With that, I will turn to questions and answers. We have set aside around 45 minutes, and we'll go longer if needed. So please let me have your questions. I will try to sort a little where we get the best answer depending on the question. Please.
Thank you. If you wish to ask a question, please dial 01 on your telephone keypad now to join the queue. Once your name has been announced, you can ask your question. If you find it's answered before it's your turn to speak, you can dial 02 to cancel. Our first question comes from the line of Michael Jungling of Morgan Stanley. Please go ahead when the line is open.
Great. Thank you and good morning. Can you hear me?
Yes. Loud and clear, Michael.
Lovely. So I have three questions. Firstly, on pricing, I understand from talking to some of your colleagues this morning that pricing is going to be a low to mid-single-digit increase. I'm quite surprised it's that low, because it's typically been four years since you launched Opn, and there's been inflation since then. You're offering us, or the customers, 16x more performance, or at least that's how I interpret one of your slides, plus neural networks. Why do you not have more confidence in your technology and raise the price by 10% or 15%? Question number two is on the multi-brand launch. Can you talk about how you will differentiate this technology between Oticon, Philips, Bernafon, and Sonic?
And thirdly, on launch costs, given that we are in a different world to where you were when you launched Opn, can you give us some guidance on the launch costs relative to Opn to sales? Are we talking about a similar launch cost, just in different ways, or are we going to expect lower total launch costs to sales versus Opn? Thank you.
Thank you, Michael. On pricing, when we talk about these expectations for pricing, you very much have to look at business list pricing. We are in a business-to-business relationship; it is negotiations, not consumer pricing, and you might see it slightly differently there. It is our expectation that in today's highly competitive market, despite all the fantastic wonders of these new products, we will manage to lift prices low to mid-single digits. But it is a competitive industry, and there are other new products out. We will go into that fight. We have a lot of confidence in what we are coming out with, but I would rather see significant market share gains than try to skim on pricing and end up with smaller penetration in the market. So that's the strategy and choice we have made. On multi-brand, you heard a lot about Oticon today.
We could give a similar talk on Philips or Bernafon and Sonic, and you would find that the core speed of the processor, the new wireless technologies, and also elements of the deep neural network and the associated AI are applied in different ways. But some of the core philosophies are different. BrainHearing is unique Oticon research; Oticon spends much more on digging and diving into the nature of BrainHearing. Some agree with us that it's the right way to go, others are skeptical, and therefore we also do things differently across the brands. On launch costs during COVID: it is virtual, so for natural reasons we do not spend the same. I would actually have liked to be able to do it in a non-COVID world.
I think it is stronger to be together, but we will do our utmost to get as much out of it in a virtual world. And it does also cost less money: there is less traveling, there is less meeting in hotels and elsewhere. So we try to move some of it to the digital side, to do more in the digital space and arena of activation. But all in all, I would assume that this is lower than it would have been had we not had COVID.
Great. Great. A brief follow-up then on product differentiation. Will a consumer notice the difference between the performance of an Oticon device and a Philips and Bernafon device? So when you fit it, are you expecting those consumers to experience the same quality of sound, or is it very much it is just Oticon is premium and Bernafon is less premium? I'd love to understand what that is.
I think you would expect very high end-user satisfaction across all brands. That's definitely what you will see, because a lot of it also comes from the general elements of the new signal processing. And yes, the brands are not all the same. There are users that will benefit tremendously from one, and there might also be others that benefit more from the others. So we don't have to compare them, but I'm sure you will find that the users of any of these products will come out with very high satisfaction.
Thank you.
Our next question comes from the line of Maja Pataki at Kepler Cheuvreux. Please go ahead. Your line is open.
Yes, hi. Good afternoon. I also have a couple of questions. The first one is with regard to your launch strategy. I understand that you are going to launch products in the U.S. come the new year. Now, given the importance of that market, I was wondering whether you can help us understand why you're not going straight into the U.S. market now. And the second question relates to what Michael has been asking. There's clearly a lot of new technology going into all of your new products, and a high degree of detail. But I'm trying to see how you differentiate your products towards your customers. I mean, is there a simpler pitch line that could help us understand the differentiation of the new families coming out? That would be helpful. Thank you.
Let me take that. It is actually so that there is typically a delay from the launch here in Europe until we reach overseas markets. It's simply a shorter process getting products into Germany, France, Holland, etc. And as Christmas and New Year are coming, and we don't start shipping until mid-December, it is practically impossible to start anything material without just creating a kickoff that doesn't materialize, so that we kind of have to start over again. That doesn't mean that a few good U.S. customers might not try this before Christmas, but real material impact, big launch events, and virtual meetings will start first thing in the new year. Yes, there is a lot of technology, and yes, each brand has a unique pitch, but we don't sit and sell all four against one another.
We are much more busy selling all the products against the competition, and each of our four brands has very unique selling points. They each sell very distinctly against their typical competitors, which also differ a little, so there's not really a short digest of Oticon is this, Philips is this, Bernafon is this, Sonic is this. They each have their unique life, and if you dive into the details, you'll be able to see noticeable differences across the different brands.
Thank you.
Thank you. Our next question comes from the line of Oliver Metzger with Commerzbank. Please go ahead. Your line is open.
Oh, hi. Thanks a lot for taking my questions. I have three. The first one is also regarding the timing of the launch. Initially, you had invited to the capital markets day last September, which was potentially some indication of the launch. Now you launch basically in the middle of a typical VA window. So have you reacted to the current market dynamics related to the overall corona pandemic, or what was your rationale for launching it right now? That's the first question. The second one is a clarification. As you said, MoreSound Intelligence replaces the OpenSound Navigator. If I look at the functionalities, the key element, the scanning of the environment, is unchanged.
So is the difference between the two approaches only the handling of sound afterwards? Is it correct that the result, let's say a better balanced, more precise, or even more meaningful sound, comes from a better pre-filtered sound? Could you clarify that, please? My last question is just a very quick one. I haven't found any comparison regarding your battery lifetime compared to Opn S, so perhaps you can also provide that information, please.
Yeah. I will take one and three quickly, and start with the batteries. Very much the same battery expectations as in Opn S; we have mastered this by using the new technology to get it down to the same level despite the immensely higher performance. On timing of the launch, you can always discuss what the optimal timing is. We are ready now, we launch now, and we go full speed. There's no link to the cancellation of the capital markets day; it was COVID that prevented a meaningful capital markets day, so we chose to postpone. We are ready now, and we launch as of this week.
Sorry. Thomas, you're much better at the details between More and Opn. Sorry.
Yeah. So the question on the MoreSound Intelligence and how new that is compared to the OpenSound Navigator, it's actually all new. Every stage of the processing algorithm has been updated. All the detectors have been upgraded. We've even put in some new detectors there to be much more precise. And this means we can now handle 5 dB more noise and still be as accurate as we were with Opn S. So we are much more robust to noise in the environment and can still pick out those important details in sound that we need to make clear and amplify. Then secondly, if you look at all the other stages in the algorithm, those have also been upgraded. So we now have 50% more resolution across the entire engine when we are looking into every frequency to find out where the important information is.
So really, everything has been upgraded. And then at the same time, of course, we've been optimizing the technology, so we still have that very high-end sound quality that we want to be known for.
I think another way to put it is that when you look at Opn now, all of a sudden it seems awfully basic, even though it was a major step forward. It's really powerful when you look into the analysis of the neural network's ability to, as you said before, distinguish details between things. It is, of course, down the same alley, allowing people to make up their own minds about what they want to listen to. No processor can predict that. The speed with which you do it, matching human nature, is just so powerful, and we have no doubt that this is the right path going forward, and this is a major step down that path.
I think it's also quite unique that we've integrated all these algorithms into one unit, so they work together. This means they don't make separate decisions, which often happens in traditional directionality and noise reduction systems and can be a little uncoordinated at times, with a cost in terms of sound quality. By integrating it all into one well-coordinated machine, we can ensure that we deliver even better sound quality.
Thank you, Thomas. Next one.
Okay. Thank you.
The next question comes from Christian Ryom of Nordea Markets. Please go ahead.
Hi. Good afternoon, and thank you for taking my questions. I have three, please. The first is to you, Søren: can you help remind us how much of the market you typically capture with the three uppermost price points? I appreciate that there is some fluidity in this when you have a new product launch; just a rough guide would be helpful. My second question is regarding the decision to initially launch this only as a rechargeable solution. Could you dwell a little on that, and whether there are some production efficiency concerns behind it? And my third question is whether we should expect this to be a longer rollout than what we usually see, maybe related to COVID, maybe related to the newness of the technology. A few comments on that as well. Thank you.
Thank you very much, Christian. There's no doubt that the upper three price points have a very high value share in the world. I would say around or above 70% of world market value lies with these types of products. It comes through a little stronger in the independent sector and with the commercial chains, whereas some of the government systems work with simpler technology, lower prices, etc. So it's a very big part of the value market we address here. On the question of rechargeability, this is today's high-volume runner. In particular in these price points, and in particular in the private commercial sector, we continue to see a very strong pickup.
It has been very important for us to get out with today's typical recommendation: a rechargeable, very discreet miniRITE with many output levels and many ways of making the earpieces, whether you use an instant-fit type or a produced BTE mold, and that's what we have focused on. Other styles will come; we will complete the portfolio with more in the coming launches, but this is the center point in today's product offerings when you talk about upper price points. Therefore, it is important to start there. Whether the rollout will be longer, we don't know. We have not tried to make this big a launch during COVID before. I sense it takes a little bit longer; it is the activation after the first virtual launches that will really tell. But when things are as new and novel as this, I'm sure we'll get a lot of attention.
I'm pretty sure we'll be able to do a strong follow-up. Whether it is exactly as we used to, I cannot tell. We have not been there yet.
Thank you.
Thank you. And our next question comes from the line of Niels Leth at JPMorgan. Please go ahead.
I'm sorry it is Carnegie. Please go ahead.
Thank you. So my first question would be about the rechargeable version. I think it's fair to say that you're underrepresented in the rechargeable segment with your two latest products, so how will this introduction in the rechargeable category only affect your gross margin for 2021? That would be my first question. And then secondly, to what extent would this neural network be protected from competition?
Yeah. More rechargeables have a dilutive effect, everything else equal. But since we launched Opn S rechargeable, we have seen a very significant reduction in the cost of the system, both on the hearing aid and on the charger side. So we will see a growing share, I have no doubt about that. But we still have means to lower the cost, and exactly how that plays out for 2021 is too early to guide on. But there is a dilutive effect coming from rechargeability, no doubt. And on patents, I'm sure we have done a lot. Maybe you have a little more to add, Finn?
I will add a little bit. Yes, of course, we have done what we could. However, everybody probably understands that a deep neural network is not something you can patent as such. The important thing when you work with deep neural networks is the data. The data are super hard to copy; you don't know where we got the data from. The next thing is the learning algorithms we employ to actually train the network, and then, finally, optimizing the network.
Because even though we have a platform with significant computing power, if you compare it to what you can do with cloud computing and those kinds of things, when you see on YouTube, for example, what is possible, then you could say the key is how you actually optimize the network to give the performance we have achieved in a hearing aid, which after all has limited resources. So that's how, you could say, we protect the IP. We, of course, pursue any opportunity to patent, but in software design it is typically hard to really describe the unique mechanism. So that's just the name of the game. It is the speed, and it is the secrecy of the training, that is much more the creative uniqueness. I think others will be able to do neural networks over time.
But I would assume this is a new and unique way of doing things.
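Finn's point about optimizing a network to run within a hearing aid's limited resources can be illustrated with one standard compression step, post-training weight quantization. This is a generic sketch of the technique, not a description of Demant's actual toolchain; the layer shape and bit width are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal((16, 16)).astype(np.float32)   # toy layer weights

# Symmetric post-training quantization to 8-bit integers: one standard way
# to shrink a trained network so it fits a small, low-power processor.
scale = float(np.abs(w).max()) / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# On-device inference would use w_q plus the single float `scale`;
# dequantizing shows how little precision the compression costs.
w_hat = w_q.astype(np.float32) * scale
max_err = float(np.abs(w - w_hat).max())   # bounded by scale / 2
```

The engineering value Finn describes lies in doing this kind of optimization without losing the performance of the full-precision network, which is where the hard-to-copy know-how sits.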
Thank you. Our next question comes from the line of Tom Jones at Berenberg. Please go ahead. Your line is open.
Hello. I had a couple of questions, one on the features and one on the launch strategy. Just on the Android compatibility, could you confirm that it's only compatible with Android phones running the ASHA protocol, and that there's no compatibility with older Android handsets? If so, my understanding is that's still a relatively small part of the Android market. So how quickly can we expect this to become a feature that's meaningful in terms of Android users, who are definitely going to have to upgrade their phones? The second question is just a confirmation on the dual radio. I think I know the answer, but could you confirm whether this is indeed a dual radio, or have you moved over to 2.4 GHz for ear-to-ear connectivity?
And then the third question is a conceptual one, really, around the timing of this launch. I know you're ready for the launch, and that's why you've done it. But my question really is around new product fatigue. We've seen major product launches from four or five different hearing aid manufacturers this year. To what extent do you think customers are going to be a bit bored with new products and therefore a bit more reluctant to switch over to a different manufacturer than they have been in the past? I mean, if they've switched once and then switched again already this year, might there be a little bit of resistance to switching a third time?
I can understand how you can easily upsell your existing customers on this platform, but in terms of capturing new customers for competitors, just the fact that so many people have launched so many products already this year makes it just that bit more difficult, do you think, or do you think that's not really a factor?
I'll come back to that. I would like you, Finn, if you could take the first two on the Android ASHA.
Yeah.
Finn, is that you? The dual radio?
Yeah, and the dual radio. So about ASHA: it supports phones that run the ASHA protocol, and that's what we do. We see an increasing number of phones coming out in the Android community supporting ASHA, so we expect that to increase significantly during next year as well. And then, of course, eventually we will get the new Bluetooth LE Audio standard, which will basically be in all consumer devices, and in our opinion it's just around the corner. Then, about the dual radio: yes, I'm sorry I didn't mention that specifically. Yes, we still have the dual radio in our hearing aid. We still believe that using near-field magnetic induction for the ear-to-ear communication is the best way to preserve power and still get enough information across. So our philosophy is still the dual radio, the TwinLink.
Sure. And just maybe clarify, what percentage of Android phones currently in use do you think are currently running the ASHA protocol?
I think that's an impossible question to answer. I simply don't think anybody has a full overview. But the newer phones from the bigger brands, Samsung, etc., have it in, and that's where you have to look. I think there are limitations to the physics of upgradability when the chipsets inside get too old. So it will be the later models from the major players, released within the last two or three years, that have the right technology inside. I think that's the right way to put the limitations. There's a whitelist, but there's no blacklist.
And then the launch fatigue.
I can take your last one, Tom. The nature of this industry is that our customers want to look good in the face of their customers, make sure those customers leave the store happy, and that their fitters work with something that really moves the mark. So if there's sometimes fatigue, that is, I think, in periods where things tend to look the same. With what we come out with today, I think we will be able to open doors that some might have found locked. So I'm very comfortable in the qualities of the products and their ability to open a discussion on whether there should be an opportunity to work with one of the group's brands.
Okay. That's fair enough. I'll be back in the queue. Thank you.
Thank you. Our next question is from the line of Martin Parkhøi at Danske Bank. Please go ahead. Your line is open.
Yes. Martin Parkhøi, Danske Bank. A couple of questions, coming back to the pre-launch. I guess my first question is to Søren: why do you pre-launch one month before the U.S.? Because normally the gap is not that long. How much do you think you will lose in sales in December, which is normally a weak month in the U.S.? And I guess that is included in guidance. Then secondly, I'm curious about the 12 million sound scenes that you have recorded. First of all, will more be added to the 12 million through software updates later? And how have you recorded them? For example, if it takes one minute per sound, it would take you 22 years to record 12 million sounds. How has that been done? And then finally, I think this must be back to Søren again.
You expect to deliver these technologies also to the Philips brand, and we know Philips is available in Costco. And you also know that there is some price competition in the private segment in Costco right now, with them dumping prices on the Rexton brand. Will you just take the hit on volume on the Philips brand, or will you be able to compete on price as well?
Thank you, Martin. Maybe start with you, Finn, then I can take the other two in a row.
Yeah, sure.
Sorry, Thomas. Maybe a little bit on the sound scenes.
Okay, so the 12 million sound scenes or sound environments: you're right, we haven't recorded 12 million unique sound scenes. We've actually been recording thousands of sound scenes, and then our engineers, curating the training library, have cut those into tens of thousands of unique sound scenes, including dynamics, but shorter ones. The neural networks are then trained by combining all these environments in many different ways, so the network sees them many times. It's just as if you needed to learn to speak a new language: you'd need to see a wide variety of the items of the language, but then be exposed to those items repeatedly in order to learn the details of the words, how to pronounce them, and their detailed meaning. It's the same thing we do here.
It's a mix of unique material and repeated exposure to that material.
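The curation-plus-recombination idea Thomas describes, a modest library of recorded material multiplied into many distinct training scenes, can be sketched generically. The clip names, background labels, and SNR values below are invented placeholders standing in for the real library; this is an illustration of the combinatorics, not Oticon's pipeline.

```python
import itertools
import random

# Hypothetical curated library: a handful of names stand in for the
# tens of thousands of short segments described above.
speech_clips = ["clip_a", "clip_b", "clip_c"]
backgrounds = ["cafe", "traffic", "office", "wind"]
snrs_db = [0, 3, 6, 9]

def training_examples(seed=0):
    """Combine every speech clip with every background at every SNR, then
    shuffle: a small library multiplies into many distinct scenes."""
    combos = list(itertools.product(speech_clips, backgrounds, snrs_db))
    random.Random(seed).shuffle(combos)
    return combos

examples = training_examples()
# 3 clips x 4 backgrounds x 4 SNRs = 48 distinct scenes from 7 recordings
```

Scaled up, this is how thousands of recordings can yield millions of training exposures: each unique segment is reused across many mixtures and noise levels.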
Thank you, Thomas. Back to the launch and the U.S. Again, as I said before, we are only two weeks away from actually being able to start shipping products, and it was actually a close call to keep things secret until now. Yes, the U.S. has another three or four weeks before they come out. That's not that different from what we have seen before. We also have holidays coming in the middle of the period, so there will be gaps and a warm-up going on in the U.S. We will do what we find right to keep the business going and then make a very strong start at the beginning of the year. The U.S. market, as I said, has not fully recovered, so I think we will get through the next five or six weeks, until we can also ship in the U.S., fine.
Then a particular launch like a Philips into a particular customer. I'll not stand here and disclose our tactics on that account.
Can I just come with a follow-up, a little bit related to the question that Niels had previously, on the gross margin going into 2021. If we look at the ASP, it's of course also clear that the ASP in 2020 has benefited from channels like the NHS being behind, so we have had a positive ASP effect from that. Should we actually not expect an ASP increase next year, despite you launching a premium product, because of differences in channel mix?
And I think, Martin, this is one of the areas where our crystal ball is equally foggy. What about pent-up demand in the U.S.? If that is released, it's a very significant pent-up demand; theoretically, it should be there in the U.S. Everything else equal, in the channels these products address, we will see increasing ASP coming from that. The speed with which the NHS and export are coming back is also very unpredictable. So these are uncertain times. I think it's much more important to look at our ability to gain share in the markets these products actually address. And then the ASP will be what it is. If it's lower, the volume will be very high; if it's going up, there might be some lower-priced channels that have still not recovered.
If pent-up demand comes out in the U.S., it will, generally speaking, drive ASP up, as we have already seen some of the pent-up demand coming in in Europe, Asia, Australia, and New Zealand. So ASP is probably the most difficult thing to guide on, and I will not do that today.
Thank you.
Our next question is a follow-up from Michael Jungling at Morgan Stanley. Please go ahead. Your line is open.
Yeah. Thank you for taking another round of questions. The first question I have is on the chips. Is the 28-nanometer chip off the shelf, or is it proprietary? And if it is proprietary, will you engage in platform sharing? Question number two is on the gross margin. If we exclude the impact of rechargeability, does the price increase offset the incremental cost that comes with the additional technology going forward, or if you like, in 2021? And then the last question I have is on BLE compatibility. You mentioned that you will only require a software upgrade once the standard is finalized. How do you know that?
How can you be comfortable that you won't need a hardware change, and then have to go out and perhaps take old products back because you promised them BLE compatibility and then have to issue new products? If you could just clarify how comfortable and how certain you can be about it only being a software upgrade. Thank you.
I'll let you start with the last.
The software upgrade, yes, of course. Nobody can ever sort of predict the future, but I would say this much, that ourselves and also our colleagues from the other hearing aid companies have been deeply engaged in developing this standard ever since we started many years ago. We are getting to a point where the standard is getting into the finalizing rounds. We truly believe that what we have on the platform, hardware-wise, will be able to cope with what is coming out there. We do not foresee, you could say, larger swings in terms of the standardization as we look into 2021. We are quite confident that what we say will hold true.
If I'm not wrong, Finn, as part of the design we also decided to move the boundary between what is fixed and hardwired and what is programmable, because this was not designed yesterday. Its architecture was made quite some time ago. To ensure software upgradability even if some things were fine-tuned and adjusted, as long as we knew the core technologies, which on the other hand cannot change at the very last moment, I think we are very well prepared to ensure this software upgradability.
That's a good point, Søren. We have a much larger amount of, you could say, the functionality built into software than what we had previously. And as you probably also saw, a lot more memory on the chips than what we used to have.
So then, Michael, you might have to help me on the other two questions, because the question was a little unclear. But all our chipsets are developed by ourselves: the radio, the DSP, some of the analog components. So these things are proprietary, something we have developed ourselves and use. So I'm not sure if that answered your question.
That answers the question, and then the last question was on the gross margin. If we take away the incremental margin decline from rechargeability, I'm just curious whether the price increase that you're putting through for the new range of technology can actually compensate for the incremental cost that you may see on the hardware side, for instance, incorporating a faster chipset and some additional hardware features with respect to BLE and so forth.
That's a yes.
Thank you. Our next question comes from the line of David Adlington from JP Morgan. Please go ahead. Your line is open.
Morning, guys. Thanks for the question. Just on slide 28, the study that you presented, I just wondered if there were any P-values associated with those charts you could give us. And then I suppose more broadly, for a new launch like this, you'd normally provide us some data around patient preference versus other hearing aids. The data that you presented today is quite esoteric, the EEG strength, a signal-to-noise improvement of around one decibel by the looks of it. I just wondered how clinically meaningful that is to patients. Any data around that would be great. Thank you.
David, I simply have to ask for your first question once more. I couldn't hear exactly your question.
Yeah. Just on the P-values for those charts on slide 28 — do they have any sort of clinical significance or statistical significance? What are the P-values for those charts?
Thomas, to answer those?
Yeah. So the results in those studies were statistically very significant. I don't remember the precise P-values, but for instance, in the EEG experiment, I think it was 0.0001. So a very, very strongly significant result. On speech understanding, it's typically a little less, but still a significant result, lower than 0.05. I don't remember precisely. And then you asked about patient preference. That's also been very strong. We are well above 80% preference for this new product compared to the former one. So again, very strong. And according to the principles we have, we won't release before we've ensured significant user satisfaction and preference relative to the former product.
Okay. Thank you.
Thank you, and we have one further question in the queue so far. It's from the line of Yannick [de Kerchove] at [KPMG]. Please go ahead. Your line is open.
Hi. Thanks for taking the question. I think it goes to Thomas. In your opening remarks, I think it was on slide 19, you stated that you have now basically reinvented all the core deliverables of a hearing aid, from directionality to feedback prevention and now this neural deep learning. So what is left? I know it's a little peculiar to ask about your next innovation when you're just launching a whole host of new hearing aids, but where is the future? And it also relates to that earlier question about the fatigue of new products. So how should we think of the future when we look ahead two, three, four years, particularly when there are parts that will be commoditized, as you alluded to, in terms of connectivity and all that? So a few thoughts on that would be great. Thanks.
Yeah. I can share some thoughts on that. I mean, we will never be done reinventing that core delivery. We can keep coming up with new and improved ways of delivering in those four core areas. In addition to that, we are fortunate at Demant to be part of an organization that also has significant diagnostic capability. So by developing new diagnostics and getting even better at prescribing our technologies based on new diagnostic insight into the hearing loss and the challenges of the individual person, we strongly believe that we will be able to deliver new benefits going forward. And then we have a number of other initiatives in the pipeline that will help us really drive these user benefits to the next level. So we are quite confident on that part.
May I add, Yannick? It has always been the case that once you open a door to a fundamentally new technology, you will be able to expand it. You will be able to build more into it. This is a first Deep Neural Network, and there are, of course, limitations to how big it can be without using all the power and all the memory in the first go. It is very powerful in the details you saw, but I'm sure the whole world is going to learn to do more of these intuitive systems that mimic human capabilities. But it's still only mimicking, and not until we understand the brain better can we make the design here better. So there is still a lot to uncover. And then, of course, hearing aids are typically also used by seniors. Some of them have some cognitive decline.
Others have not, and with further individualization of the neural network to the individual person, I see great perspectives. So we will be able to deliver super hearing and help people get even more out of whatever hearing they have, but also out of their cognitive status. I'm sure of it.
Okay. Thanks.
Thank you. We've had a couple more questions come through. The next is from the line of Carsten Lønborg of SEB. Please go ahead. Your line is open.
Yeah. Thank you very much. Actually, more or less only one question left here. I was just thinking about the rechargeable side, if you can say so. How do you stack up compared to competitors? How long will a charge last, etc.? And also, as far as I understand, you will not have a portable battery case to begin with. What is your plan in terms of launch timing for such a solution as well? Thank you.
Thank you, Carsten. I think on rechargeables, there's a certain threshold, and once you're over that, it actually doesn't matter much, unless you could deliver two, three, four days. That would, of course, be another breakthrough, so you didn't have to charge every day. But all products in the industry, I believe, last a good long day, can be charged relatively fast if needed to extend that, and then, even after a short night's sleep, will be fully charged. So I think we are, again, in the same ballgame. You have to look at power consumption; we are fundamentally all working from the same batteries, which goes back to why these technologies will, over time, become more of a commodity. And then on a portable charger versus a more stationary one, ease of use is key. We get a lot of good feedback for the charger we have.
We, of course, also have our eyes open for those who seek portability. It is something that we have on the radar and for sure are having a close look at. But currently, the product is launched to the market with what we see as the normal use: a close-to-your-nightstand charger, which is robust, doesn't fall down, etc., and very easy to use. And with that, we have time for one last question.
Thank you. There's one more person in the queue, and that's Michael Jungling with Morgan Stanley. Please go ahead. Your line is open.
Great. Thank you. I promise it's the last question, since it's the only chance. I wanted to come back and ask about pricing. You specifically mentioned your interest in gaining market share and perhaps a little less about the pricing side. I'm hoping you can be a bit more specific about what you mean. Have you got a market share gain number in mind for 2021? And then secondly, is this market share gain, for lack of a better word, an obsession? Is that something that perhaps is going to cause your margins to be under pressure again next year? I'm not asking about the specific margin next year, but the direction. Are you willing to sacrifice margins to achieve this unit market share gain number?
No, Michael, we think our products have very good qualities and should be sold broadly. So it is more a question of what pricing you can get in the market if you also want to close the deal. It is business to business. And in that balance, it is also important, when you have something truly unique as we have here, that you use the window to get out there. We'll not use aggressive discounting, if that was the other end of the scale. We'll take a natural and minor price increase as we typically do, and then get out there and really get these wonderful new products into the hands of the customers and out to end users, who will truly benefit from them. But it is unique. We can compare it to the competition.
We find it very unique, and we will use this opportunity to drive a momentum in the business. And if we do that successfully, selling premium, getting premium share will always be positive for margin, everything else equal in this industry.
Okay. I ask only because if I listen to one of your competitors, the narrative is never one of, "Oh, margins must come down." It's always a message of, "As we grow, we can also deliver profitable growth with attractive mid-term margin conditions." And I feel that's not quite clear in what you've mentioned today. And the risk is that we end up again with a margin decline and kind of destroy the equity story.
Michael, it is always volume, success, and market share gains that drive margin up in this industry. There is great evidence of that from our competitors and ourselves. We saw it when we really had a unique position with Opn in the beginning and saw things play out. So that is exactly the same as I think you have heard from some of my competitors.
Okay. Thank you. Very clear.
With that, I say thank you for all the questions. Thank you for listening in today. Thank you to Thomas and Finn for assisting me in this fantastic launch. We'll, of course, get back and be available for more clarification. And then, of course, we look forward to being able, once we've been in the market for a while, to report on how these new products do in the field. Thank you very much for joining us today. Thank you very much for taking your time, and please engage with the IR team for further information. Thank you very much.