I'm humbled by the turnout that we have today. I'm very pleasantly surprised, and it's awesome to see everyone here. There is one logistical item, which is the translation devices that we have outside: if you want Chinese, that would be channel one; if you need English, that would be channel three. With the logistics out of the way, we're ready to get started. To begin, I wanted to play a video for you that truly showcases how Yageo is enabling the AI revolution.
In the evolving landscape of artificial intelligence, Yageo Group plays a crucial supporting role. Our products are literally built into all aspects of AI. Our components are vital in enhancing the power density and operational efficiency of the servers and data centers that form the backbone of the AI infrastructure, and within the communication systems that make it possible. Yageo's technology underpins AI applications in multiple fields, from the smartphones and laptops that connect us to the advanced medical devices that save lives. In transportation, we contribute to smarter, safer vehicles, and in the realm of Industry 4.0, we are pivotal in enabling more intelligent, efficient manufacturing processes. Our products are key in developing smart grid technologies, ensuring more reliable and sustainable energy management on a global scale. Our commitment extends beyond just technology. It's about enabling a smarter, more connected, and sustainable future with every innovation. Yageo Group: Built into Tomorrow.
I hope that paints a clear picture of how Yageo is built into everything AI. Now, without further ado, it is my honor to introduce our keynote speaker, our Founder, our Chairman, Mr. Pierre Chen.
I'd like to thank you all: old friends, new friends, the media, our investors, and our team. It's nice to see you this afternoon here in Taipei. Let me introduce some of our role and our commitment in the AI industry.
[Non-English content] Tom, thank you.
Thank you, Mr. Chen. It's truly inspiring what you've built. We're certainly not the same Yageo that we used to be. I'm excited that we will continue to grow and transform not just Yageo, but also the lives of everyone that's gonna experience our products and services. Next, it's my pleasure to introduce Claudio Lollini, our Senior Executive Vice President of Sales and Marketing, to expand further upon our journey over the last five years.
Thank you, Tom. All right. Hello, everyone. It's exciting to be here. In the next few minutes, I'm gonna walk you through the journey of Yageo Group, and it's a good journey. You will see why we are spending time to tell our story to the investors, the community, the analysts, our customers. I will start by pointing out our new brand, our new logo that you see up here.
When I look at this new logo that we recently introduced, I see that nice connection between the history of Yageo and what Yageo is gonna be in the future. You see that modern splash of orange and the emphasis on the group at the end. Why is it important to talk about Yageo Group? Because Yageo goes to market with multiple brands.
The three principal brands that we are most well-known for are Yageo, KEMET, and Pulse. Most recently, we also accomplished two significant acquisitions in the sensor space, so I wanna bring them up: Yageo Nexensos from Heraeus, Germany, and Telemecanique Sensors from Schneider Electric in France.
An exciting new step for Yageo into the high-growth space of sensors. Now, Pierre mentioned in his speech how the company transformed from around about the year 2018 to today. To help you get a sense of that, I'm gonna touch on three points of transformation. The first point will be product portfolio, then we are gonna look at geographies, and the last one is gonna be segments.
Now, because 2018 was such a particular year, as I think many of you here remember, we decided to compare the first six months of 2017 with the first six months of 2024 to make a more fair and even analysis. Back in 2017, Yageo was around about a billion-dollar company in U.S. dollars. In a six-month timeframe, that would be about half a billion, all capacitors and resistors.
Now, in that space, it was the Yageo brand, and for both capacitors and resistors it was commodity product, mostly serving consumer applications and, as you will see, mostly concentrated in Asia. You fast-forward to the first six months of this year, and there is a dramatic change in the portfolio, a growth of 4x in revenue. Now we are looking at around $4 billion of revenue.
The actual results for the first six months that we published are up here. You see capacitors still remain the biggest source of revenue for Yageo Group. Make no mistake, inside that bucket we have now added the entire KEMET capacitor and MLCC product portfolio. We added our film and aluminum electrolytic product portfolio from the KEMET brand, and most notably, we added our tantalum capacitor product portfolio, also under the KEMET brand.
We are now run-rating about $1.2 billion a year with magnetic products. In 2017, that number was zero. Yageo had no magnetic product offering just a few years ago. Now, that has been built through a very aggressive yet very meticulous roadmap of M&A. It started with the Pulse acquisition around 2018. It built momentum with the KEMET acquisition, which included the Tokin branded magnetics.
Last but not least, it concluded with the acquisition of Chilisin and the Chilisin sub-brands just a couple of years ago. When you combine all of this together, our product offering in the magnetic space can rival all the major players out there. The resistor business did not benefit from M&A, but two things happened in that group that are equally exciting.
First, the business grew. You see the number here. Second, the product portfolio, the customers we address, and the segments we play in changed significantly. More specialized products, more of the right applications in automotive and industrial, different geographies, much stronger engagement with the global distribution channel in North America, a completely different business model from what it used to be. Last, of course, the sensor business.
Another area that was very attractive, with high growth potential that we believe is gonna be one of our core engines, so we decided to invest in M&A, and we created the fourth pillar of our portfolio. Geography-wise, a similar story. You go back to 2017, and the majority of the business, like Pierre mentioned, was out of Asia.
Now, within that Asia space, most of it was Greater China. You can see here barely a presence in Europe and in North America, very attractive markets that you have to be in if you wanna be successful in this industry. This year, the same 4x growth, obviously. Half of our business, more or less, is designed and is fulfilled outside of Asia, equally split between the Americas and EMEA, depending on the quarter, depending on the inventory situation in the channel. This is a massive change for Yageo Group.
The other half still happens in Asia. A lot of that business, though, is designed in North America and in Europe. Within that Asian business, there is no longer such a dependency on Greater China. Japan, Korea, and Southeast Asia combined represent today $100 million in revenue. You know, these days, having an infrastructure outside of Greater China as an option, as an addition, is very, very critical.
Segment, when you produce a commodity component, you can make the point that eventually these components will enter almost any segment. But it's important to be reminded that when you only have a commodity component, the applications you touch inside the segment will be a fairly commoditized type of application. This is not the best place to be long term.
The first six months of this year, when we look at our segments, you can see such a tremendous growth in industrial, in computing and enterprise systems, which is a segment that we're gonna come back and talk a little bit about, and in automotive. The level of our transformation inside each segment is even more exciting.
The type of applications inside each segment that we now have access to, thanks to our relationship with our key customers, is very different and exciting. If I look at these segments and I think about the key players in the world, from a customer point of view, that are designing the new devices that are powering these new trends, all of them are connected with the Yageo Group. We have good contacts with all of them.
Now, in our industry, there is perhaps the most important event that takes place every couple of years, every other year: Electronica. It takes place about two months from now, in mid-November, in Munich, Germany. It's certainly the most important show for customers, OEMs, EMS, distributors, and suppliers, from semiconductors, magnetics, and passives; it's when they all come together to showcase themselves, showcase their product portfolios, and engage with design engineers. It lasts a week, and it feels like a month. There are hundreds of meetings happening at the same time. It's a great opportunity for us to engage with customers, their executive teams, their design teams, follow up on projects, and define action items that are gonna set our pace for the next few years.
Perhaps there is no better place than Electronica to showcase the new reality of Yageo Group. This is the rendering of the booth Yageo Group will have this year at Electronica. It's a beautiful space. When I look at it, I feel very energized. I think that it tells our story. It communicates the story of a company that, from humble origins, was able, through perseverance and speed and desire, to really grow first within that confinement, and then to have the possibility to go even outside and dare to do more and continue to grow. I like the fact that this booth will communicate that to our customers. If you happen to be in Munich in November, and you wanna join Electronica, and you're walking around, please come by. We'll be happy to see you.
I guarantee you that the booth will be way more crowded than what you see in this picture. I wanna leave you in my closing comments with a question. Why Yageo Group? Who is this question for? Everybody. Shareholders, analysts, the media, investors, possible investors, employees, possible employees, customers, distributors. Anybody could ask themselves, why Yageo Group?
I believe that the answer, to me, revolves around four themes. I wanna start with scale, the concept of scale. You can be a very exciting company with a very interesting technology. You may even be able to engage with a customer and design that technology into a device successfully. If you cannot scale when the demand for that product picks up, then to me, you're not committed to the success of your customer. This is very important in our industry.
Nobody can command that level of scale like Yageo. Today, we have a great story that you will hear more about from Infineon, on how we were able to partner not just for a successful design, but for scaling up with that demand. Being global is not just a slogan, it's a necessity to be successful.
When you engage with customers of the likes of Infineon and Cisco, you need to be engaged usually on three different levels. You need to have a good connection with the executive team, and that executive team rarely sits in one place. You need to be very connected with the design engineers, and typically those locations are different than the executive locations. Might be the same, but more often than not, they are not. Last, you need to be very connected where the product is gonna be built and manufactured.
You may have great connections at the executive level and no connection on the design engineering side. Not a success. You can be very connected at the executive level, extremely well-connected with the design engineers, but when the product is launched and is in production, you may fail at the supply chain test.
The Yageo Group is truly global in numbers and in reality. The sales organization alone has about 1,000 people all over the world. There is no customer in any location we cannot go visit physically, face to face. We have manufacturing locations, as you heard, in more than 35 different countries, so the China Plus One concept for us becomes China Plus 34. Supply chain-wise, we can serve customers with plenty of options: direct, EMS, distribution, or a combination of the above.
Any company that is not capable of doing this will drop the ball somewhere along the way, and they will not be truly global. The Yageo Group is. What happens when you grow so much and you have scale and you're global? You need to be adaptable to change. Change is something that is with us at all times. In this industry, I go back 20 years, and I think about who were the leading companies in each segment.
Some of them seemed to have such an incredible competitive advantage, you would think they would never go away. 20 years later, many of them are no longer leading in that space because the world changed, things changed. Why Yageo Group? Because I believe Yageo demonstrated that we are not driven by change, we drive the change. The facts speak for us.
We are a completely different company than five, six years ago, and that's not by chance. That's because we were open to change. This is because of our attitude toward change. As you do all of this, you need to stay agile. You need to have speed. Companies that become very big, even if successful, tend to distance themselves from where the market changes and where the decisions have to happen.
Too much time, too many layers, too much bureaucracy. Yageo maintains this entrepreneurial mindset, and that is key. Decisions are fast. Decisions are made based on numbers, based on the trends we see and what we believe in, and once we commit, we go. All of this, to me, is the reason why Yageo Group is Built into Tomorrow: because we believe in tomorrow.
I don't know exactly what Yageo Group will look like tomorrow, five years from now, 10 years from now. I don't know that, but I know two things. The first thing I know is that it will be different from the Yageo Group of today. The second thing I know is that it will be bigger and better than the Yageo Group of today. With that, I wanna thank you again. I'm gonna officially kick off the session. Tom, back to you.
Thank you, Claudio. Thank you for the sharing. Throughout the entire afternoon, you'll hear from our speakers, you'll hear from our guest speakers, but at the same time, we wanted to hear from you in the audience as well. We prepared, towards the end of the session, a Q&A panel. What you have here, let me step aside so you can scan it, is a QR code through which you can submit any questions you might have for us, and we'll try to address as many of them as we can via the panel discussion. Now, with the amount of people here, I'm sure we can't get to everybody, but we're gonna do our very best. Next, I would like to introduce Professor Chris Lee from National Cheng Kung University's Electrical Engineering Department. Professor Lee will take us through a journey on the history of AI and give us some insights into where that journey is gonna take us in the future. Without further ado, Professor Lee.
Good afternoon, ladies and gentlemen. It's an honor to be here. Thank you very much, Mr. Pierre Chen, for the invitation. Well, I'm honored to be here to talk about the history of AI and why now. As you can actually see, Mr. Pierre Chen is an honored alumnus of NCKU. Mr. Pierre Chen is also, as you can see right now, a top leader in the industry, especially in the semiconductor industry. In 1980, when Mr. Pierre Chen graduated from NCKU, you were very proud of National Cheng Kung University. Today, National Cheng Kung University is very proud of you.
Yes. Thank you.
Going back to the history of what we have, as our senior head of business was saying, we see that technology is actually bringing fast changes to the world. If we go back in history, we see the first Industrial Revolution, when James Watt invented the steam engine. That was actually the beginning of energy.
Also, in the United States, you see that Franklin discovered electricity. Both energy and electricity actually brought forth automation. Back then, you see that a lot of things were made by artisan craftsmen all over the world, in Europe, in America, even in Asia. You see that a lot of these technologies were passed on from father to son, and also you see that the workplace was the family.
As a result of the Industrial Revolution, people moved to the factory with automation. You see that you had to get re-educated to go back to the workplace. This actually changed not only education, it changed the political structure, it changed the economic structure, it also changed the social structure of the world. Recently, we have witnessed AI and big data. You see that big data actually brought forth even faster changes. McKinsey was saying that due to AI, the speed of change is 10 times faster, and the scale will be 300 times larger.
Today we are witnessing AI and semiconductors. We see that we have ChatGPT with generative AI, which is being put on, say for example, the cloud, the data centers, the edge, the mobile devices. These are all making things possible, driving yet more demand for semiconductors. We're here today to witness how Yageo is on top of everything within the world of AI.
Let's go back to history again. In 1969, when Armstrong set foot on the moon, you see that to the left here, to my left here, we have computers. This was the computer which actually helped Armstrong set foot on the moon. That was also made possible because of math, because of an algorithm called the Kalman filter. In the beginning, we talked about computers, we talked about communications, we talked about control.
That was Professor Kalman, who was a professor at the University of Florida, who went to a conference and met a Russian mathematician. Together, they invented what we call the Kalman filter, which helped Armstrong, and mankind, set foot on the moon. Now you can see the circuits that we were looking at: these are the copper wires, and these two are the two NOR gates that we actually saw.
But that primitive computer, which is still sitting in MIT's museum, was actually able to help mankind set foot on the moon. You see that that's one small step for a man and one giant leap for mankind. I personally would like to say that this is one small step for an engineer and a giant leap for engineering. Okay. If we look at the overall landscape or trend of the global AI and semiconductor, we see that we started in the 1980s with IBM's supercomputer, and then, quite a few years back, we were looking at AlphaGo.
Today, we're looking at ChatGPT, not only ChatGPT-3, but also ChatGPT-4. Now, as Mr. Chen was rightfully saying, a lot of these algorithms, a lot of this software, is driving yet more complicated hardware, as we can see up there. We were talking about AI. I think the fundamentals of English are ABC, the alphabet. In AI, the fundamentals are also ABC, but A stands for algorithm, B stands for big data, and C stands for computing.
Now let's take a look at the algorithm and how it started. Engineers try to solve problems. Now, we started out, as you can see, with computers, and then we have control, we have communication. Now we're trying to see if we can automate things. People started trying to automate things by using or mimicking how the brain actually works, how the brain is thinking.
The computer in Chinese is the electronic brain. How does it look if we have the neurons? That's why, in 1943, the artificial neuron was invented. Okay? Also, many people know about Jensen Huang and also Lisa Su. They say, okay, a lot of these hardware gurus actually had their origins in Taiwan. I would like to let you know that AI also had its origin in Taiwan, because one of the gurus of AI, going back to when we had pattern recognition, is Professor King-Sun Fu, who is also from Taiwan. Okay?
He was the one who actually organized the first international conference on pattern recognition in 1973. He was also the inventor of syntactic pattern recognition. Okay? When we talk about AI, pattern recognition actually brought forth optical character recognition in 1970. Today, you're using a lot of this in license plate recognition when you park in the parking lot. Okay?
That was originating from pattern recognition. Back then, when you talked about AI, these were merely expert systems, or software which was mimicking the judgment of human beings. Later on, we started having neural networks, so we were talking about backpropagation. This brought forth what we call the convolutional neural network, which everybody is so familiar with today in deep learning, and also the recurrent neural network.
This actually made possible inventions such as, for example, image recognition with Google Lens, and also voice recognition, like Amazon's Alexa on the smart speaker, you know. Now, we were also talking about parallel computing. A lot of things were done in parallel, in vectors.
The words were grouped into vectors. That was the invention of machine translation. As you have all seen, AI is everywhere; when we walked into this room for Yageo, we were just witnessing a fruitful invention of AI, which is machine translation. We are having translation here from Chinese to English. Okay. That is also AI.
Now, you can also see that, in 2017, we started having the transformers, which led then to the large language models. OpenAI had their ChatGPT, then Google had their Bard, and also Microsoft has their MAI. This is some of the timeline that we see in AI today. Now, pattern recognition, as we said: when humans were trying to mimic the brain, they were trying to look at the neocortex, or the outer layer of our human brain, which has the function of recognizing specific patterns that help you differentiate or make decisions. Now, we also started with, say, for example, syntactic pattern recognition, which was invented by Professor King-Sun Fu. You see that, for optical character recognition, okay, syntactic pattern recognition has its roots in looking at English grammar.
Whenever we talk about contextual information, say, for example, you're trying to recognize characters. Now, if you are trying to recognize the vowels O and I, you may not be able to make a decision, because sometimes the handwriting is so cursive. So you look not only at the character, you look at an upper level of the data, another higher level, at the word level.
You look at the characters beside it. We look at love. We see that this is L-O-V-E, but there's also another possibility, which is L-I-V-E. Okay? Now, but still you cannot make a decision. Let's go one level up to the sentence level, to the syntax level of the sentence, and we say, "I love you." It doesn't make sense if you see, "I live you," right?
With this contextual information at the sentence level, we're actually able to see that this vowel is actually O. In this sense, syntactic pattern recognition was the beginning of large language models. A lot of this contextual information, a lot of these techniques, are still being used in large language models today when we look at the transformers.
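To make that concrete, here is a minimal, purely illustrative sketch of the idea: a toy word-level and sentence-level check, not Professor Fu's actual method or any real OCR system.

```python
# Minimal, illustrative sketch (not an actual OCR system): an ambiguous letter
# is resolved first at the word level, then at the sentence level.

WORDS = {"love", "live"}                  # toy lexicon (word-level context)
SENTENCES = {("i", "love", "you")}        # toy set of plausible sentences

def resolve(prefix, suffix, candidates, context, slot):
    """Return candidate letters that form a valid word AND a valid sentence."""
    viable = []
    for letter in candidates:
        word = prefix + letter + suffix
        if word not in WORDS:             # word level: 'love' and 'live' both pass
            continue
        sentence = list(context)
        sentence[slot] = word             # drop the word into the sentence
        if tuple(sentence) in SENTENCES:  # sentence level is decisive
            viable.append(letter)
    return viable

# "I l_ve you": both 'o' and 'i' give valid words, but only 'o' fits the sentence.
print(resolve("l", "ve", ["o", "i"], ["i", None, "you"], slot=1))  # -> ['o']
```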
Now, another name for pattern recognition is machine learning. In the beginning, machines actually learned through the coded instructions of humans, which help you look at the different features so that you can make decisions. Now, if the ear is trying to tell the difference, we try to listen to the harmonics in sound. Say, for example, my voice is pretty low and a lady's voice is rather high-pitched.
Those are the features which are being picked up by the human ear for the brain to make the decision. When you look at images, the features that you look at are the colors, the size, the shape, and even the texture. Okay? This actually helps you make decisions. What about the decisions? I think we have a lot of top-level managers here. Of course, you can make decisions using neural networks.
Back then in machine learning, we also had other ways of making decisions. I'm sure the managers know about Bayes' rule, because it helps you make decisions using hypothesis testing. A very good manager, if we go back to Bayes' rule, has a lot of experience. That actually gives them a good a priori probability model. A good manager will come up with a good a priori probability model.
From your staff members, you do marketing surveys, you have a lot of data, you update this a priori probability model, and then you go to the a posteriori model. That is prediction. That is classification. That is decision-making. That is how a lot of the top-level managers, that's how the generals, that's how the medical doctors make decisions based on the mentality of Bayesian learning. That was also something which was before the neural network. Of course, the neural network is also another way of making decisions.
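In symbols, that prior-to-posterior update is just Bayes' rule; the numbers below are a made-up illustration, not figures from the talk.

```latex
% Bayes' rule: the posterior is the prior updated by the evidence.
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
% Made-up illustration: prior belief that a launch succeeds, P(H) = 0.3;
% a favorable survey D has P(D \mid H) = 0.8 and P(D \mid \neg H) = 0.2, so
% P(D) = 0.8 \cdot 0.3 + 0.2 \cdot 0.7 = 0.38, and the posterior is
% P(H \mid D) = 0.24 / 0.38 \approx 0.63.
```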
Now, another name today that we use for pattern recognition and machine learning is called explainable AI. Well, it's basically the same technology, but it is important because we need to be able to understand these AI models, especially in medical applications. Okay? This brings us, of course, to data mining.
Well, data mining, this brings us to the neural networks. The neural network is actually mathematically modeling how the human brain's neurons are connected, or synapse together, into a network. This network is called a neural network, and with it we make decisions. One of the most common neural networks that we use is called the multilayer perceptron.
Now, of course, deep learning comes. Deep learning is learning, right? You're learning actually by updating your errors through incorrect past experience, and you have to remember that. I remember my son is, of course, grown up now. When my son was young, my wife was trying to teach my son. Okay? That was training. My son was in his infancy. He was a baby.
My wife would say, [Non-English content], "Call Mom." "Ma," and my son would say, "Muh." There is actually a discrepancy, or error, between what the teacher wants the student, which is my son, to learn. Back and forth, my wife repeated this and said, [Non-English content], and my son improved. Through back-and-forth training, which is the backpropagation that we were talking about, the student learns through the labeled data that we were talking about. Of course, you also have the multilayer perceptron behind deep learning, which is also helping you make decisions. When you do the testing, you would like to put the data into the trained AI model to see the classification, or to see how the machine is actually making decisions. Okay.
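As a minimal sketch of that training loop, here is a toy multilayer perceptron trained by backpropagation on labeled data (XOR in this case); it is purely illustrative, not any production model.

```python
# Toy multilayer perceptron trained by backpropagation on XOR (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Labeled data: inputs X and the answers y the "teacher" wants.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 neurons.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: the "student" produces an answer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error between the teacher's label and the student's answer.
    err = out - y

    # Backpropagation: push the error back through the layers and update.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

# "Testing": feed the data into the trained model and look at its decisions.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```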
Now, we're also talking about the generative AI. Now, I would say data mining, big data. Today, some people may think that this is mining gold, right? Mining gold, well, this is orange, and this is also gold. Generative AI would probably generate gold for us because big data is gold today. Big data is crude oil.
We see that we have the transformers. Now, the transformer actually has two parts. It tries to decompose, to break things into smaller pieces so that you can understand them more, and then it will also put them back together. Okay? For those of us in chemistry, and especially for Yageo, with a lot of experts in chemistry and applied materials: you're breaking something into smaller pieces.
You break them down into the elements, and then you break them down into the molecules, and then you break them down into the atoms. That is the encoding part, or the analysis part, which is what a CNN is doing. It breaks things down for you. In the decoding part, it puts the stuff back for you. If you start combining them, if it's physical chemistry, it becomes a material. If it's biochemistry, it becomes nutrition. This is the same way that we are trying to analyze things, say, for example, for a transformer, when we do text-to-image conversion. I was looking at a MediaTek demo of their transformer. They were typing in text. The first word was orange. The orange was broken down, and then it was synthesized into an image on the screen.
We saw an orange on the screen. The second word was furry, fur. We see that the screen pops up with orange fur. The third word was cat. You have a cat with orange fur. Okay? You see that this is transforming; this is a large language model which transforms text to images. It generates images. Okay? We do know that training and inference are very computationally intensive. They require a lot of computational power and, of course, definitely consume the huge amounts of power that we're actually looking at. Today, when you're looking at AI, we see that we can use AI to study the galaxy for the scientists.
We can use AI to predict the weather, which is then able to predict disasters, because that will save a lot of lives and a lot of money when we try to predict the disasters. AI is also used in robotics, for healthcare, and AI is also used, say, for example, in neuroplexors, which studies the 3D structure so that you can have new medicine.
Now, AI can also be in digital twins, because Taiwan is very strong in manufacturing for smart makers. If you have a digital twin, you can do a much better job with this digital twin that we have here. Now, AI can also go into augmented reality for smart driving, for intelligent transportation, and this is how AI is assisting in making the driver smarter and aware of more information in the car, in automotive.
Now AI, the large language models, are also driving demand for more ICs, and not only ICs, but other things in semiconductors. Today, we're witnessing the flourishing of, and we're embracing, the world of AI in semiconductors. Okay? Now, we talked about A. Let's come back. The fundamentals of AI include the algorithm. We talked about the algorithm. Let's look at big data, and also let's talk about computing. Now, you see that, because of ChatGPT and the large language models, in order to train them, the number of parameters becomes bigger and bigger. You can see all these curves here. Today, with ChatGPT-4, you can see that these models may actually even go to the 1 trillion parameters that you see.
You can then imagine that these large language models are energy hungry, and the energy needed to train ChatGPT-4 is sufficient to power 50 American homes for one century. You can actually see how much power is needed in training them.
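As a rough sanity check on that figure, assuming a typical American home uses on the order of 10,000 kWh per year (my assumption, not a number from the talk):

```latex
% 50 homes for 100 years, at roughly 10{,}000 kWh per home per year:
E \approx 50 \times 100 \times 10{,}000\ \text{kWh}
  = 5 \times 10^{7}\ \text{kWh}
  = 50\ \text{GWh},
% i.e. on the order of tens of gigawatt-hours for a single training run.
```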
Well, to be honest, a lot of these algorithms we already had back in the 1980s. In the 1970s, we started out with pattern recognition, and then we had the CNNs. They were ready back then. In order to have this, we were saying that we're mining big data. We're mining gold, okay? With the rise of big data, you see, because of the boom of the internet, because people are using text, speech, and video, this unstructured data is getting bigger and bigger, together with the structured data such as in databases.
This is the exponential growth that we actually see here. That's why we're not doing any new algorithms; it was the availability of the big data that has made a lot of AI happen. Not only that, you also have to have very strong computational power. Hey, I need to train 1 trillion parameters, and not only on the cloud; you also now have to do it on your edge, on your cell phone, at your fingertips. We started out, as you can see, and this actually went through 400 years through Moore's Law. We started out with mechanical machines, and then came the ENIAC machine, one of the first machines to come out, which used vacuum tubes.
We started having transistors, when the Cray computers, Sun Workstations, and also Apple's Macintosh came out. Right around then, we started looking at the integrated circuits, and we have the Pentium PC. Today, everybody witnesses GPUs, primarily from NVIDIA. We're also transitioning to heterogeneous computing.
As we were saying, the boom for AI actually drives more need for semiconductor design. You see that as the chips get larger, they get denser. The process gets smaller. TSMC, Samsung, and also Intel, they become denser. The AI models get so big, and the energy is soaring. This is exactly what we're looking at.
If you look back again, we had the algorithms in the 1980s, and then it was around the 1990s, as a result of the internet, when big data actually took off. We started having... well, we weren't quite ready. We had A and B, but we cannot do so without the C, which is computing, because you need to have the GPUs. You need to have the good edge devices, which have very strong computational power. It was not until the 2010s that we started having computers which were ready. This is why you see today the density for MediaTek, and we see that we have to start designing things from the algorithmic side, because you see that LoRA is actually trying to make the algorithm simpler.
Also on the GPU side, we need to have very high computational power, which is represented by the number of transistors that we see here. We need to do parallel computing, which is why we have GPUs. The data transfer, the amount of data that you need to transfer, is the most power-consuming part. Second to data transfer, the part that consumes the most power is the memory. These are all the problems that we need to solve. Hey, how have we improved after more than half a century? Armstrong set foot on the moon. Now you have SpaceX. Before, these were the circuits that we saw, with copper wires. Now we're looking at 3 nm for smartphones, for high-performance PCs. We're so proud once again today.
You see that Yageo has extended its products into computers, into communications, into control, which is what I see in industry. Today we also have circuits in the automotive car, and what about the E that we actually have? The car. The products have actually gone into the six Cs that we're looking at. Once again, we're so proud, and let's change the world together. Thank you.
Thank you, Professor Lee. I didn't know how much of AI actually, you know, started here in Taiwan, so I'm, you know, very proud of that. You know, I'm excited about this recent convergence of data and compute that's really accelerated AI over the last few years. Awesome. Thank you. One quick logistical point. We are running a bit behind on time. Originally, we had a break planned. I think, you know, we're gonna take that out. Refreshments are available outside, so feel free during the presentations, you know, to step out and then come back in, but we're not gonna have a formal break, you know, due to the time. Sorry. I'll leave it up for another couple of seconds if anyone wants to scan the QR code to submit questions. Okay.
Don't worry, we have it, we're gonna have this at the end of every presentation. I've been a closet techie most of my life, so I always feel like a kid in a candy store when I hear our Chief Technology Officer talk about our products. I want to share that excitement with you now and invite up Dr. Philip Lessner to talk further about our products here at Yageo. Thank you.
All right. Thank you, Tom. Okay. Yageo technology for AI. What are we doing? First, let me start with a little introduction here. Professor Lee, I think, covered it well. AI is transformational for work and life. Many different applications: machine translation, voice recognition, Industry 4.0, autonomous vehicles, and of course the generative AI that's taken off over the past few years. Just to give you some figures, AI is expected to have an economic impact of $19.9 trillion through 2030, and some are predicting that it will account for about 3.5% of the global GDP.
You know, AI could have a tremendous impact, but what we have to ask ourselves is: what could stop the growth of AI? Is there something that could stop this AI train from growing, so that it won't reach its potential, it won't reach its economic potential? Really, there are two fundamental issues that can potentially stop the growth of AI. One of those issues is we may not have enough power available, and the other issue is we may run out of data to train AI.
Yageo is not involved in the data portion of this, but with our components we are really fundamentally involved in the power portion of this. That's what this talk is really about: how is Yageo helping this power part of the equation so AI can continue to grow in the future? The world is taking notice of this potential power deficit, this potential energy deficit.
You know, I won't go read all this, but you can see the headlines about potential data centers that can't be built, about the AI industry's thirst for power, AI exhausting the power grid, and of course, you know, all these companies like Google and Meta, OpenAI that are involved in AI all have green goals, and so they may not meet their goals, because they may have to use sources of power that we're trying to shut down, like coal, and oil and things like that. So definitely an issue here. So I just want to show a graph that shows the issue more graphically. You can see between 2011 and about 2021, there was a moderate growth of data center energy use.
Starting in 2021 or 2022, when generative AI came on the scene, you can see that slope has drastically increased, and there's a prediction that by the year 2029 or 2030, you know, data centers may use 4.5% of the world's power, up from about 1%-2% today. So that's a big issue. That's a tremendous amount of power. As the headline said in the previous slide, we may run out of power plants, we may run out of sites to put these data centers, and that can potentially slow down or stop the growth of AI. Just for reference, Taiwan's yearly electricity use is about 280 TWh. So now data center energy use has exceeded the entire energy use of Taiwan.
Why is so much energy being used these days? That's really because the generative AI applications like ChatGPT and others are much more energy-hungry than the traditional applications in the data center. You can see a Google search uses about half a watt-hour per query. ChatGPT uses about 6x of that. If we go to using some type of AI-enhanced search, it could be using 16x the energy of a standard Google search. That's what's causing the slope of that energy use line to go up, and that's what's causing the potential crisis in energy availability to power these data centers.
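Put into numbers, using the per-query figures just quoted (the multiplication and the assumed query volume are mine):

```latex
E_{\text{Google}} \approx 0.5\ \text{Wh}, \qquad
E_{\text{ChatGPT}} \approx 6 \times 0.5\ \text{Wh} = 3\ \text{Wh}, \qquad
E_{\text{AI search}} \approx 16 \times 0.5\ \text{Wh} = 8\ \text{Wh}.
% At, say, a billion queries per day (an assumed volume), the gap between
% 0.5 Wh and 8 Wh per query is roughly 7.5 GWh of extra energy every day.
```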
The reason behind the increased energy use is really that the processors we need to power the new AI applications are much more power- and energy-hungry. If you go back 10 or 15 years, processors used 100 to 200 watts of power. Now the NVIDIA H100 GPU, which is the processor that powers most of the generative AI in data centers, uses 700 W of power, and the next-generation Blackwell is gonna use about 1,000 W, a kilowatt of power. That's a tremendous increase in the power usage of these processors. Here I show an NVIDIA H100 GPU board, a motherboard.
What I want you to take away from this is that the processor and the memory are in the middle. All the rest of the components on this board, many of which Yageo sells, are really power delivery, and the power delivery components probably represent about 75% of the real estate on these motherboards. Of course, we need 1,000 or 10,000 of these boards in data centers to power the generative AI and the new AI applications. That's a tremendous amount of power usage.
In addition to the energy that's needed to power the processors, 99% of the energy used by these processors ends up as waste heat, and that waste heat's gotta be removed from the data centers. The statistic is that 10%-40% of the energy that goes into powering data centers is energy that's used to remove waste heat.
It's not energy used for computation, it's energy used to remove the heat from computation via fans and pumps and stuff. There's also water used to cool the air that's used to cool the equipment. There's a tremendous amount of water used also to remove all this heat. You see the statistic there. That's also becoming an issue.
In addition to the energy use, water use is starting to become an issue in some locations. How do we reduce the energy consumption of AI? How do we get back and reduce the slope of that curve so we don't end up in a crisis? Well, there are several ways to do that. Two ways are better computing hardware and better software algorithms.
Really the fundamental thing here is that energy is power times time. Even though these new processors use a lot more power, if we can reduce the time of computation through better computing hardware and better software algorithms, we can actually reduce the amount of energy. Second is efficient power conversion, and that's where Yageo's components come in. I'll talk about that in some depth.
Third is new methods of thermal management. We need to take that 10%-40% overhead that's used for heat removal and reduce it to a lower number. I'll touch on that at the end of this talk. Reducing computation time: there's been a lot of work on that in the past decade or two, and the simple relation below shows why it matters so much for energy.
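To recall the fundamental relation, energy is power times time; the 2x/4x ratio below is my own illustration of the trade-off, not a figure from the talk.

```latex
E = P \times t
% Illustration: a processor that draws 2x the power but finishes the same
% job 4x faster uses half the energy per job:
E_{\text{new}} = (2P)\left(\tfrac{t}{4}\right) = \tfrac{1}{2}\,P\,t .
```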
Professor Lee touched on the move to heterogeneous computing, the move from pure CPU computing to CPU Plus GPU computing. The reason that works is that these machine learning applications, deep learning applications are conducive to parallel processing. The many cores of a GPU can speed up the computation. You can connect GPUs together in the data center.
As I said, 1,000 or 10,000 of these GPUs are connected together to train and to provide the results of these generative AI computations. You can design better hardware. There are certain operations that are performed for deep learning and for transformers and generative AI, and a lot of them are matrix and vector operations.
You can encode those operations directly in hardware. Professor Lee also mentioned that a lot of the energy and computation is due to transfer from memory to the processing unit. There's special High Bandwidth Memory that's been developed that can more efficiently transfer data between the memory and the processor.
That's had a big impact on computation time and energy usage. Finally, improved process and packaging. Professor Lee also touched on going from Apollo to today's 3 nm process at TSMC. The shrinking of transistors, and also I think as probably most of you know, the advanced packaging, silicon interposers have all made an impact on improving the processing speed. In addition to pure hardware, we also have software. Actually one of the biggest contributions to shrinking the processing time and therefore the energy usage is going to lower precision numbers.
I don't have time to go into all the details, but basically you can train the model with high precision numbers and then deploy the model with lower precision numbers, and that decreases the number of memory transfers that you have to make and therefore decreases the computation time.
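A minimal sketch of that idea, post-training quantization of weights from 32-bit floats to 8-bit integers, as an illustration of the general technique rather than any specific vendor's tool:

```python
# Minimal sketch: symmetric post-training quantization of float32 weights to
# int8, plus dequantization. Illustrative of the general idea only.
import numpy as np

def quantize_int8(w):
    """Map float32 weights onto int8 with a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0          # largest weight maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)

print("bytes per value: 4 ->", q.itemsize)   # 4x less data to move from memory
print("max absolute error:", np.abs(w - dequantize(q, scale)).max())
```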
Finally, as Professor Lee said, these models have billions or trillions of parameters. If you can get rid of some of the parameters, which is called pruning, then there are fewer parameters to transfer between the memory and the processor. You can speed up the computation, you can have fewer memory transfers, and therefore use less energy.
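And a similarly minimal sketch of magnitude pruning, zeroing out the smallest weights so there is less to move between memory and the processor (again just an illustration of the concept):

```python
# Minimal sketch: magnitude pruning. Zero out the smallest weights so that
# fewer parameters need to be stored and transferred.
import numpy as np

def prune_by_magnitude(w, keep_fraction):
    """Return a copy of w with all but the largest-magnitude weights zeroed."""
    k = int(w.size * keep_fraction)
    threshold = np.sort(np.abs(w), axis=None)[-k]   # k-th largest magnitude
    return np.where(np.abs(w) >= threshold, w, 0.0)

w = np.random.default_rng(1).normal(size=(8, 8))
pruned = prune_by_magnitude(w, keep_fraction=0.25)  # keep only 25% of weights
print("nonzero parameters:", np.count_nonzero(pruned), "of", w.size)
```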
Combined hardware and software energy reduction over the last decade has really given a 1,000x improvement in computation time. Again, energy is power times time. The 1,000x improvement has reduced the slope of that curve. Hopefully, as we move into the generative AI era, there can be similar improvements that will, you know, help us not use so much energy for the computation. Now I'd like to look at power distribution in the data center. Again, this is where Yageo components are playing in the equation.
Basically, the power starts out in the grid. That's very high voltage. That's usually stepped down to some intermediate voltage and then stepped down further to what's called low voltage AC. That's the 120-440 volts. That's where Yageo components begin to play in the power conversion equation. Then that AC is converted to DC.
That used to be 12 volts that was distributed throughout the data center. There's been a move to 48 volts from 12 volts, and I'll cover why that is in a subsequent slide. Then that 48 volts is stepped down to the voltages needed by the processors and the auxiliary equipment. For the processors, it's usually an intermediate 12-volt step down and then another step down to the 1 or less than 1 volt required for the CPUs, the GPUs, and the custom processors. Then, you know, there's various other loads like 12-volt, 3.3, 5-volt, 28-volt for the fans and for other auxiliary type of stuff.
Let's talk about the Yageo components. I do wanna say, due to the power and energy levels that are used in the data center, even a 0.5% or 1% gain in efficiency in the power conversion can have a tremendous impact on the total power usage when you're talking about 100 or 1,000 TWh. You know, multiply that by 1%, and you can see that the power usage can be greatly impacted. For the AC-to-DC 48-volt power conversion, there are usually several stages: an EMI filter, power factor correction, and then the step-down to the low-voltage DC.
We have many components that are used here, including film capacitors, inductors, transformers, sometimes, you know, ceramic MLCCs, and aluminum and tantalum capacitors. This is really the realm of the film capacitors and the Yageo magnetic components. I just wanna say that, you know, the move has been to try to get more than 90% power efficiency in this stage of conversion, and there are actually power standards around that. Now, 48 volts. Why 48 volts? Well, it really comes from Ohm's law. Power is voltage times current. Power loss is proportional to the square of the current times the resistance in the distribution system.
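Written out, these are just the standard circuit relations (nothing Yageo-specific):

```latex
% For a fixed delivered power P over a distribution path with resistance R:
P = V I \;\Rightarrow\; I = \frac{P}{V}, \qquad
P_{\text{loss}} = I^{2} R = \frac{P^{2}}{V^{2}}\,R .
% Raising V from 12 V to 48 V (4x) cuts the current by 4x and the
% distribution loss by 4^2 = 16x.
```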
If you go from 12 to 48 volts, you're increasing the voltage by 4x, you're decreasing the current by 4x, and therefore you're decreasing the power losses by 16x. That's a tremendous improvement in power loss. There are many different topologies that have been developed for 48-volt power conversion, so I'm just showing one of them here. This is the Google Switched Tank Converter, developed several years back. It uses several Yageo components. It uses our Pulse inductors, and it also uses some special KEMET MLCCs, this U2J dielectric, which we actually developed partially for this project.
It's a low-loss Class I dielectric, and we actually put capacitors together in what's called the KEMET KONNEKT configuration and rotate them on the board into a low-loss orientation. We make many other products for 48-volt. Not only the U2J capacitors, but we also have 75-volt tantalum polymer capacitors and a variety of 60- and 63-volt aluminum polymer capacitors that are used in these types of systems. Just here on the right side of the slide, this is the improvement over the standard topology by going to the Switched Tank Converter. That was the reason to do that, to get an improvement in efficiency. Again, you know, a 1% gain in efficiency is a tremendous reduction in energy usage.
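For a rough sense of magnitude, here is my arithmetic using the 1,000 TWh figure mentioned earlier and an assumed 10,000 kWh per home per year:

```latex
0.01 \times 1{,}000\ \text{TWh} = 10\ \text{TWh},
% which, at roughly 10{,}000 kWh per home per year, is on the order of the
% annual electricity use of a million homes.
```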
Now if you look at the last stage of power conversion, the point of load, from 12 volts to 1 volt or less than 1 volt, that needs to be very efficient. Here, the currents are extremely high. If we're talking a 1,000 W graphics card, GPU, at 1 volt, then we're talking 1,000 amps of current. We have extremely high currents.
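That current figure is just Ohm's-law arithmetic; the 1 mΩ example is my own illustration of why every milliohm between the converter and the processor matters.

```latex
I = \frac{P}{V} = \frac{1{,}000\ \text{W}}{1\ \text{V}} = 1{,}000\ \text{A}.
% At 1{,}000 A, even 1 m\Omega of path resistance would dissipate
% I^{2}R = (1{,}000)^{2} \times 0.001 = 1{,}000\ \text{W}.
```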
We need high efficiency, and the voltage regulation needs to be extremely tight, in the 10 mV range. It's an extremely demanding application here. There's been an evolution over the years, from the standard buck converter, which didn't provide enough current, to the multi-phase buck converter with low-loss inductors.
Over the years, Yageo Group has followed the trend of these advanced voltage converters. We developed the ferrite core power bead, which is a very low loss inductor for the multi-phase buck converter. In recent years, the TLVR, or trans-inductor voltage regulator, has been proposed and has begun to be used in some applications.
The Pulse brand of Yageo developed the TLVR inductor. Now we have a new product called NANOMET, which is a special material, nanocrystalline iron, which is capable of very high currents with very low losses. My colleague, Fumihiro Katakura, will talk about that in more detail, so I won't go into that. In addition to the inductors, we continue to develop very low loss, very low ESR capacitors for these applications.
I didn't highlight it on the NVIDIA board, but there are many tantalum and aluminum polymer capacitors, and the KEMET brand of Yageo has these. I just show a few here, and my colleague, Travis Ashburn, will talk in more detail about those products. Let's talk about some future directions. One is vertical power delivery. You want to move this final stage of power conversion as close as possible to the processor. Even a few centimeters away horizontally, you still have a lot of resistive losses. You want to move the power converter right underneath, if possible.
Shown here on the right-hand side of the slide is actually an Infineon module with our NANOMET inductor that's suitable for this vertical power delivery. Another thing is, as I mentioned, cooling overhead is a big part of the energy use. Air cooling is traditionally used. That's no longer efficient enough to remove all the heat, and it takes a lot of energy.
There's been a move to direct liquid cooling, bringing liquid directly to the processors. Eventually, there may be immersion cooling, where the motherboards are immersed in a cooling liquid, and that creates all sorts of potential issues for compatibility of all the components with these dielectric immersion liquids.
Yageo's working closely with our CPU and GPU partners to study the compatibility of our components with these immersion liquids. The good news is most of them are pretty compatible, with a few exceptions, and there we're doing development work. Finally, wide bandgap semiconductors. I think Professor Lee also mentioned that, so that's silicon carbide and gallium nitride. Because they operate at higher frequencies and higher temperatures and have potentially lower resistance at higher voltages, they're capable of much higher conversion efficiencies. That requires new component development; it requires us to develop, for example, lower loss film capacitors that operate at higher frequencies. We're developing components for this trend.
Finally, let me just close with a quick summary. Energy and resource use, I hope I've convinced you, is a critical issue that must be solved to continue to advance AI. It's really a puzzle; you know, engineering is usually never one solution. It's usually a series of solutions that you put together: optimized hardware, new algorithms, efficient cooling, and then, where Yageo comes in, efficient power conversion and the components that are required to enable it. I'd like to leave you with this: the Yageo Group has the components, the technology, and the materials science to support the development of these efficient power converters. Thank you very much.
Thank you, Phil. I always learn something when I listen to you. Our next speaker will go into the fun world of inductors, and I think Phil had already talked about that. I would like to introduce our Senior Vice President, Fumihiro Katakura.
Thank you, Tom. Today I'm proud to talk about a new material, NANOMET, for inductors. We have many inductor structures, and these are pictures of our inductor portfolio. Even though we have different structures and constructions, we all utilize a magnetic core material to increase the energy storage.
However, that introduces a limitation on the current due to saturation and creates switching loss. To enable high-current, high-density, high-efficiency designs, the core material becomes the limiting factor in the inductor performance and affects overall circuit performance. This is a picture of a board, and you can see the many inductors beside the IC; they are using a lot of the space on the board. It is quite important that we have better-performing core materials, and that is required for AI applications.
I want to talk a little bit about the history of material development by Yageo. We started in 1938, building on a development from Tohoku University by Dr. Hakaru Masumoto, who invented Sendust. This is an iron-based material, and we used this Sendust material to make telephone magnetic cores, the Flex Suppressor noise suppression sheet, and Sendust inductors.
In 2009, Dr. Makino, a professor at Tohoku University in Japan, invented NANOMET. NANOMET is a nano metal material. Together with Dr. Makino, we developed the NANOMET foil. In 2013, we moved on to the powder side. This powder enabled us to introduce the new NANOMET powder inductors.
This is the TPI structure; it was adopted by a customer for a power module, and it creates better power efficiency. Now, this is a picture of Dr. Honda. Dr. Honda, from Tohoku University in Japan, invented the KS magnet. With that, we started operations in Miyagi Prefecture, Japan, in 1938, and we are continuing the material development to support inductors and other components. Okay, what is NANOMET? I will talk about the details of NANOMET later, but it can be used to support these inductors for power modules or boost inductors for automotive. This is a product which we are producing in our facility in Shiroishi in Japan. In addition to that, we are working on a new development, the so-called flake composite.
The flake composite is a flake-stacking structure, so you can put the inductor into the PCB, and it will give you more efficient power for the power module. We have nano-analysis and simulation technology within our facility. Okay, what is NANOMET? We have four different key technologies in NANOMET.
NANOMET is an original composition with a high iron content, including copper for ease of nanocrystallization. Also, we do rapid quenching. It is rare for a component supplier to start from the material, but we do the rapid quenching. Then we do hot press molding with a high-insulation, high-density process, and heat treatment for the nano powder. These are the four key technologies we introduced with NANOMET.
Looking at this graph, these are the conventional materials: manganese zinc ferrite, Permalloy, or Sendust. If you go to the upper right, this is NANOMET. Going to the right-hand side means high permeability and high Bs. We introduced the powder in 2017, and it was embedded into the power module, the vertical stack power module.
These are several different materials which we are using: the ferrite material with manganese zinc, the metal composite with iron silicon chrome, and the flake composite with iron silicon aluminum. For NANOMET, we have a composition of six different elements: iron, silicon, boron, tin, copper, and chrome. Compared to a conventional material like the metal composite, the permeability is four times higher.
For the saturation flux density, Bs is almost 3 times higher, at 1.3. The permeability is very stable over temperature. The core loss is much lower than the metal composite, almost one-tenth. NANOMET is a very good material for giving efficiency to the inductors. If you look at this side, the x-axis is core Bs, and the y-axis is core loss. If you go to the bottom right, that is a very good material. Compared to other iron-based materials like iron amorphous or carbonyl iron, NANOMET, with our invention, is the better material. The benefits of the NANOMET material: it makes the inductor smaller, with a lower package size, high permeability and high saturation current, low core loss, and stable temperature performance.
The applications it suits very well are power modules for AI, point of load, data center servers, storage, AI learning machines, and supercomputers. This is a comparison of the NANOMET material with the metal composite and ferrite. The gray line at the bottom here is the metal composite. The metal composite inductance is a little bit lower than the NANOMET.
The ferrite material starts from the same level of inductance, but the saturation curve is poor compared to the NANOMET. Also, the core loss is much lower than the metal composite inductors and almost the same as the ferrite. This is the future roadmap of our NANOMET material. We have 1.3 saturation flux density at the gray ball, and permeability of 100.
In 2025, next year, we are going to introduce the low-mu product, which will give you high inductance at high current. In the first half of 2026, we will introduce low core loss at high frequency, 2 MHz. In the second half of 2026, a super-high-mu product with a permeability of 150. In 2027, high Bs. Our development roadmap will not stop, and it will support the future of this business and this component.
Lastly, I want to say that NANOMET is a very special material which can give you better performance in molded inductors, with high efficiency in power conversion systems, through improved saturation flux density and reduced core loss. We are targeting its introduction into data center servers, AI, automotive, computing, gaming PCs, and laptops. Thank you very much. This is the end of my presentation. Thank you very much.
Thank you. I think that's another great example of how, through innovation, we're satisfying the needs of AI. Next, I would like to bring up our Senior Vice President to talk about the unique qualities of polymer capacitors. Thank you.
All right. Good afternoon.
Good afternoon.
Okay. Always excited to talk about the tantalum business, so we'll kind of shoot right into that. First thing I wanna talk about is just capacitors. I know Claudio mentioned capacitors and lots of different things that Yageo does. Obviously, Hiro just covered inductors and the material sets. Tantalum falls in the capacitor box. Capacitors, as you guys know, are passive components. They're ubiquitous. They're everywhere in just about any circuit board that you find.
Obviously, Yageo has a full range of capacitor products. You know, today, as we're focused on AI, the things we're talking about are high-power and high-current circuit designs, and what those designs require is really the highest capacitance per unit volume. They require low equivalent series resistance. Again, that ESR. We talk about power consumption and heat, so that's important.
Those are characteristics that really fit well with what we call tantalum polymer capacitors. You can really see that here on this bottom design: where circuit board space, height requirements, temperature, or special electrical requirements come into play, that's when our customers are gonna start choosing to use tantalum capacitor products.
Again, Phil touched on this. We see increasing power consumption and thermal management demands in AI applications, and we're using those application requirements to help drive our technology, and I'm gonna show you how we do that in the tantalum polymer space. First of all, I wanna give a little bit of background about the tantalum business unit. In 2023, about $568 million in revenue.
We'll have grown in 2024, but that's where we were last year. We are number one in market share in the world, at about 40% based on revenue, and we are the world's largest tantalum polymer supplier. I do also wanna point out that if you look at our tantalum polymer five-year growth, we're running at about 10%.
Down here, very important, we have a very large R&D team across the globe. We have three global sites: one in the U.S., one in China, and one in Japan. Again, three R&D centers that really act globally with a huge focus on our tantalum polymer development activities. Okay. KEMET, now Yageo, is a very unique tantalum company. We've been in business since 1919.
Over 100 years of capacitor and tantalum capacitor experience. In 1959, we made the first solid tantalum capacitor. Again, over 60 years of making tantalum capacitors. A lot of knowledge in this tantalum group. Also, since 1997, we've been making tantalum polymer capacitors. Again, that's over 25 years of doing that as well.
A couple more important things here. In 2012, we purchased a tantalum powder manufacturing company. You know, Yageo Tantalum is the only vertically integrated tantalum capacitor company in the world. Not only do we purchase tantalum powders from suppliers, we also make them in-house. We're very involved in the total supply chain for tantalum, and we're the only supplier to do so. Then of course, importantly, in 2017, we purchased NEC TOKIN.
We purchased their tantalum polymer capacitor business as well as their supercapacitor business. That's been a great marriage of two companies. Both have very, very solid strengths in the tantalum polymer space, and we've leveraged that between each other, both from a business perspective and technologically. Okay. Again, a lot of experience in the tantalum business.
We won't touch on these things except to say, you know, Phil's already covered this, Chris has covered this. Thank you so much, guys. These are the drivers that we see, obviously the AI drivers we discussed today, and then I'm gonna talk about what we're doing to keep up with these AI and technology changes.
Again, the important thing here: as these algorithms and custom processors speed up their frequency of calculations and need more and more energy, the thermal management and the power management are very key, and we're gonna touch on what we're doing to make products for this space.
Here on the far left, what does the application require? What's the capacitor requirement? And then how do we manage it, and what are we doing technically to go after these application spaces? First of all, the world needs smaller components, so we can free up more space on the board. Space is precious and costly, so we're developing smaller components, getting higher capacitance per unit volume and lower loss.
We talked about that today: a lot of the power is actually turning into heat, and we don't want that. We need lower ESR, lower thermal resistance. Higher voltage: I think Phil touched on the 48-volt rail. As we wanna go to higher voltages, we need higher voltage performance and ratings from our capacitors.
Then lastly, just higher reliability. How do we get longer life in our products, and how do we increase the temperature capability of our products? Okay. I'm not gonna go into excruciating detail on these, but I do wanna touch on how we increase the capacitance per unit volume. You can see at the top, this is our package of tantalum capacitors.
On the far left, you'll see a traditional capacitor that uses a traditional lead frame for the positive and negative connection. Again, lots of wasted space. That black anode is the tantalum, and that delivers the capacitance. We need to use more of that space. We've done that in the middle, increasing capacitance about 40% by changing that lead frame design.
Then, of course, as we look to the future, the way to get the most capacitance out of that space is to take the lead frame out of that package. A lot of development and activity to get more capacitance into that same space. We're also very focused on powder, so that's the fundamental tantalum that you see. Think of tantalum powder as you would think of a sponge.
The more surface area we can get out of that sponge, the more capacitance we can get. You can see here that over the last 20 years, we've increased that surface area anywhere from 5 to 6 times. That means, again, we can get more capacitance out of that same size product, and we're just continuing to do that, working with our suppliers to do so.
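As a rough illustration of why powder surface area translates directly into capacitance, the parallel-plate relation C = eps_r*eps_0*A/d applies locally across the anodized dielectric film covering the powder. In this sketch the 5x surface-area increase is the figure from the talk; the permittivity is a typical textbook value for Ta2O5 and the film thickness and baseline area are illustrative assumptions.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m
EPS_R = 27        # typical relative permittivity of Ta2O5 (textbook value, assumed)

def anode_capacitance(surface_area_cm2, dielectric_nm):
    """Parallel-plate approximation summed over the whole powder surface."""
    area_m2 = surface_area_cm2 * 1e-4
    d_m = dielectric_nm * 1e-9
    return EPS_R * EPS0 * area_m2 / d_m

# Illustrative anode: same pellet volume, powder surface area increased 5x
old = anode_capacitance(surface_area_cm2=100, dielectric_nm=50)
new = anode_capacitance(surface_area_cm2=500, dielectric_nm=50)
print(f"baseline powder: {old * 1e6:.0f} uF")
print(f"5x surface area: {new * 1e6:.0f} uF")
# Capacitance scales linearly with wetted surface area, so a 5-6x finer powder
# yields roughly 5-6x the capacitance in the same case size.
```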
Lower ESR and thermal resistance. One of the unique things, if you're in electrical engineering, you know that when you put parts in parallel, you can cut that ESR in half. We can do that with different anode designs, and we're doing that in the AI space today. Today, if you look at a single-element traditional capacitor, we can get that resistance down to about 9 milliohms.
We can put two capacitors in the same package and get down to less than 4 milliohms. We also have an aluminum series; there are 6 elements here, and we can get down to 3 milliohms. Again, we're driving packaging technology so that we can meet the customer requirement and reduce those losses in the circuit.
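The parallel-element idea follows the standard reciprocal-sum rule for resistances. A minimal sketch is below; the 9 milliohm single-element figure is from the talk, while the per-element value in the six-element case is an assumption chosen for illustration. Note that the quoted sub-4-milliohm two-anode result is better than a naive pairing of two 9 milliohm elements, which suggests the individual anodes and interconnect are themselves further optimized.

```python
def parallel_esr(*esr_ohms):
    """Ideal ESR of capacitor elements in parallel: reciprocal sum."""
    return 1.0 / sum(1.0 / r for r in esr_ohms)

single = 0.009                       # ~9 mohm single-anode part (from the talk)
two = parallel_esr(0.009, 0.009)     # ideal pairing of two such anodes
six = parallel_esr(*[0.018] * 6)     # six elements of an assumed 18 mohm each

print(f"two anodes (ideal):   {two * 1e3:.1f} mohm")   # 4.5 mohm
print(f"six elements (ideal): {six * 1e3:.1f} mohm")   # 3.0 mohm
# Real multi-anode packages also share lead-frame and termination resistance,
# so measured values differ from the simple ideal calculation.
```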
Again, with our cathode system, which is really our negative system, we're always looking for ways to improve, whether it's the carbon and silver layers or the polymer itself. We're always looking to introduce new systems that will help do that. You can see here a project from a couple years ago: the black line is our previous ESR, and the red line is the newer one.
We're always trying to create new material sets, as Hiro's doing, material sets that give you lower loss in these circuits. Again, the dielectric is very, very important. For tantalum, the dielectric can really be measured in angstroms, so very, very thin, extremely thin. Having a very nearly perfect dielectric is extremely important.
Oxygen is really not a good thing to have when you're developing tantalum powders, so we're trying to reduce it in the processing. What you see here is that the advancements we make in getting oxygen and other impurities out of the powder manufacturing process allow us to have a much better dielectric system. What that allows us to do is have a much higher voltage rating for the product. Also, some cathode system work that we've done.
If you look here at the bottom left, the black line is a process that we've been running now for just about 25 years. About 10 years ago, we developed a system we call our slurry system, where we've actually changed our polymerization scheme. With the new polymerization scheme, as we've increased our formation voltage, we can actually get a higher breakdown voltage in the part. What does that mean for our circuits? Well, now we can develop 75-volt parts for these 48-volt lines. A lot of work is being done here at the polymerization scheme level to introduce new products for these types of applications.
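A quick worked note on what the 75-volt rating buys on a 48-volt line; the two voltages are from the talk, and the headroom calculation is simply their ratio, not a figure the speaker quoted.

```python
rated_v = 75.0  # new higher-voltage polymer rating (from the talk)
rail_v = 48.0   # nominal 48 V distribution rail (from the talk)

utilization = rail_v / rated_v
print(f"working voltage is {utilization:.0%} of rated")  # 64%
# i.e. roughly 36% headroom for transients and tolerance on the 48 V rail,
# which the older, lower formation/breakdown voltages could not provide.
```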
One of the big improvements was also with our corners and our edges, and this dielectric system, this slurry system improved that as well to reduce leakage. Again, we're always trying to innovate and patent ideas, and again, we are the market leader in tantalum polymer technology. Okay, higher temperature, longer life.
A lot of complex information on this page. For those of you that took chemistry in high school, oxygen and oxidation are typically not a positive. One thing that we've learned in producing polymer products is that we need to cross-link these products. What we found in this experiment, when our products are sitting in high-temperature air, a 175-degree-Celsius test, is that there's actually some breakdown of the chemical bonds.
What we developed is a special cross-linking mechanism that we use today in our slurry process. What that does is prevent oxygen from getting into our polymer chains and breaking them down. What does that mean? Well, it means now we have a 150-degree-Celsius offering that we can bring to the market for tantalum polymer, and we also have a long-life aluminum polymer product that we can bring to the market. Again, a lot of innovation on the polymer side, and it allows us to bring different and better products to the market. Okay.
Again, I'm not gonna go through this, but I just wanna say, when you think about Yageo tantalum, we've been in the business longer than anybody in the world, and we've got some really smart and bright people. I know Dr. Phil is probably on some of these patents, 'cause that's where he worked for many years.
But again, KEMET and Yageo are continuing to make advancements in tantalum polymer technology in all of the different areas, whether that's the coatings, the dielectric, or the polymer scheme, you name it; we are the market leader in this technology space. Okay. What does that mean to our customer? That's how we're doing things, and this is what our customer sees.
The applications we're going after: computation, again, that's AI servers, laptops, autonomous driving. Those are the solutions that you see down there. We have both commercial solutions and automotive solutions. Another key application for us is data storage. The enterprise solid-state drive, or SSD, market is a big market for us, and we continue to grow in that space.
We have those solutions, high-voltage solutions and high-density, high-energy solutions, in our T521 and T523. Of course, ADAS. You know, when you look at autonomous driving, whether that's cameras, lidars, or radars, we have the T597 and T598, high-temperature solutions in both small and larger case sizes. Okay. Then our focus products, again, high voltage. I talked about high voltage, and Phil did as well.
As we get into those higher-energy and higher-voltage rails for the AI server and AI PC, that's our T521 series. Looking at the high-energy polymer series, that's really our T523, and that would be geared towards the SSD market. Of course, over here for automotive, again, our A700 or A798, we look at that as well as our T598, for the AI server and the automotive market. Again, we are developing products and capabilities for these spaces where AI is prevalent. Okay. Future direction. We touched on some of these things today. Again, the goal is to increase capacitance by around 40% for some key 12- to 48-volt products.
We'll increase volumetric capacitance by about 100% for some of our lower-voltage 3.3- to 5-volt products, then of course just get more energy density and lower ESR for decoupling at the GPU and CPU, which is very important, and of course increase the temperature ratings of our products. Again, as you've seen today, with where AI is going and the high temperatures on the board, that's very important to us. Okay. Again, thank you for your time today. I appreciate your support, and you know, the tantalum business is really excited about the future and what we can do to support the AI emergence. Thank you.
Thank you, Travis. There's so much more about our products that we would like to share with you, but time is limited, and we wanna make sure that we hear from our customers as well. I'd like to introduce Athar Zaidi, Senior Vice President from Infineon.
Thank you. I know that you guys have been here for a long time, and I will try my best to keep you excited and engrossed in powering AI. The drawback of going after excellent speakers like Dr. Lessner and Dr. Chris is that I can't use the same statistics they have provided. I have to pivot and provide a new flavor of statistics, along the same lines as what was previously presented. I want to give you a real-life example of what we are doing at Infineon to power AI and why energy density, efficiency, and robustness are important. My name is Athar Zaidi. I take care of the power ICs and connectivity systems business line at Infineon, and we are powering AI from grid to core.
AI is transformational. Depending on what estimates you look at, McKinsey projects that by the end of the decade, data centers will be consuming 7% of the world's electricity. Just to put it in perspective, 7% is equivalent to the electricity consumed by India today. A very big number here. The grid is growing too.
The CAGR of energy on the grid is less than 3%, but AI is demanding more than 15%. Something's gotta give, something needs to break, and we are here to make sure that nothing breaks, because we are at the heart of powering AI, starting from the processor. Just to put things in perspective, a human brain consumes roughly 20 W of power.
It's the most efficient system that has evolved over millions of years. In the first 15 to 25 years of human life, we believe that intelligence is somewhat complete. If you take 20 W and you extrapolate it over 25 years, it's roughly 4.4 MWh of energy consumed to get to general intelligence.
If you look at GPT-4, which has 1.8 trillion parameters, training it required 8,000 H100s running continuously for 100 days, consuming roughly 13 GWh of energy. That is 3,000 times more than what the human brain consumes in the first 25 years of human life, and GPT-4 is nowhere close to the general intelligence that we have.
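Those numbers can be reproduced roughly as follows. The 20 W, 25-year, 8,000-GPU, and 100-day figures are the ones quoted in the talk; the ~700 W per H100 is an assumed per-accelerator draw (system and cooling overhead excluded), so treat this as an order-of-magnitude check only.

```python
# Human brain: 20 W sustained over 25 years
brain_wh = 20 * 25 * 365.25 * 24  # ~4.4e6 Wh = ~4.4 MWh

# GPT-4-class training run: 8,000 H100s for 100 days
gpu_count, days, watts_per_gpu = 8_000, 100, 700  # 700 W/GPU is an assumption
train_wh = gpu_count * watts_per_gpu * days * 24  # ~1.3e10 Wh = ~13 GWh

print(f"brain:    {brain_wh / 1e6:.1f} MWh")
print(f"training: {train_wh / 1e9:.1f} GWh")
print(f"ratio:    ~{train_wh / brain_wh:,.0f}x")  # ~3,000x
```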
We have a long, long way to go before we can claim that we have won the race of AI. That was my little pivot to provide the same statistics, but with a different flavor. Why is that so? GPUs are well suited to take care of AI workloads because they inherently operate in a parallel mode.
With the advent of transformers in 2017, LLMs have become a reality. Just to put it in perspective, 98% of data is dark. We don't know what to do with it because there is not enough compute in the world. Another statistic: roughly 80% of the data that the human race uses today has been generated in the past 18 months.
These are very, very big, staggering statistics. These language models are evolving. GPT-4 is 1.8 trillion parameters. GPT-5 will be 20 trillion. Roughly, the amount of compute required to train these models is doubling every 3.4 months. That's why you go from Ampere to Hopper to Blackwell to Rubin, and this is just the start.
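To see how quickly "doubling every 3.4 months" compounds, a quick sketch; the 3.4-month figure is the one quoted in the talk, and everything else is just the arithmetic.

```python
doubling_months = 3.4
per_year = 2 ** (12 / doubling_months)
per_two_years = 2 ** (24 / doubling_months)

print(f"training compute grows ~{per_year:.0f}x per year")          # ~12x
print(f"and ~{per_two_years:.0f}x over a two-year GPU generation")  # ~130x
```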
These things pose big challenges for us. One is the drain on the grid, as I talked about. The other is the carbon footprint, because running AI servers is very energy-intensive, and energy means carbon footprint. Energy which is consumed in compute is wasted as heat, and you require energy to take the heat out of the system, so thermals are becoming a big deal.
The last thing is e-waste, because in the CPU world, Intel and AMD used to launch a platform every three to four years. In the NVIDIA and hyperscaler world, they are promising that there will be one platform each year. There is a big FOMO, fear of missing out. That's why trillions of dollars are expected to be spent in the race of powering AI. Drain on the grid: by the end of the decade, 16% of U.S. electricity will be consumed by data centers. 50% of that will be AI. 80% of that will be consumed by four big hyperscalers. That is the reason that next-generation data centers need to be built near the source of energy.
You must have seen the news last week that the Three Mile Island nuclear reactor is being revived, and there is a lot of money going into cold fusion as well, because data centers are energy-intensive. Ireland is consuming 32% of the country's electricity in data centers, so no more data centers in Ireland unless they do something magical.
The expected growth of data centers is a staggering 10,000 worldwide. Most of them are going to be in the U.S., so that's definitely a big deal. The most important lever or knob that we have is improving the efficiency in the power path and making sure that the power management solutions are thermally friendly.
We have to look into every watt which is wasted from the grid to the core, right? Right now, this problem is solved only in one dimension. I'll tell you what that means. In terms of CPUs, 250 W used to be the norm not too long ago. It went to 400 W. In the next two years, the TDP of a GPU is gonna be 2,000 W. At the rack level, it used to be less than 10 kW. The GPU power has increased from 400 to 2,000 W. The rack power has increased by 10x, from 10 kW to 100 kW. There's even a discussion to do 1 MW on a rack. Imagine that. A rack is acting like a big giant.
An NVL72 GPU system has 72 of those running in parallel with an extremely complicated software stack and networking fabric, and there's a discussion of putting 288 of them in a rack, which will consume a tremendous amount of power. That problem cannot be solved with incremental innovation in silicon.
It has to be a disruptive innovation, not only in silicon, but in packaging, magnetics, and system design. It's a confluence of everything, right? This is a perfect storm which is happening right now. Fortunately, at Infineon, we have been on this journey for 20 years now, innovating on silicon, and now wide bandgap. We made an announcement a couple of days ago that we are building the world's largest fab, and we are going to 12-inch wafers with wide bandgap. That will be the backbone of powering AI. Power first.
Right now, when these GPUs are designed, they're talking about how many cores? Is it gonna be 4 nm? Is it gonna be 3 nm? Is it gonna be co-packaged optics? Is it gonna be chiplets? Is it gonna be CoWoS? Nobody's talking about power, because they think they deliver the chip, and power guys like us will find a way to do it. This has been going on for 20 years, no problem, but we have reached a point where power has become the bottleneck. To the extent that when I talk to the hyperscalers especially, they say they can unleash the power in the processor, but the power delivery on the board is not keeping up with that.
As Phil mentioned, a big amount of real estate right now is consumed by power management alone. As you get closer to the SoC, or a CPU or a GPU, that is the most expensive real estate, where every millimeter counts. That's where the innovation has to be done, on multiple fronts. One is silicon.
Today, we have vertical trench MOSFETs, and we are thinning them down to 20 microns. Just to put it in perspective, a human hair is 100 microns thick, and we have these MOSFETs which are thinned down to roughly 20 microns, and even that is not enough. We have to push the boundary. Now, we have to marry that innovation in silicon with innovation in packaging. That's where we are using chip embedding, where we do not have any inductance in the package.
There's no solder in the package, there's no copper clip in the package, there's no bond wire in the package, because all of these things make the package weaker, and things which are touching the SoC have to be absolutely bulletproof. They just cannot fail. The third thing is that we are innovating in magnetics by partnering with Yageo.
When we were on this journey of doing a high-density module, we scouted the entire world, believe it or not. For six months, we looked for who could deliver the best magnetics, and Yageo KEMET came to the rescue, and I will show you the part that I'm talking about. The compute problem cannot be one-dimensional, just compute per watt.
It has to be about the absolute power consumed by the system, and this is what we have to do together: drive a power-first sensitivity. PDN losses. Electrons need to be physically transferred from location one to location two. This is lossy, the I-squared-R loss that Phil talked about. Then the power which is wasted manifests itself as heat. In order to take the heat out, you have to spend power. It's a double whammy in the system. We have to reduce the power loss at every conversion stage, and make sure that the solutions we come up with are thermally enhanced and thermally friendly. From AC to core, roughly 17% of the energy is just wasted in power conversion.
By having not just better power semiconductors, whether silicon-based or wide-bandgap-based, but also innovating in system design and magnetics, we can recover roughly 8% of that power loss, increasing the efficiency of the power path from 83% to roughly 91%. How do we do this? The conventional way is you have the GPU or the SoC, and you build the power all around it, what we call lateral power. The Hopper examples shown here are examples of lateral power, in which the power has to travel a long distance. For every 1,000 W, roughly 100 W is wasted. I think this method is going to run out of steam fairly soon.
That's why we are working on what we call vertical power. With lateral power, roughly 10% of the power is wasted. You put the power on the backside, so you shorten the distance and reduce the losses from 10% to 2%. The Holy Grail is to bring the power close to the substrate of the SoC itself, which means that you have to put almost 1 kW of power delivery in less than 1.2 mm.
That's the space that we have, while making sure that the conversion efficiency is high enough and that you can extract the heat out of the system. This has never been done before. It's a very, very big challenge for a power semiconductor and system company like Infineon, and we are working on it.
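The "roughly 100 W lost per 1,000 W delivered" figure follows directly from I-squared-R at core voltages. Here is a hedged sketch: the ~1 V rail and the milliohm-level path resistances are illustrative assumptions chosen to reproduce the 10% (lateral) and 2% (vertical) loss figures just quoted, not measured values from the talk.

```python
def pdn_loss(power_w, rail_v, path_mohm):
    """I^2*R loss for delivering power_w at rail_v through a path of path_mohm."""
    current = power_w / rail_v
    return current ** 2 * (path_mohm / 1000.0)

P, V = 1000.0, 1.0                            # 1 kW delivered at ~1 V core voltage (assumed)
lateral = pdn_loss(P, V, path_mohm=0.10)      # long lateral route, assumed 0.1 mohm
vertical = pdn_loss(P, V, path_mohm=0.02)     # short backside route, assumed 0.02 mohm

print(f"lateral:  {lateral:.0f} W lost per kW (~10%)")
print(f"vertical: {vertical:.0f} W lost per kW (~2%)")
# At ~1 V the current is ~1,000 A, so every extra tenth of a milliohm in the
# delivery path costs ~100 W - which is why shortening the path by going
# vertical pays off so dramatically.
```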
If you take all of this and apply it to an example of a 100,000-CPU-node data center, there is an initial expense to do this type of power management, but the ROI is multifold. On a 100,000-CPU-node deployment, you can see that there is a TCO advantage of $30+ million that the customer can get right away.
We have been on this journey together with Yageo Group for more than a year. We developed a state-of-the-art power module that can deliver 160 amps of peak current across two phases. We have two versions, 8 mm and 5 mm. We are working on a next-generation one, which is only 4 mm.
I can tell you that this is the power module that I'm talking about. I am holding 100 W of power delivery between my two fingers. The plan is to develop a core version of it in the same form factor and deliver 280 amps. That is the next challenge that we are embarking upon. We are pushing the boundaries. Why? Because power density is the key. Power loss is a problem; it's a bad word. We have to extract as much value from the system as possible. By combining state-of-the-art magnetics with state-of-the-art silicon and packaging, we have been able to reduce the solution height by almost 20% and increase the density by 30%.
We are the only one in the world which can proudly say that we are doing inductor-on-top. Our competition is doing power-stage-on-top because they cannot extract the heat out. When you do power-stage-on-top and you put the heat sink on top, that can apply 30-60 PSI of pressure, and ICs do not like to be pressed on the top.
The inductor here is also acting like a heat sink, so it's thermally much more enhanced compared to the conventional solutions that we have. All right, last slide. The goal is to improve the efficiency in the system and recover the 10% power loss which is happening in the power delivery network. Increase the power density, because the holy grail is vertical power delivery, with the VRM underneath the SoC itself.
Of course, these things cannot break, so it has to be robust, it has to be bulletproof. That's why we are building our own fabs, powering AI, and working with the best in class here with KEMET Yageo to make sure that when customers put our solution on their board, they can sleep peacefully. With that, I end my talk. Hopefully, I kept you guys awake. I'm very excited to be here, and hopefully we can work wonders going forward. Thank you.
Thank you, Athar. I'm super excited about our collaboration together. With that, I'm gonna invite up our final speaker of the afternoon, from Cisco, Senior Vice President Marco De Martin.
Thanks. Well, what an exciting afternoon. Congrats, Pierre and Claudio and everybody from the Yageo team, for pulling this together. Definitely a lot of learning, I hope, for all of us today. I'm gonna try to take us home with a bit of excitement about how all those components that you saw today and all that technology actually get consumed to ultimately deliver AI to our customers.
To start, I'm gonna talk a little bit about Cisco's strategy, because I think it's important to have that context to understand how AI ultimately plays a role. If you go back to the pandemic period, a lot of companies found themselves, technology-wise, not ready to support what happened at that point in time.
A big part of our strategy is actually how we help our customers modernize their infrastructure to be ready for the next pandemic, for the next challenge out there. With that, obviously, cybersecurity plays a major role. Nowadays, security and networking are intertwined, and we play a major role in helping our customers run a secure business.
Last but not least, AI and data. There is plenty of data in the network. There's plenty of data that our customers deal with every day. Harnessing the power of that data through AI is definitely a big part of our strategy. With that being said, after everything you heard today, I don't think I need to be here on stage convincing you that AI will change the world.
I think, Pierre, you spelled it out very, very nicely at the end of your presentation about the power of AI. Let me give you a couple of statistics which I think are very important. We run this analysis once a year right now. It's called the AI Readiness Index, which gives a very interesting set of data points.
The first one that I want to call out is that 85% of the people interviewed think that AI will drastically change the way they run their business. 97% of the people interviewed believe that their business needs to move faster in terms of AI. The even more interesting data point is that only 14% of that 97% believe they are ready to embark on the journey of AI.
With that being said, think about the opportunities in everything that was discussed today in the room and the opportunity for Cisco in front of us. When you think about that and you go a little bit deeper, what is the AI strategy for a company like Cisco? How do we ultimately bring AI to life?
First and foremost, we are gonna deploy infrastructure to power AI, in terms of powering training models and inference models, both within the hyperscalers but also within enterprise companies. Now, AI brings a lot of opportunity, but it brings a big challenge, and the big challenge is called security. That's another big area of focus for us. It's not just how we secure AI, but also how we drive AI for security.
We live in a world, ultimately, where data is fueling AI. There's the power of extracting data and the power of helping our customers, through AI, harness the value of that data and drive insights that ultimately help drive business for our customers. We're also a big software company at the end of the day, and through AI embedded in the software that we deliver, we're helping our customers drive productivity, not just in IT, but throughout their businesses. Last but not least, what we provide when it comes to AI is a set of services for our customers to really enable them to harness the full value of AI.
Obviously, this is a big strategy, so you might be asking yourself, "Okay, how does that translate down to a data center ultimately, and what is happening, and what is Cisco doing in that regard?" Well, we started the journey many years ago of deploying mass-scale infrastructure within the hyperscalers.
Now, those are the first guys, if you will, that started the AI journey. There was great data provided right before about how much data and power those guys are consuming. But the beauty of a company like Cisco, serving those needs within the hyperscalers, is that we have been able to take the learnings that we have there and now apply those learnings to enterprise customers that need to deploy AI.
We're enabling the infrastructure within the enterprise, which is a totally different need. They operate differently, they have different needs, and they have a different learning experience around it. Then I'll talk in a second about how we're helping our enterprise customers actually leverage the full value of AI.
Before I go there, I think it's important to talk about the two building blocks of AI, because a lot of the time we talk about models, we talk about data. At the foundation of it, there are systems that actually leverage all the components that we just talked about today. Cisco has a pretty large portfolio that covers AI. With our Cisco 8000, we power AI within the hyperscalers. With our Nexus portfolio, we power AI and the infrastructure within the data center.
Ultimately, we have a fairly large compute portfolio. What's very, very different between Cisco and all the other vendors out there is that we don't just have software and hardware; we have silicon and optics. We're effectively the only supplier out there able to offer all our customers an end-to-end solution where we have the four key variables available.
Now, when you talk about the enterprise, there is another very, very interesting statistic out there. 85% of the companies that are starting to leverage use cases around AI have problems in deploying them. The reason why is that it takes expertise, it is still a fairly new technology, it is very costly, and people at times have no idea how to go about it.
The hyperscalers have mastered that, but everybody else out there is still learning on that journey. For us, looking at that need of solving this big problem for our customers, we have come out with a set of different solutions. One is what we call AI pods. This is effectively a validated solution where, if you are a customer and you're starting that journey toward AI, we offer you what you could call almost a reference design. You can buy a set of equipment from us, you can buy a set of equipment from other partners, and you can plug it together. It's a validated design. We're gonna guarantee it's gonna work. Not only do we guarantee that it's gonna work, but if you have problems, we also offer services around it.
Once you build those AI pods, the question becomes how you connect them across the network, within the data center and across data centers. Last but not least, we just announced a new solution which we're gonna deploy next year, which we call Hyperfabric. This is a totally brand-new solution.
This effectively gives you the capability, through software on your desktop, to design your network for AI, choose the different pieces of equipment that you believe you need, and look at different use cases. Once you've basically designed out what you believe you need, you can easily press a button and order the entire solution. The beauty is not just that.
Once the solution shows up at your corporation, through a SaaS cloud environment you're gonna be able to power up that solution and manage that solution in the cloud. Think about how we're gonna be able to not just enable the technology, but simplify the technology for our customers, because everything that you heard today is pretty complex, and it takes simplification to enable our customers to actually do that. To just bring this home... oh, sorry, just one more thing. A lot of the time, I get the question: what does an AI pod effectively look like?
It's nothing more than a stacked solution where we play a big role with computing and switching capabilities, but we work with a bunch of other partners around the ecosystem. To bring this home: we build great technology, but we would never be able to do that without the best supply chain in the world and the best set of suppliers in the world.
That's why I'm here today, because, I mean, Cisco started doing business with Yageo many, many years ago, and I've been personally doing business with Yageo for the past 10-plus years. The evolution of the company you are showing today is real: being a customer of Yageo today feels very, very different than being a customer of Yageo 10 years ago.
The beauty is the breadth of portfolio you can buy, the technology you can buy, and ultimately the transformation into a truly global company today. There are two examples I want to bring up about the partnership between Cisco and Yageo. The one on the left-hand side: once a year, Cisco runs a supplier appreciation event, which is well known across the supplier base. Cisco has roughly over 350 suppliers, and we give out about 10 awards every year. Last year, we gave an award to Yageo Group for the best quality in the industry. There is another big partnership we have through one of the key brands of Yageo Group, Pulse, which goes back to being a global company.
When we were trying to develop our strategy of building ICM outside of China, Yageo Group was the first one that came forward and enabled us to build ICM outside of China. We jointly went to India to build ICM, and that was one of the first times Cisco had an ICM built outside of China. With that being said, I'm closing here. Hopefully, you have enjoyed hearing how we consume the technology. Once again, thanks to Yageo Group.
Thank you so much for your presentation and your endorsement as well. With that, the speaker portion of our agenda this afternoon ends. Now we're gonna move into the Q&A session. We're gonna do a quick break, so we're just gonna ask our team, our staff here, to bring on the chairs. Then I would like to invite all of our speakers from today back up for the panel session.