Ladies and gentlemen, introducing Founder, Chairman, President, and CEO, Charles Liang.
Thank you. Thank you. Thank you. Thank you. Thank you. Good day, everyone. Thank you for this opportunity. I'm Charles Liang, Founder and CEO of Supermicro. Thank you so much for coming today. What a wonderful and amazing world today. AI changes our world so much, making it so convenient, so fancy, so accurate, and so fast. From autonomous driving, to self-checkout in stores, to advancements in healthcare and ChatGPT, we are now entering a new world of fast growth and massive adoption of AI infrastructure. Let's brainstorm for the next hour together. Let's see what we can do to better our business, our lives, and our environment. As you may know, Supermicro has experienced unprecedented business growth with AI in recent years, and we are just now starting to see the early signs of the tremendous benefits that generative AI training and inferencing are bringing to our world.
I guess some of you know that we were the first company to bring an optimized GPU platform to the market with NVIDIA, back in 2014.
Yeah!
Thank you. Since then, we have been one of the leading providers of AI systems, offering the most optimized solutions to many of the largest AI factories in the world today. By powering better AI, we are changing our lives for the better. A better life leads to a happier mind. A happier mind is very important. A happier mind is needed for us to do the right things. Speaking of the right things: I'm here today bearing a great gift for the IT industry. A gift so great that data centers all over the world will celebrate like it's their birthday. I'm here to prove that green computing can be free, with a big bonus! Yes, free, with a big bonus.
Many of you may be wondering, "Can this be true?" I will spoil the surprise and answer with a resounding yes, with a really big bonus: tens of millions of dollars, or even much more. Before going into the technical details, I'm happy and proud to announce some updates on my personal mission of reforesting the Sahara Desert with the Green Earth Foundation. Sharing the same objective as Supermicro's green computing initiative, it's my way of giving back to our one and only Mother Earth. We started propagating the most drought-resistant tree species in the Sahara Desert earlier this year as part of the Great Green Wall project, targeting a total of 10 million trees this year and 15 million trees a year later. The foundation's Silicon Valley headquarters is also making good progress on micropropagation of these drought-resistant tree species.
On behalf of the foundation, I welcome more of you to join this meaningful action to make our Mother Earth greener. With that said, Supermicro has ingrained green computing DNA into the design of every solution we offer. For decades, we have been the leader in designing and supplying the most optimal and efficient building blocks of today's server and storage infrastructure. We have just launched our new optimized X14 Intel Xeon 6 platforms, Sierra Forest-based now and, soon, Granite Rapids-based. Our AMD Gen 5 Turin-based solutions are also ready for early deployment now. These latest high-performance chips provide better performance per watt at the same cost, but at the cost of rising TDP.
Together with our partners, we are on a mission to build a more sustainable data center infrastructure that addresses the rising need for more compute and storage in today's world, especially when it comes to generative AI training and inferencing. The mission here is simple: can we improve data center efficiency, decreasing TCO while providing higher performance, with your new data center infrastructure? Supporting the latest AI developments can be challenging with traditional air cooling methods, which are expensive and environmentally taxing. While not a new concept, direct liquid cooling, DLC, has been around for more than 30 years. Unlike air cooling through air conditioning, which requires a substantial amount of electricity to operate, DLC can use just room-temperature water to provide optimal cooling for servers, at a much lower cost and with a smaller environmental impact.
Our goal is to make DLC quickly become a mainstream solution for any data center and AI factory that focuses on increasing efficiency and reducing OpEx. Here is what Supermicro's DLC solutions can do. Assume a mid-sized LLM data center for GPT-4, which requires roughly 8,000 H100 GPUs, or 1,000 HGX systems. With an optimized DLC environment and cluster, initial capital costs can be cut by up to 30%. That 30% saving on total cost comes from the reduction in data center space, a smaller UPS scale, smaller generators, much smaller air conditioning scale, fewer server racks, and other components. This can make the initial facility and system hardware acquisition cost about the same as, or lower than, a traditional air-cooled solution, thus making green computing with DLC essentially free. Basically free.
You don't have to pay extra for liquid cooling. And due to lower energy consumption from less air conditioning, DLC savings on OpEx can be up to 40%, reaching up to $60 million over five years, depending on location. This is the bonus on top of free. Saving $60 million may not be a really big deal or motivation for some big corporations. But through liquid cooling, we can also reduce a huge amount of CO2 emissions and make our planet greener. Doing this gives me a happy mind all day, every day. You can do the same if you like, and please do. Additionally, DLC will reduce power requirements. For the previously mentioned typical LLM data center: if air-cooled, it will need roughly 15 MW of power. With DLC, you will only need 10 MW to power the whole data center.
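A minimal back-of-envelope sketch of the arithmetic behind these claims. The 8,000-GPU cluster size, the 15 MW versus 10 MW figures, and the five-year window are from the talk; the electricity price is an assumed placeholder, since the talk only says savings depend on location.

```python
# Back-of-envelope check of the DLC claims above. Inputs marked
# "assumed" are illustrative placeholders, not Supermicro figures.

GPUS = 8_000                    # stated: mid-sized LLM data center
GPUS_PER_HGX = 8                # one HGX system carries 8 GPUs
print(f"HGX systems: {GPUS // GPUS_PER_HGX}")   # 1,000 systems

AIR_COOLED_MW = 15.0            # stated: air-cooled facility power
DLC_MW = 10.0                   # stated: same facility with DLC
USD_PER_KWH = 0.10              # assumed electricity price
HOURS_PER_YEAR = 8_760
YEARS = 5

saving_pct = (AIR_COOLED_MW - DLC_MW) / AIR_COOLED_MW
energy_saved_usd = ((AIR_COOLED_MW - DLC_MW) * 1_000
                    * HOURS_PER_YEAR * USD_PER_KWH * YEARS)

print(f"Facility power reduction: {saving_pct:.0%}")            # ~33%
print(f"Electricity saved over {YEARS} years: ${energy_saved_usd/1e6:.0f}M")
# ~$22M from electricity alone at $0.10/kWh; the "up to $60M" figure
# also folds in cooling CapEx, space, UPS/generator sizing, and
# higher energy prices in some locations.
```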
For cities and countries where power capacity is limited, this will open the door to a new class of data center. In the past, most customers were not interested in DLC because they assumed, and it was true, that: number one, DLC had a very long lead time, roughly 4 to 12 months, even in the last few years. Number two, DLC was much more expensive. You all know that. Number three, DLC was not very reliable and was difficult to maintain. Number four, DLC was unproven for the data center market. Indeed, over the last thirty years, the DLC market share grew from 0% to less than 1%. Well, that's why Supermicro is here with you, to revolutionize the industry together with our partners, especially NVIDIA.
We have spent day and night for the last three years addressing these customer concerns. Finally, we have now improved on all of them and are shipping DLC racks in volume production. Plus, we have the scale. First, we promise to deliver DLC racks, or data-center-scale solutions, in 2 to 4 weeks, instead of the 4 to 12 months you traditionally had to wait for DLC. Second, we can build DLC solutions at a lower cost than air-cooled solutions, as we proved earlier. Number three, our DLC solution is super reliable, with higher performance and much more solid uptime. In most cases, the maintenance cost is equal to or even less than air-cooled data centers. Number four, we are targeting 15% of global new data center developments
to be using our DLC solutions in the next 12 months, and hopefully up to 30% in the following years. Isn't this amazing? Going from 0% to less than 1% took us 30 years, and now we go from less than 1% to 15%, then 30%, in one or two years. Let's make friends who are earnest, capable, and wise. I'm telling you, DLC is the way of the future and a good friend. Let's be successful together. So much good news. Maybe we can take a rest now. But because everyone is excited, let me share a little bit more. Now, we have just established that DLC solutions are a more efficient cooling alternative to air conditioning. But it's no surprise that they require more complex integration into data center infrastructure.
While a typical air-cooled data center rack maxes out at 20 or 30 kW, a Supermicro DLC rack can be an 80 to 100 kW product right now. This allows new data center developments to create more powerful AI systems, supporting the latest AI applications and software in a fraction of the space. Currently, we are able to ship 1,000 racks of our industry-leading DLC solutions per month. As of right now, our USA campus is shipping up to 50 DLC racks daily to serve some of the world's AI leaders. Oh, what's the logo? One of our great partners, X, is currently deploying the largest and highest-performance AI LLM cluster in the world. Although DLC was still at less than 1% total market share last month, now it's different, because we started shipping in volume a few weeks ago.
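To make the "fraction of the space" claim concrete, a small sketch using the rack densities just quoted; the 10 MW load reuses the DLC figure from earlier as a stand-in for total IT load, and the midpoint densities are illustrative choices.

```python
# Rack-count sketch: same IT load at air-cooled vs. DLC rack densities.
IT_LOAD_KW = 10_000      # stated earlier: ~10 MW DLC data center
AIR_RACK_KW = 25         # assumed midpoint of the stated 20-30 kW
DLC_RACK_KW = 90         # assumed midpoint of the stated 80-100 kW

air_racks = IT_LOAD_KW / AIR_RACK_KW    # 400 racks
dlc_racks = IT_LOAD_KW / DLC_RACK_KW    # ~111 racks

print(f"Air-cooled racks: {air_racks:.0f}")
print(f"DLC racks:        {dlc_racks:.0f}")
print(f"DLC footprint:    {dlc_racks/air_racks:.0%} of air-cooled")
# ~28%: the same compute fits in roughly a quarter of the racks.
```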
Although DLC is still less than 1%, right? Our goal is to help the industry reach 30% DLC adoption in the next year or two. From less than 1% to 30% in one or two years. This will create another big hockey-stick growth curve. DLC costs the same as or less than a comparable air-conditioned solution. It offers a big bonus from the TCO savings, and it reduces CO2 emissions. It will preserve billions of trees. I love that. We have held more than 10,000 engineering meetings over the past year on this new DLC technology; I especially appreciate our hardworking engineers and great partners. I'm so excited that DLC is finally in volume production today. DLC can be free, with a big bonus. Let's be green computing partners together. For the love of AI, we are fully invested in DLC liquid cooling.
AI is so powerful and is changing our lives. I'm sure we still have many questions about it. Will AI someday control us? What more powerful AI chips are coming? How many powerful platforms will come soon? Or even, what's the amazing future of AI? I know, but I only know some. Fortunately, we are very lucky, again, to invite the AI genius, our common friend, and see, our common friend is very busy, huh? NVIDIA Founder and CEO, Jensen Huang, to share his great vision with us.
Jensen, thank you.
Hi, everybody. Now what?
Jensen, the world of AI is changing minute by minute, hour by hour, I think, because of you. What's new today?
I have to admit, just now, when I was coming to your keynote, in the car, I fell asleep. And so
Don't work too hard.
So right now, right now, I'm a little bit groggy. So if I say nonsense things, please, let me apologize first.
No problem.
Well, let's see. Charles, we go back a very long way.
Yeah.
What are we doing?
Oh, I needed some water.
Okay.
I need to speed up
Okay. All right.
My energy.
Yeah. They said I was on this side, and you keep going on my side. This is what happens when we don't practice.
You don't need to. And you have no time at all.
This is a very important time, because we have a new era of AI computing coming. There are two things that are happening at the same time. The first is accelerated computing. Accelerated computing has arrived at a time. Oh, green computing.
Yeah.
Is a green computer.
Yeah.
Okay. Green computing [Foreign language]
We bank on you.
I think, I think when you say green computing, you mean energy-efficient computing, right?
Yes.
NVIDIA is energy-efficient computing.
Yes, we have the same vision.
Yes.
We follow you.
All right. Look, green computing and green computing. All right, so, accelerated computing's time has come because for a very long time, the amount of data processing has been increasing exponentially.
Yeah.
And yet, CPU scaling has slowed for many, many years. So we now have an enormous amount of waste, wasted energy and wasted cost, trapped inside the data centers. So when we accelerate the data centers, the savings are incredible, because so much waste has been trapped for so long. And so now we can release that waste and use the energy for a new purpose. Number one: accelerate every application, accelerate every data center, with these amazing servers here, right?
So many new products.
Many new products. You have 220 new products. Unbelievable. Did he tell you that already?
No wonder.
Supermicro.
I have to work very hard every day.
I came to announce Supermicro's products. That's the first thing. The second thing is, because the energy efficiency and the performance efficiency and the cost efficiency are so incredibly great with accelerated computing, a new way of doing computing has emerged, and it's called generative AI. Generative AI is an incredible thing. People say generative AI and inference; they're related, but not the same. Inference: recognizing cats, dogs, speech. Generation: text generation, image generation, video generation. That's what we call generative AI. The pressure of generative AI to. Not the pressure, but the transition to generative AI will affect every single data center in the world. We have $1 trillion worth of data centers in the world that are established. Probably $3 trillion by 2030, in another six years. We have to modernize all of them with these amazing systems.
Yeah.
That's the reason why the demand is so great: because all of these data centers have to be modernized. Charles and the Supermicro team are ready to take your order.
Jensen
I'm your best sales guy.
Thank you.
I work on commission.
No commission. We will buy more chips from you.
Don't buy more chips.
With the liquid cooling. So that's how it goes. Jensen, Supermicro is now shipping data center DLC liquid cooling racks in volume production
Yeah.
to lower the power consumption.
Yeah.
You can manufacture more AI chips.
Yeah. Yeah.
Thousands of Hoppers here, you see?
Oh [Foreign language]
Yeah. [Foreign language]
I have many American colleagues. They don't understand my Chinese. I have many Chinese colleagues. They don't understand my Chinese. [Foreign language].
We are shipping up to 1,000 racks per month now.
1,000?
Racks. Like this.
Multiply by ASP?
Yeah.
You're going to be a gigantic company!
Yeah. Thank you. That's why I need more chips.
Did you guys all do the math? Millions times thousands, times 52.
No, no, no, you're kidding me, $2 million. More than $2 million per rack. Only.
Are we allowed to do this on TV? Are we on TV?
I guess, in a way.
Oh,
We are shipping about 1,000.
That's incredible. Now, this, this, 600,000 parts, this is probably more than 600,000 parts.
I think.
How many pounds?
Oh, I don't know.
[Foreign language]
[Foreign language]
I think it's 3,000 pounds, more than 3,000 pounds.
Roughly.
Yeah. It's incredible. So
Yeah, our goal.
Incredible
This year is to ship more than 10,000 racks like this.
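For anyone who wants to do the math Jensen teases above, a one-liner with the figures quoted in this exchange; the per-rack price is a rough conversational number, not a list price.

```python
# Rough annual revenue math from the exchange above.
USD_PER_RACK = 2_000_000    # "more than $2 million per rack" (rough)
RACKS_PER_MONTH = 1_000     # stated current shipping volume

annual_usd = USD_PER_RACK * RACKS_PER_MONTH * 12
print(f"${annual_usd/1e9:.0f}B per year")   # ~$24B/yr at these figures
```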
Yeah. You know, Charles, this is the thing that's really amazing. People think that we're building GPUs. You know, a GPU is a chip. There are 72 chips in here, and then there are 600,000 other parts. The 72 chips probably weigh one pound. This is 3,000, 2,999 other pounds. So the amount of technology that's inside one of these racks is really quite extraordinary. This is a technology marvel, the most, like, the most complex
Yeah
Most advanced computer the world's ever made.
Yeah, exactly the best in the world now.
Yeah, absolutely incredible. The software that it takes to run this.
Already.
Is unbelievable. Yeah, unbelievable. Isn't that right?
Yeah.
And so I think that people now are starting to realize that when we say GPU server, of course, the brain is the GPU.
Yeah
But the system is much, much more complex than that.
Yeah.
Supermicro does amazing engineering.
Thank you. [Foreign language]
Huh? [Foreign language] Okay, then.
We, there are some Americans here.
Then this year, we are going to ship, hopefully, next year.
When we're together, sometimes we speak Taiwanese, sometimes we speak Mandarin. Then when we disagree, we speak English.
We'll try to take DLC market share from 1% to 15%.
Mm.
This year.
Wow!
Save lots of power
Yeah, yeah
For your GPU chips.
Yeah, yeah. The energy efficiency is so much better.
Oh, so much
The cost to the data center is cheaper.
Cheaper.
That's right. People don't realize this. Liquid-cooled systems eliminate an enormous amount of cost in the data center.
Yeah.
So that you can capture that waste and put it into computing. In the future, computing throughput is revenue, because it's token generation, and token generation is dollars per million tokens. Just like energy, dollars per kilowatt-hour. We have now invented a new commodity. This is a very important idea for all of you. This is a new commodity. It has value, and the faster you can generate it, the higher the throughput, the greater the utilization, the higher your revenues. It is absolutely true.
Yeah
And it's directly measurable. That's why this is a factory, not a data center. That's why this is a factory, not a file server. It's not retrieving files. It's not used for exchanging emails. This is directly generating revenue. That's why we call them AI factories.
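The tokens-as-commodity arithmetic, sketched with placeholder numbers; the talk gives a rough rack price but no throughput or token price, so both are labeled assumptions.

```python
# Revenue = token throughput * price per token, the "new commodity."
TOKENS_PER_SEC = 1_000_000      # assumed aggregate rack throughput
USD_PER_M_TOKENS = 1.0          # assumed price per million tokens
UTILIZATION = 0.8               # assumed fraction of time generating
SECONDS_PER_YEAR = 365 * 24 * 3600

tokens_per_year = TOKENS_PER_SEC * SECONDS_PER_YEAR * UTILIZATION
revenue = tokens_per_year / 1e6 * USD_PER_M_TOKENS

print(f"Revenue per rack per year: ${revenue/1e6:.1f}M")
# ~$25M/yr under these assumptions, which is why throughput,
# utilization, and startup time translate directly into revenue.
```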
And so powerful, and only $3 million. Billion dollar. Billion.
Okay, so $3 million, and you can generate, who knows how much revenue per year, right?
Uh
Oh.
3 million, 1,000, and every year I have how many months? 12?
The return on large language model generation, token generation, is going to be very, very good.
Yeah, will be huge.
The reason for that is because the token embeds intelligence.
Yeah.
The intelligence could be used in so many different industries. So the future is very important. It's time to start up.
Yeah
Time to start up, throughput,
Yeah
utilization, they all matter.
Yeah.
Reliability has revenue implications. Throughput has revenue implications. Startup has revenue implications.
Yeah.
That's why it's so important that we integrate the whole system at rack scale.
Yeah.
Get all the software working and connect all of the networking. We build all of our own data centers and our own supercomputers, so that we know, when you install Supermicro in your factories, the startup time will be extremely fast. Your utilization will be extremely high, and your throughput will be extremely high, because your revenues depend on it. Factory output is measured by all of those factors.
Yeah.
Very complicated.
Yeah, and all of those racks are NVIDIA software devices, all certified. So customers
I love the sound of that.
just plug in the cable,
Yeah
And they can run the applications
And it runs. That's right. And all of the NVIDIA NIMs, all of the large language models.
Yeah
It just runs on all these systems.
Yeah.
[Foreign language]
[Foreign language]
[Foreign language]
[Foreign language] We are shipping a thousand, right?
Still very handsome. [Foreign language]
[Foreign language]
[Foreign language]
Yes.
Yeah. [Foreign language]
[Foreign language]
[Foreign language]
[Foreign language]
Very beautiful. Charles, Charles said that this is everything. Everything in here is NVIDIA. For all the American citizens there.
From Supermicro to Edge AI, everything.
Wow!
All NVIDIA software.
All green computing.
All green computing.
All green computer.
All green computer, all this cooling support.
That's fantastic. [Foreign language]
Good. [Foreign language] Let's go through some details.
[Foreign language].
[Foreign language].
[Foreign language].
[Foreign language].
[Foreign language] Oh, okay.
[Foreign language].
Okay. Okay.
H100, H200, B100, for you, with this cooling.
Wow!
Shipping in volume now.
Wow.
This one, your B200.
Uh-huh.
Fully ready.
Beautiful.
For your chip.
Beautiful. Beautiful.
This will be how many times faster than this?
So we have for Blackwell, Blackwell has air-cooled, liquid-cooled, x86, Grace, NVLink 8, NVLink 2, NVLink 36, NVLink 72.
Yeah.
So many different configurations.
Yeah.
So that depending on the type of utilization, type of use case you have, the type of data center that you have, Charles is ready to serve you immediately, right?
Immediately. He doesn't need a chip.
Yeah.
With one hand, we get your chips; with the other hand, we ship to the customer.
Wow! Thank goodness, we only need two hands.
In two weeks. In two weeks. Thank you very much.
That's incredible, and all of it software compatible.
All.
This is really, this is really an amazing thing.
Certified.
Literally, everything here is software compatible.
100%.
Yeah. And software, as we know, is the most complex part of high-performance computing.
Yeah, thank you for those great offerings.
Yeah.
They are all ready to serve
Yeah
our customers.
There are three very important software stacks that we have in our company that everything is built on top of. The first, of course, is CUDA, which is very famous. The second is for all of the networking, because networking is not just networking.
Oh, networking.
Networking today.
Yeah.
Networking today is a computing fabric. Networking today is a computing fabric.
InfiniBand.
Not just for sending email to each other.
InfiniBand, or 400, 800 MHz or GHz.
MHz? This is not 1980s.
[Foreign language] GHz, and
Which only MHz
[Foreign language] KHz .
GHz.
GHz. Yes. 400 Gbps, 800 Gbps.
Yeah.
And then, of course, the next generation coming is 1,600. But the important thing is, all of the software that we have that runs on the networking for distributed computing sits on top of two software stacks. One is called DOCA, for the NIC; and NCCL, for the fabric.
Yeah.
It enables us to distribute the workload across the network.
Yeah
Very, very efficiently.
Very good.
Because Ethernet was not designed for high-performance computing.
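To make "distribute the workload across the network" concrete at the software level, here is a minimal sketch using NCCL through PyTorch; the script layout, tensor sizes, and launch command are illustrative, not from the talk.

```python
# Minimal NCCL all-reduce: the collective that lets distributed training
# average gradients across GPUs over the fabric (NVLink/InfiniBand).
# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # NCCL handles the fabric
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each GPU holds its own shard of "gradients" (illustrative size).
    grads = torch.full((1024,), float(dist.get_rank()), device="cuda")

    # Sum across all ranks, then average: one line of user code; NCCL
    # picks the algorithms and physical links underneath.
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)
    grads /= dist.get_world_size()

    if dist.get_rank() == 0:
        print(f"averaged value: {grads[0].item():.2f}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```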
You make our job easier, but we're still very busy, because you have so many great products.
My job is to give you a job. And because you do such a good job, it gives me a job.
Oh, don't forget you.
[Foreign language]
[Foreign language]
Yeah, yeah. Yeah, yeah. [Foreign language] ?
[Foreign language]
[Foreign language]
Inside
[Foreign language]
Inside here.
This, this is an incredible, incredible system. In fact, these chips are all connected together using a high-speed interconnect, the world's fastest SerDes. The SerDes is incredibly fast and very energy-efficient, so we can connect this Grace CPU to dual Blackwell GPUs. And that's very important, because in the training stage, the memory system of Grace can be used for checkpoint-restart. Checkpointing and restarting are very important for high utilization and high uptime. And so checkpoint-restart state can be stored in the system memory. That system memory is very low-energy, very low-power, and the link bandwidth between Blackwell and Grace is very, very high. Second, during inference time, as you know, there's a concept called prompts. Context, in-context learning, prompting. That prompt memory, that context memory, is right here. This is the memory, the thinking memory, the working memory of AI.
This memory needs to be very high-performance and very low-energy. During training, we have good use for the Grace CPU.
Yeah.
During inference, we have excellent use for Grace CPU.
Yeah.
The interconnect is very, very high speed, very low power.
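A tiny sketch of the checkpoint-to-CPU-memory idea described above: weights are copied over the fast CPU-GPU link into host memory instead of slow remote storage. This is an illustration of the concept in plain PyTorch, not NVIDIA's actual checkpointing stack.

```python
import torch

# Illustrative model state living on the GPU (stand-in for a real LLM).
model = torch.nn.Linear(4096, 4096).cuda()

# Checkpoint: snapshot weights into host (Grace-style) memory over the
# CPU-GPU link; far cheaper per byte than writing to remote storage.
cpu_checkpoint = {name: tensor.detach().to("cpu")
                  for name, tensor in model.state_dict().items()}

# ...training continues; on a fault, restart from host memory...
model.load_state_dict(cpu_checkpoint)
print("restored from in-memory checkpoint")
```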
Fully optimized.
The benefit is because we compress so many in one system.
Yeah.
If we save 20 W, 50 W on the interconnect.
Yeah.
Multiply that by the whole rack, and then we can take the energy and use it for computing.
Yep.
Energy efficiency translates to higher performance.
Liquid cooling, too.
That's right. Green computing.
[Foreign language]
Huh? [Foreign language] I am a Supermicro employee.
[Foreign language]
I'm a Supermicro employee.
[Foreign language] Will AI control us?
Of course not. We have to, we have to. The most important thing, of course, at the moment, is that we have to make AI work well. Right now, AI is, of course, working extremely well, and in many applications, AI has become good enough. Good enough to become useful. It has achieved the plateau of good enough, of being very useful. However, we want it to be incredibly good. We want it to be very functional. Everything from guardrailing to fine-tuning to skill learning. There are many different things that we still have to improve, okay? So we know that AI still has a long way to go.
Yeah.
That's job number one: to advance the technology. At the same time, we have to advance safety technology. As you know, the planes that we all flew on to come here have autopilot, and autopilot is automatic technology. In order for planes to be safe, a great deal of technology had to be invented to keep the plane safe.
Yeah.
Also, practices to monitor the planes: air traffic control, other planes monitoring the planes, pilots monitoring each other. Many different ways to
Yeah
keep autopilot safe. In the future, we'll do the same thing with AI. There will be AIs that watch AIs. There are people that watch AIs. There are guardrails that keep AI guardrailed. And so there's going to be a whole lot of different technologies we need to create for safety, technology for safety.
Yeah.
Then third, of course, we need to have good policies for safety. Good practices and good policies for safety. Talking about it is very important, so that we can all remind each other that we have to do good science, good engineering, good business practice, good policy practice, good industrial practice. All of those things have to advance.
So perfect strategy. So the conclusion is what? The more you buy, the more safe.
The more you buy, the more you save. The more you buy, the more you save, yeah.
Thank you, Jensen.
Thank you.
Thank you so much. Good job.
Thank you, everybody.
Thank you.
Thank you. Thank you. Thank you.
Thank you.
All right.
Thank you.
Have a good show.
Thank you. So now we have talked about liquid cooling at the system level. Let's dive deeper into liquid cooling at rack scale. I would like to invite our Senior PM, Nelson Wang, and CW from our liquid cooling team, to tell us more about it. So, Nelson, can you please share with the audience the key benefits of our total plug-and-play DLC rack-scale solution?
Absolutely, Charles. I'm very excited today, because we just talked about it. A mid-size LLM data center hosts 8,000 H100 GPUs, or 1,000 of our HGX systems. Switching to Supermicro direct-to-chip liquid cooling will be free. We mentioned that, right?
Yep.
And with a bonus. So when we're talking about compute in air-cooled deployments, we at Supermicro make it easy. The power demands of generative AI workloads are tremendous. Even after all the new power-saving technologies, data centers are still running out of power, and we know that it can take years to build a new data center. So what's the upside here? Right? By switching to direct liquid cooling from traditional air-cooled configurations, our customers are enjoying 33% power savings, and that's huge. All right, Charles, we mentioned this last year, right?
Yep.
We mentioned it. We talked about this last year, and I want to update the audience today. I have one more thing to share. I'm proud to share that we've delivered on our two-week rapid rack delivery promise. This unprecedented speed of deployment ensures that data centers can scale rapidly and meet growing demands.
DLC
DLC
From Supermicro. Instead of four months or 12 months, now, this guy promises two weeks.
We promised together.
Two weeks.
We're in it together. So now I'm super excited to share some more great news: we're extending that same two-week promise to liquid cooling rack solutions delivered at scale. We are already shipping a volume of over 50 liquid-cooled racks per day. We're doing this. And we're not just dabbling in liquid cooling. We are dominating liquid cooling at rack scale. Thank you.
Thank you, Nelson. Josh, what do you want to share?
Thank you, Charles.
What's the good news?
Thank you. I am happy to share this news, to show you all how Supermicro is leading the landscape of AI solutions in 2024. First off, to date, Supermicro has shipped over 500,000 NVIDIA GPUs. Now, we actually did the math on that. This computational magnitude eclipses the combined computation of the world's top 20 supercomputers today. Now, allow me to demonstrate some of our cutting-edge products, already glamorized right over here. So I'm going to start with this one here. This one is built for versatility: a 32-GPU rack. It can support the NVIDIA H200s, like we have configured here, or you may choose to go with the AMD MI300X, or even Intel Gaudi. We make the choice available to you.
In addition, we can deploy this in your data center in an air-cooled environment, or, if you're ready for that free-with-a-bonus liquid-cooled environment, it can do that too. Now, what I'm about to show you is what a 256-GPU air-cooled cluster deployment looks like. With our latest, most glamorous, handsome Supermicro rack here, liquid-cooled density allows us to achieve that deployment with half the compute racks, giving you a brand-new level at which to scale out your AI data center, supporting you all the way through to a new magnitude of deployment. Now, we're not done yet. Complementing our entire solution is our liquid cooling tower, as well as our SuperCloud Composer software.
Our latest iteration allows you, in one single panel, to view and monitor your entire data center, from the sensors on the chips in each one of these GPUs all the way down to our liquid cooling tower, all in one pane of glass. This is Supermicro. We're advancing AI solutions, and the future of technology is here. Thank you.
One PO, one PO for system, rack scale, DLC, water tower, and management, all from one-stop shopping, and the delivery time can be 2 to 4 weeks. We changed, we changed the liquid cooling industry. CW, please.
Thank you, Charles. Our warm-water liquid cooling system not only improves thermal management but also reduces environmental impact compared to traditional chillers. By reducing the need for chilling, we redirect energy from cooling to computing, enhancing performance and offering significant energy savings. Our DLC solutions efficiently utilize warm water, up to 45 degrees Celsius, to dissipate up to 90% of the heat generated by the rack through liquid cooling. This optimizes thermal management, enabling enhanced hardware performance and computational throughput. Again, by reducing the need for chilling, we redirect energy from cooling to computing, offering energy savings of up to 40%. Supermicro DLC rack solutions double the computing density compared to air cooling.
This solution supports the highest-TDP GPUs and the highest rack densities, while offering heat reuse for sustainability, aligning with green data center initiatives and ESG goals.
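A small sketch of the heat budget behind these numbers. The 90% liquid-capture fraction is from the talk; the rack power and the per-kW cooling overheads are assumptions chosen only to illustrate why warm-water dry cooling beats chilled air.

```python
# Warm-water DLC heat budget (illustrative figures where noted).
RACK_KW = 100.0          # assumed DLC rack load
LIQUID_CAPTURE = 0.90    # stated: up to 90% of heat removed by liquid

liquid_kw = RACK_KW * LIQUID_CAPTURE   # rejected via 45 C warm water
air_kw = RACK_KW - liquid_kw           # residual handled by room air

# Assumed cooling power needed per kW of heat removed: dry coolers for
# 45 C water use far less energy than compressor-based chilled air.
WARM_WATER_OVERHEAD = 0.05
CHILLED_AIR_OVERHEAD = 0.40

dlc_cooling = liquid_kw * WARM_WATER_OVERHEAD + air_kw * CHILLED_AIR_OVERHEAD
all_air_cooling = RACK_KW * CHILLED_AIR_OVERHEAD

print(f"Cooling power, all-air: {all_air_cooling:.1f} kW/rack")
print(f"Cooling power, DLC:     {dlc_cooling:.1f} kW/rack")
print(f"Cooling energy saved:   {1 - dlc_cooling/all_air_cooling:.0%}")
# ~79% less cooling energy under these assumed overheads; the talk's
# "up to 40%" figure is for total facility energy and varies by climate.
```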
You work very hard, no?
Yes.
Last three years already.
Thank you.
Thank you a lot. Thank you a lot.
Yes, thank you, Charles.
Thank you. Thank you, guys.
Thank you, Charles.
Thank you. Thank you very much.
Thank you.
It looks like going for DLC means everything to gain and nothing to lose. Our DLC total solution includes systems, racks, plug-and-play deployment, water towers, and data center monitoring and management tools with SuperCloud Composer. This flawless and mature total solution is helping customers speed up their DLC data center readiness. Supermicro also provides on-site deployment, maintenance, and service to make it easier for customers to move to a DLC data center quickly. And the earlier data center customers go for DLC green computing, the more they save. Again, green computing with DLC can indeed be free, with a big bonus. Thank you all.
Thank you, Charles. Thank you.
Before, so many companies hesitated to go to liquid cooling. But liquid cooling: oh, too long a lead time, too expensive, not reliable, maintenance, da, da, da, da, da. Now, NVIDIA supports us, helps us so much, and we are able to help you from every angle, every angle. So, in 2 to 4 weeks, we can go for DLC together. With our recent expansion of our U.S., Taiwan, and Netherlands facilities, Supermicro has very large server and rack production capacity to supply and service worldwide data center deployments. Our new Malaysia campus will further boost our already large capacity and support even more new business opportunities. Currently, we have the capacity to design and deliver up to 5,000 racks per month. 5,000 per month, only. Jensen says, [Foreign language]. Being an end-to-end total solution provider, we must give some love to the client side as well.
Let's switch gears to talk about our latest edge AI and telco solutions. Here is Molly to share with us. Molly, James Oh, say hello.
Thank you, Charles. Thank you, thank you. Supermicro has focused on edge computing for more than 20 years. We leverage our building block solutions to extend our design know-how from data centers to compact edge form factors, including 2U multi-node systems and, for telco virtualized RAN, 2U systems that support data center GPUs and AI for remote retail applications. Our new X14 Hyper-E compact systems support AI inferencing by integrating up to 3 double-width GPUs. This optimized configuration has been adapted to accelerate AI in retail for virtualization services, frictionless self-checkout, and anti-theft security features. Supermicro telco and outdoor edge systems unlock great opportunities to bring more edge AI to more places.
Our outdoor edge systems carry an IP65 rating against water and dust, and run across a wide temperature range, from as low as minus 47 degrees Celsius up to 47 degrees Celsius, with a high-core-count CPU and multiple enterprise-level GPUs. Now, let's accelerate AI inferencing together. Thank you.
Thank you. This hour seems too short, huh? We still have so many things to share. Let me summarize what we have gone through today and bring this to a close. Over 30 years, DLC went from 0% to, as of last month, still less than 1%. Now, with DLC technology ready from our engineering team and our partners, we hope we can grow DLC market share from less than 1% to 15% in one year, and hopefully 30% in two years. And it should be right, it should be right: the same cost as a traditional air-cooled solution, with a big free TCO bonus. A data center can save up to $60 million, for example, and reduce CO2 emissions. This is equal to preserving a billion more trees for our planet.
Supermicro is shipping 1,000 DLC racks monthly now, and will be up to 2,000 very soon. Customers need to order now, because the lead time is getting longer; there are so many backorders. Green computing with DLC can be free, as we just proved, with a huge bonus. Beyond the tens or hundreds of millions of dollars in savings, the biggest bonus is your happy mind, knowing you are saving our Mother Earth, the only one. Let's go green together. Thank you so much. Thank you. I appreciate it.