Okay, thanks, everybody. My name is George Wang, and I'm the IT Hardware analyst at Barclays. It's my pleasure today to welcome Supermicro management, CFO David. Before I start, I just want to read the safe harbor statement: we request that investors visit the Supermicro IR webpage for the safe harbor and cautionary statements regarding financial history and projections, and the reconciliation of GAAP to non-GAAP measures. With that out of the way, maybe to get started, David, can you quickly run through the high-level Supermicro story, and what you bring to the table that differentiates you from other ODMs and OEMs?
Sure. Supermicro started out as a motherboard company 30 years ago, and a lot of people might ask: well, how does that help you today? The answer is that with the new technologies coming out, the demands on the motherboard are greater than ever, so Supermicro is in a unique position to quickly redesign the very best motherboards. We started by selling motherboards to Citibank for use in their ATMs, and that helped us get going. Later, we developed our own chassis, our own power supplies, and our own backplanes, and we eventually developed our own servers and server solutions. That was what we call Supermicro 2.0, around the time we did our IPO back in 2007.
We continued to add storage, and eventually now, Supermicro 3.0, which is racks. Racks really embody the ability to control the customer experience with plug-and-play. They give us the chance to sell a complete system with all of our technology, and the technology of the components we use inside, to control the experience.
Got you. That's super helpful. Maybe you can run through the key nuances of the server market. I think some investors may misunderstand the story: it used to be a lower-margin business, but now there's more complexity and faster delivery to market. Maybe you can talk about the key differentiation and the key moats for Supermicro.
Okay. Supermicro's strength, its moat, is the fact that we are approximately 50% engineers. At heart, we are an engineering company, led by one of the best engineers, we believe, in the world, in Charles Liang. That fact is helping us now because technology development has sped up. Previously you had mostly one platform to use, but now you have multiple platforms you can offer, with NVIDIA, AMD, Intel, and ARM solutions, as well as others. I'm talking about DDR5, PCIe Gen 5, CXL, and so on.
So there are a lot of emerging technologies, and this really plays into Supermicro's model, which is our Building Block Solutions. What we call Building Block Solutions means we architected the server technology from the ground up. That is, we designed our own chassis and our own motherboards. We built all of the pieces of the server and designed them in-house, and that gives us the ability to quickly incorporate new technologies. It also gives us the ability to customize solutions, which is really our forte: customizing solutions for companies' specific applications.
That differentiates us from both the ODMs and the other tier-one server manufacturers, who try to offer a select number of SKUs to address a broader part of the market. We go in and design something unique for a customer that not only gives them the very best cost-performance metric, but also the lowest total cost of ownership, because we have designed our servers for low power consumption and good heat management.
That also helps us now, because with GPUs going past 1,000 watts and dissipating 1,000 watts of heat, and CPUs dissipating in excess of 500 watts, heat is becoming more important. Our ability to provide liquid-cooled solutions, and what we call green computing, the lowest amount of heat dissipation, gives us a competitive edge.
Yeah, that's a good segue to double-click on liquid cooling. We still think it's underappreciated. You guys are a pioneer in rack-scale plug-and-play, but also a first mover in liquid cooling, which increases your penetration into the modern data center. Can you elaborate on that penetration? You've talked about roughly 20% data center usage for liquid cooling, especially with more constraints on power and heat going forward. I think that should continue to contribute to share gains for Supermicro.
Yeah. In trying to promote green computing, we've tried to promote liquid cooling, because we believe a customer can save up to 40% of their operating costs by utilizing it. They can also save on CapEx, because if you're building a data center, you're going to need a lower-powered chiller to reduce temperatures, and that saves you CapEx. You can also save on OpEx by not having to run the chiller as often if you're using a direct-to-chip or other liquid cooling solution. Besides direct-to-chip, we actually have three or more types of solutions.
So we think liquid cooling is the future, and as the new GPUs go over 1,000 watts, we think liquid cooling will start to resonate with more customers. We've estimated that over the next couple of years, 20% of data centers will employ liquid cooling. You might ask, why isn't everybody using liquid cooling right now? Well, number one, it's more expensive, and so some companies just decide: "Okay, we'll run our electricity bill a little higher, it's okay, and I'll have a lower CapEx to take to my approval committee." Number two, some customers believe it'll take longer to build, which in some cases is true. You have a couple more steps in order to add the liquid cooling features, so it is going to take a little bit longer. Those are the impediments to adoption, but we believe that's where the market will have to go. We even wonder whether warranties will start to come into play. We don't know that at all, but that's one possibility that has been speculated: that warranties will start to come into play as heat becomes a more important part of server technology.
That makes sense. Taking a step back, what are the top pillars of your strategy over the medium and long term that investors should care about? And as the calendar flips into 2024, what are the top three or four initiatives you're working on right now?
Okay. In terms of the top pillars, first of all, we believe in what AI is going to be able to provide. AI doesn't help every workload, okay? We still have customers with very compute-intensive workloads that don't need AI. For instance, we sell to a large semiconductor company that isn't using GPUs in their compute because they're doing EDA. However, we also have companies building autonomous driving solutions, customers that are streaming videos, and companies that are doing online gaming.
They're doing different graphics-intensive applications, and they're going to be the users of AI-based solutions. Right now, I think it's Accenture that has stated that if a company hasn't defined its AI solution, if they're not defining what their AI platform is going to be, they risk going out of business. So a lot of companies are right now taking a look at what they need to do with their AI strategy, and we think that's going to be a predominant theme over the next couple of years.
Gotcha. Just to stay topical, let's talk about next-generation platforms. Aside from the traditional NVIDIA H100, you have AMD, you have Intel, you have the hyperscalers' in-house ASICs ramping up. Obviously, AMD had a big event yesterday with the MI300 and threw out a bigger TDP number, much higher than people expected. Maybe you can double-click on the chip supply into next year, as you work with all those guys.
Yeah. On the supply side, supply is improving. It's going to take time, because besides the GPUs and the CPUs, there are also other silicon-based components, add-on cards and network interface cards that rely on silicon. And as we all know, there are additional fabs being built in Arizona, Ohio, New York, and over in Europe as well, by TSMC, Intel, and Micron. These are going to bring new silicon to market, and that's going to alleviate some of the long lead times we currently face.
What we've seen is supply getting better, and we're encouraged by that. We believe our revenues have been constrained by supply, and it's an interesting thing: we were constrained during the pandemic by supply for different reasons. Those were logistics-based, the ability to get ships in and out of the Long Beach, Oakland, and Los Angeles harbors. We got out of that, came out of the pandemic, and went right into a different kind of constraint: a constraint for silicon, as well as for GPUs, because of demand.
That makes sense. Maybe you can parse out some of the nuances of training versus inference. Obviously, inference is still a smaller mix right now, but it's forecast to see much bigger growth going forward as you get more applications. The L40S is probably targeted at inference use cases, and the ASICs, the CSPs' in-house chips, may be more for inference, versus the H100 and H200, which could be more for training. I think Supermicro is agnostic across those different workloads, so maybe you can double-click on that.
Yeah. We try to bring the very best technologies together in our solutions, and we don't use any one technology exclusively. We're going to bring, number one, what we think are the highest-performing solutions for a particular workload. Now, if a customer has a preference for one vendor over another, we're going to follow that preference. But we largely will recommend to a customer the very best, top-performing solution for their particular application, whether it's online gaming or online payment processing. We have zero trust-based cybersecurity providers, and we have companies doing global cloud-based storage solutions.
So whatever the application is, we're going to bring the very best technology. But George is correct: we believe the market will be moving over from training to inferencing. Initially, say, we were selling to autonomous driving companies for training, as they were looking at images and training their systems on how to identify objects. Later, that will move to inferencing, where they're sending that information out to run in their network. And we think there are some really good inferencing solutions coming out, like the L40S, and those have particular appeal in certain workloads.
Gotcha. Then maybe you can talk about capacity expansion. Obviously, with supply improving week by week, to better fulfill customer demand and backlog, you eventually want to increase capacity. It's nice to see the improvement from 4,000 racks per month to 5,000 next year. So maybe you can talk about capacity expansion, and eventually Malaysia coming online in the latter part of next year.
Okay. Over the last two years, we've grown at a CAGR of 42%. We moved our operating margin up 700 basis points, and we moved our gross margin up 300 basis points. We doubled our EPS each of the last two years, going from $2.58 in fiscal year 2021 up to $5.65, and from $5.65 to $11.81, on a non-GAAP basis. This increase in demand was largely led by AI, because we were first to market with a complete set of AI solutions.
In fact, our CEO developed a supercomputer for Jensen Huang, the CEO of NVIDIA, over 10 years ago. That actually came out in an interview Jensen did, which is out on YouTube, where he thanked Charles for developing that system for him, that supercomputer for AI applications. We think this growth will continue, and that's why we have developed additional capacity over in Taiwan.
I was over in Taiwan back in 2019 when we were breaking ground on a new facility there, which is now, say, 40% utilized. And in Johor, Malaysia, which is about 20 minutes away from Singapore, we are adding another site that we've broken ground on and are making good progress on. It should be done toward the back half of next calendar year. That will again give us bigger capacity as we continue to sell a lot more servers and, more importantly, a lot more racks.
Yeah, maybe we can talk about margin and ASP. Obviously, your gross margin is 3x the Taiwanese ODMs', given the bigger value add from Supermicro and your pricing power. Also, additional overseas capacity coming on at, by our estimate, 50% or 60% lower labor cost should lead to better operating margin going forward. And the ASP uplift is underappreciated as the world moves to next generation: the H200 is priced much higher than the H100, and on legacy traditional compute, going to Genoa and Bergamo is also a double-digit increase in ASP. So maybe you can talk about the sustainability of this margin trend, and whether it can potentially work up over time.
Okay. Back in March of 2021, we put out a target financial model, which is out on our website, so you can take a look at it. We set a target of top-line growth of 17%-23% and a 14%-17% gross margin target. We also wanted to keep our OpEx below 10%, which we have. In fact, we have one of the lowest OpEx rates, we think, in the server industry, and if you look at our OpEx spend, it's geared toward R&D.
We spend a lot of money on R&D, and we will continue to do that, because our engineers, who are, by the way, among the best in the industry, work very hard, and we focus on R&D. That's part of our DNA. We're very, very fast to market, and we think that's again a differentiator for our company. And we believe margins will continue to be within our target range of 14%-17%, because of our Building Block Solutions approach to engineering as well as our speed to market. That gives us an edge.
If you look at that 300% increase over the last two years, it coincides with AI driving our growth. And as these new technologies come out that we incorporate into our solutions, we think we will continue that very fast-to-market, first-to-market capability with the latest technologies.
Got it. In terms of the top catalysts over the next 12 months, maybe you can elaborate on continued wins from additional customers and additional share gains from both the tier-one OEMs and the Taiwanese ODMs, with the liquid cooling technology ramping up. Maybe you can talk about a few things investors can look forward to over the next six to 12 months.
Okay. Growth has always been part of Supermicro's story. Between 2010 and 2018, which was the year I joined, we had a CAGR of 21%. Now, like I said, the last two years, we've been at 42%. We've typically grown much faster than the server industry as a whole. In fact, the 42% in the last two years compares to a 7%-8% growth rate for the overall server industry, so we believe we've grown at 5x the industry the last two years. What that means is that we're taking market share.
Now, remember, we believe we're somewhere around 5%-6% of the server industry, with a market size of probably $100-$120 billion. Formerly, we would say that we had to win one out of 20 deals. Now, with AI, we believe the market TAM is actually up over $200 billion. So we just have to be successful in, like I said, one to two out of 20 occurrences to improve our market share, and we're doing everything we can to do that by bringing the very best products to market.
Yeah. It's remarkable that Supermicro grew revenue rapidly through different cycles, right? If you look at the last five to 10 years, irrespective of macro and the external factors beyond Supermicro's control, so to speak, the chip cycle, the macro cycle, you guys still grew revenue. Maybe you can talk about some of the internal levers you can pull, the self-help, to continue to sustain this double-digit growth.
Okay. Thanks, George. That's a great question, and it gives me a chance to reemphasize what our strengths are. During the pandemic, even though everyone in the industry had supply shortages, we were able to grow, and we did that, again, because of the way we architect our products. We qualify a number of different components to work in our systems, so if there's a shortage in one area, we're able to pivot and use different solutions. Additionally, when we design our motherboards and our chassis, we design them to work in a lot of different applications.
That gives us the ability, when there are shortages in one area, to quickly pivot over and use other technologies. We can also innovate, in some cases, to design our own components as needed. For instance, at one point, when there was a shortage of switches, we built our own switches, not because we wanted to be a switch company, but because there was a shortage. So our engineering and design capabilities have helped us make it through several different shortage situations, and we think they will continue to help us as technologies become even more complicated and the technology cycle shortens and gets faster.
We also think liquid cooling is again going to help the attractiveness of our solutions, as companies seek both to reduce their electricity costs and to reduce their carbon footprint, because a lot of companies do have ESG targets. But what we always ask is: are they actually doing that in their data centers? Are they really investing the money to reduce their carbon footprint? We think that if not for ESG purposes, perhaps it will just be the immense heat now being generated in these systems that moves them to embrace green computing.
Yeah, two additional points, maybe not as well understood by investors: you guys have de minimis exposure to China, only 2%-3% of revenue, versus other direct competitors who are either manufacturing in China or selling into China and so are restricted. And also your close proximity to key partners, since you're the only server supplier based in Silicon Valley. Maybe you can address these two differentiated competitive edges.
Okay. Yeah. We have been in Silicon Valley since 1993. We were founded in San Jose, actually the same year NVIDIA was founded, and we've been a partner of theirs for most of that time. We started in San Jose in 1993, then around 1997 we started operations in Taiwan, and later we built an assembly and test site over in the Netherlands as well. We have approximately 5,400 employees, with about 2,600 of those in San Jose, down in the South Bay.
We think it's an advantage to be close to where a lot of the server technologies are developed, and it gives our engineers a chance to work directly not just on current product solutions but on future ones, so we're well out in front, working on the next things. That way, whenever products are released, we have a full set of solutions we can offer. As for China, we do have low exposure because, as everyone knows, GPU sales into China were unfortunately stopped, and that represents about 2%-3% of our business.
So we don't have too much exposure there. Some customers like the fact that we manufacture here in the U.S. Others want the very lowest cost, so they like the fact that we can also manufacture in a lower-cost region. Different customers have different preferences, but we like to offer as many choices as we can to our customers.
So we have a minute left in our session before we wrap up. Maybe we can take some questions from the audience. I see this gentleman. Go ahead.
Yeah. At the AMD launch yesterday, Charles said you are growing, quote-unquote, "faster than fast." What does he mean by that?
Well, historically, like I said, our growth rate was about 21% from 2010 to 2018. The last two years, we grew at 42%, and we gave a guide which, at the midpoint, was higher than that 42% for this year. If you look at the overall industry growing at 7%-8%, I would say that's faster than fast.
Okay. I'm just laughing because I've been hearing stories about your thirtieth anniversary party a couple of weeks ago. It sounded like it was fun, so I just asked. The second question, more seriously: Charles was at the AMD event, and obviously you've been pretty supply constrained with NVIDIA GPUs. What's the opportunity with AMD's new GPUs and the whole platform they have? What does that do for Supermicro's opportunities now that there's a second supplier in the market?
Yeah. You know, we have been selling-
If you could quantify something, that'd be great, too.
Yeah. So we've been selling AMD solutions for some time. Out at Lawrence Livermore Lab, we have added to their Corona cluster both AMD CPUs and AMD GPUs. There are some customers who will favor AMD, and we're very glad to offer those products. We know they will do well because they're a good engineering company. And-
For you guys, specifically. For you guys.
Yeah. So for us, certain CPUs and certain GPUs will have better performance in certain applications, and that gives us the ability to offer the very best performance based on the workload. That's really what it comes down to: what we call application optimization, giving the customer the very best experience. So we're excited about the AMD technologies that are coming out, and we're very glad to offer those products.
Okay. Last question. Just some color on the equity offering: what was the point of it?
Yeah. With our high growth, we needed additional capital, really, to purchase inventory. As a manufacturing company, we have to purchase inventory, carry it as we build, and then carry it as accounts receivable until it converts to cash. So with rapid growth, we need additional working capital. The offering also gave us a chance to add coverage from some really good analysts, to better get the story out to the investment community, and to add some very good shareholders as well, expanding our shareholder base and broadening the coverage and holding of our company.
Okay, great. Thanks again, David.
Okay.
For taking the time out.
Thank you for having me, George.
Sure.
Yeah, appreciate it.