Astera Labs, Inc. (ALAB)

Morgan Stanley Technology, Media & Telecom Conference

Mar 5, 2025

Joe Moore
Managing Director, Morgan Stanley

All right, welcome back, everybody. I'm Joe Moore, Morgan Stanley Semiconductor team. Very happy to have executives from Astera Labs here with us today: Sanjay Gajendra, President and COO, and Mike Tate, CFO. So thanks, guys, for coming.

Sanjay Gajendra
President and COO, Astera Labs

Thank you, Joe.

Joe Moore
Managing Director, Morgan Stanley

So we did the IPO last year, so there's probably a fair amount of familiarity with the company. But just give a little bit of an overview of the company and the types of products that you guys develop, for starters.

Sanjay Gajendra
President and COO, Astera Labs

Great. Thank you for having us, and good seeing all of you. So Astera Labs develops connectivity products primarily for AI and cloud infrastructure. We develop semiconductor-, board-, and module-level products that essentially resolve the bottlenecks as they relate to data, networking, and memory. At the end of the day, customers worldwide use our products to achieve various kinds of AI use cases, whether using third-party GPU-based platforms like NVIDIA and AMD, and so on, or AI platforms that leverage accelerators developed internally by the hyperscalers. We have been serving the market for a little over six years, and last March we went public, with a business that's been growing rapidly. Mike can highlight some of the numbers that we have.

Mike Tate
CFO, Astera Labs

Yeah, so we just reported our first full-year earnings two weeks ago. We had very strong growth for the year, where we finished with fourth-quarter revenues up 25% sequentially. We're seeing very strong growth not only with merchant GPU-based accelerators, but also now the next wave of growth from internal AI accelerators, and we're seeing that with our Taurus and Aries product lines. But excitingly, looking to 2025, we're really diversifying the company. In addition to our new Taurus Ethernet products, which started to ramp in the back half of 2024, we are also starting to ship our Scorpio switch fabrics. And then later in the year, we will be ramping our Leo CXL memory expanders as well.

Joe Moore
Managing Director, Morgan Stanley

Great. Can you talk about, before we dig into the products and the markets, the sort of through line for your success and differentiation? You started with a base of kind of very strong capability in areas like Retimers, but you also have a software stack that's quite important to you. And I feel like relationships with the hyperscalers are a really important aspect of your success. So maybe just talk about those aspects of your differentiation.

Sanjay Gajendra
President and COO, Astera Labs

Yeah, absolutely. I think the simple high-level playbook that we have followed is to make sure that we listen to our customers. For us, we've been fortunate to have some marquee-name customers be sort of our North Star. The second thing that we've tried to do is innovate, and the third thing we have done exceptionally well, we believe, is execution. This is a fast-moving market, and the ability to innovate and deliver products in the time frame and the quantity that our customers need becomes a significant competitive advantage for us. But specifically, if I were to answer in terms of what are some things that customers love about our products: first and foremost, all of our products have been developed from the ground up for AI types of workflows, meaning they are designed for GPU-to-GPU interconnect.

They're designed to enable the highest bandwidth, the lowest latency, and all that. The second important thing that we have done is that our chips use a software-defined architecture. What that means is that about 60% to 70% of our chip is actually implemented in software, which enables our customers to customize the chips to their unique infrastructure requirements. It enables them to have higher performance. It enables us to go to market quicker because we have less hardware to build. And those are advantages that have played out significantly well in terms of the customer systems and how quickly AI has ramped.

The last, but probably one of the most important features is that not only have we developed a set of products that enable the connectivity within an AI server, and think of that as like your nervous system in your body, the nervous system that we have developed is also smart, meaning our chips are able to operate as the eyes and ears of the system, looking for things that are going wrong or that may go wrong, and provide telemetry and diagnostics that enables our customers to maximize the GPU utilization, a GPU that they're paying tens of thousands of dollars, and being able to better utilize that and ensure that the system uptime is as high as possible and the performance at the highest level that's possible.

The combination of the architecture, our software-defined feature set, and our ability to execute has really allowed us to introduce four product lines in the short time of six years, and then rapidly expand and gain market share in this space.

Joe Moore
Managing Director, Morgan Stanley

You've done really well, and I think the competition that you have is kind of the biggest companies in semiconductors, from the Broadcoms and Marvells to TI and Microchip, to maybe in some areas NVIDIA with Mellanox. How have you moved so quickly? Some of your competitors will say, well, they're using licensed building blocks to get there, and so they might get first to market, but we can do it better. Can you just kind of address those things?

Sanjay Gajendra
President and COO, Astera Labs

Yeah, absolutely. I think a few things. Fundamentally, the reason why we've been able to get to market quickly starts off with having the right North Star customers, so you're not developing a kitchen sink. That's where we have been able to define products that solve the problem in the most elegant way. The second thing that we have done is to leverage a software-defined architecture, like I noted. What that means is that if 60% of the chip is implemented in software, you only have 40% that you need to develop in hardware, meaning you can get to fabrication quicker than the traditional way of building a chip. So that's been a significant advantage for us. And the third one, like you highlighted, is that we truly believe in adding value that addresses what our customers truly care about.

In our products, we use different IP blocks; SerDes is a good example. The approach that we have taken is to essentially not try to reinvent everything. We have tried to address our IP requirements by, A, sourcing something that's available off the shelf when it is good enough to solve the problem at hand; B, in situations where it's not good enough, identifying creative joint development models to address the gaps, for example with DSP, or digital signal processing, circuits, ensuring that we're able to add value through our own internal engineers; or, C, in situations where we can't get something off the shelf, investing in developing it internally.

The point I'm trying to make is that in today's world, time to market, execution, adding true value is the most important thing you can do to build a business. The fact that you can leverage something that's available and think second, third order to optimize that in a way that your OpEx remains manageable while your top line continues to grow because you're able to address the market in a timely way, I think is important. And we've been able to crack that code, which has allowed us to very rapidly expand products and gain the market share.

Joe Moore
Managing Director, Morgan Stanley

Very helpful. Thank you. So maybe we dig into some of the products, starting with the retimer business. You guys have had a dominant share of Gen 5 retimers in AI, and you are now first to market with Gen 6. Can you talk about how Gen 6 is going in terms of quals? And that business, in general, has been a very strong growth driver for you; does it continue to be through the year?

Sanjay Gajendra
President and COO, Astera Labs

Yeah, absolutely. So we introduced our Gen 6 retimers last year around this time, in February, and we introduced our fabric switch devices around September, October of last year. Both products are today qualified and shipping in pre-production volumes for some of the leading GPU platforms that have Gen 6 capability, and we have several design wins that we are supporting on those. We expect the production ramp to happen in the second half of this year as the industry transitions to a Gen 6-based ecosystem. We have worked with the ecosystem, of course, to demonstrate that interoperation, performance, and everything else is met. So overall, we are at a point where the Gen 6 market is just ready to take off, and we have two products now compared to just the one product that we had in the Gen 5 generation.

So for us, the dollar content per GPU, especially with the Blackwell generation, is going to be significantly higher than in the Hopper generation, because we have a switch being attached to every Blackwell that's being designed into customized AI racks with the design wins we have.

Joe Moore
Managing Director, Morgan Stanley

Yeah, even before we get to the switch, I feel like it's important to understand the number of different configurations for something like a GB200, the number of different configurations that are out there. And I feel like we all at one point felt like we could define the retimer content in those systems. And it turns out there's a lot more content than we all thought. Can you just talk generally to the approach to market that allows for customization at the cloud level and allows for opportunity for you guys?

Sanjay Gajendra
President and COO, Astera Labs

Yeah, absolutely. Just before I answer the question, for the audience here: a retimer, for folks that are not familiar, is a device that essentially extends the reach of a signal. Think of it this way: as interconnect speeds increase, the signal is unable to make the journey from point A to point B. It's very similar to how, if you sprint a certain distance, you will get tired faster than if you jog it. The same thing happens with signals. A retimer is used to extend the reach of the signal by amplifying it, removing the noise, and then retransmitting it. Now, because of that function, whether and where a retimer is needed depends on the size of the board.

When we go back to the Hopper generation, life was a little bit simpler. There was one board, an HGX form factor that had eight GPUs, a pretty big board, and the signal had to traverse a longer distance. Therefore, a retimer was required on the baseboards that NVIDIA provided. But when you go to the Blackwell generation, you start thinking about a GB200 or GB300, like you mentioned, Joe. These are smaller boards. They only have two GPUs and one CPU, so relatively speaking, each is about one-fourth the size of a Hopper HGX board that had eight GPUs on it. Now, when the size of the board changes, the retimer is no longer necessary on that small board; it's instead required in a different place.

That's the nuance of the retimer: the placement of the device will keep changing, but the fact that the signals are doubling or quadrupling their data rate, and therefore need some help, does not change. So in this particular case, what's happening is that the retimer sockets have migrated over to the boards for networking and storage that the hyperscalers design. In other words, the socket remains; the placement is in a different place. And that is what to expect with retimer types of devices: as the hardware configuration changes, the placement will keep changing. That's a nuance that obviously has to be kept under consideration.

Joe Moore
Managing Director, Morgan Stanley

Yeah, that's very helpful. Thank you. And I wonder if you could also talk to the retimer opportunity in ASIC systems. NVIDIA has NVLink, which is not a technology that helps you, but in the ASIC world they don't have that, and they're using your technology to establish a lot of that same connectivity. Can you talk to that?

Sanjay Gajendra
President and COO, Astera Labs

Sure. So again, there are two ways of implementing an AI server today. One is sort of based on third-party GPUs like NVIDIA, AMD, and so on. And then you have the hyperscalers that are developing their own accelerators or their own ASICs. In an AI server, broadly speaking, there are two networks. One is what is called a front-end network, where the GPU is talking to storage, CPU, networking. And then there is a back-end network where the GPU is talking to other GPUs to create a cluster. In the case of NVIDIA, we only get to play in the front-end, where NVIDIA's GPU is talking to the CPU, storage, and networking. Whereas in the ASIC or internally developed accelerators, we get to play on both the sides, both on the front-end and the back-end.

The back-end, for folks that are familiar: NVIDIA uses a proprietary standard called NVLink, which, again, we don't play in. So in the ASIC-based platforms, our attach rate tends to be significantly higher because we get to play on the front-end and on the back-end. And the back-end is very fertile ground, because each GPU in a traditional AI cluster connects to every other GPU. If you have 64 GPUs, then each GPU connects to the remaining 63, meaning there is a lot of connectivity opportunity for us. So overall, from an Astera standpoint, with the products that we make for PCIe and Ethernet, both retimers and fabric types of devices, our dollar opportunity tends to be significantly higher in AI servers based on custom ASICs or accelerators.

These accelerators also tend to be of slightly lower capability than third-party GPUs, which means that more of them are needed, which means there are more connectivity opportunities. So we get to benefit from that scale and volume as well.
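As an aside, the back-end connectivity math Sanjay describes is easy to sketch. The following is a hypothetical back-of-envelope calculation, not Astera's actual topology: in a full mesh, each of n GPUs links to the other n-1, and each link is shared by two endpoints.

```python
def full_mesh_links(n_gpus: int) -> int:
    # Each of the n GPUs connects to the other n - 1 GPUs;
    # every link joins two GPUs, so divide the product by 2.
    return n_gpus * (n_gpus - 1) // 2

# The 64-GPU cluster from the example above:
print(full_mesh_links(64))  # 2016 point-to-point links
```

The link count grows quadratically with cluster size, which is one reason scale-up fabrics rely on switch devices rather than literal point-to-point meshes.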

Joe Moore
Managing Director, Morgan Stanley

Great. That's very helpful. And just a couple of other retimer questions that come up a lot. On Gen 5 market share, are those sockets established at this point, or is there still any potential market share battle there? And then the general-purpose retimer in general-purpose compute.

Sanjay Gajendra
President and COO, Astera Labs

Yeah, so the Gen 5 battle has been done for quite some time. Those designs are already there; they have ramped up. We have hundreds of design wins that are all either in production or at various stages of production. So I think that battle is done, and the focus right now, obviously, is on Gen 6. All the critical sockets of volume, we have been winning those or transitioning them from Gen 5 to Gen 6. There is just one GPU in the market today that supports Gen 6, which is NVIDIA's Blackwell. So we are essentially right now in various degrees of qualification for the multiple design wins we have and in the process of scaling that to high-volume production.

Joe Moore
Managing Director, Morgan Stanley

I'd like to touch on one last retimer question, because another question comes up a lot. Those ASIC retimer wins: the ASIC providers are also retimer competitors. Does that give them an advantage in the retimer part of the business?

Sanjay Gajendra
President and COO, Astera Labs

So does it give them an advantage on paper? Yes, I think so. But we're also dealing with hyperscalers that are pretty sophisticated when it comes to how they manage their supply chain and how they manage their business. At the same time, we have a product that is proven, that's a workhorse in the industry, and that brings in capabilities that are uniquely differentiated. So from that standpoint, both because of what we bring with the product and because of how the hyperscalers manage their supply chain, I want to say that we continue to fancy our chances, which has obviously translated into the design wins that we're seeing.

Joe Moore
Managing Director, Morgan Stanley

Yeah. OK, great. Thank you. So the second product line you guys introduced was the active Ethernet cable, Taurus. Can you talk about that market and your progress there? It seems like the big opportunity is as we start transitioning to higher speeds.

Sanjay Gajendra
President and COO, Astera Labs

Yeah, absolutely. So AECs, or active electrical cables, are primarily used for Ethernet, and of course now we have them for the PCIe-based back-end as well. Today, the bulk of Ethernet deployments at volume is happening at 400 gig. The challenge is the same as what I noted with the PCIe retimer: the speeds are increasing, but the reach requirements remain the same, which means that you need some kind of active circuitry to deal with the signal loss. In the case of Ethernet, the cable is primarily used for interconnecting, let's say, a server to a switch. So far that cable was a passive cable, like passive copper. But again, because the speeds have grown, the signal is unable to make the journey from the server to the switch. That's where you start requiring active components.

For this particular use case, there are different vendors providing different form factors. There are some vendors that are just providing an Ethernet chip. There are some vendors, like Astera, that are providing a module that goes into the cable assemblies of multiple cable vendors who are established and have a big investment there. And then you have some vendors that are providing the complete cable. We believe that our form factor, the module, is designed for scale, because it enables multiple cable vendors to supply the cables at the capacity and scale that is important for the hyperscalers. And that's something that's bearing out. We started seeing a ramp in that business starting in Q3 of last year, and in Q4, the 25% sequential bump that we delivered was largely because of the growth in the Taurus module business.

That is expected to continue for the first half of this year.

Joe Moore
Managing Director, Morgan Stanley

OK, great. And then the third product family, CXL 2.0: a little bit more of a general-purpose focus there, and it seems like you're actually getting quite a bit of interest around some of these memory-multiplexing types of applications. Can you talk about your products in CXL 2.0?

Sanjay Gajendra
President and COO, Astera Labs

Yeah, so CXL is a protocol that's designed for memory expansion. Most CPUs and GPUs need memory, and historically the memory was directly attached to the CPU. But what's happening is they're running out of memory bandwidth or memory capacity, so you use an external bus, which is CXL. For Astera today, every cloud vendor out there, every hyperscaler, has a CXL program. Unfortunately, CXL sort of had to take a hit when the focus shifted toward AI about two, three years ago. But obviously, the use cases are there, and we're at a point right now where there is clarity on where there is money to be made with CXL, meaning what the killer apps are, what the use cases are. That's what we are catering to. We have been in pre-production racks for some time now.

We expect that the general compute-based platforms with CXL enabled will start ramping to production in the second half of this year, and that essentially opens the door for many more platforms to be deployed, which is expected as the technology gains ground and acceptance.

Joe Moore
Managing Director, Morgan Stanley

OK, helpful. Thank you. So then the product we didn't know about at the time of the IPO was Scorpio, in the switch market, which was a little bit surprising to me because it's a little bit more of an established market. Can you talk about your position there? You've talked about it being your biggest market over time. Can you give us color on that?

Sanjay Gajendra
President and COO, Astera Labs

Yeah, absolutely. So for fabric devices, if you think of the data center, there are really three main opportunities that you can cater to. First, there are the PCIe switches. These are used for interconnecting the peripheral devices. Historically, the need for this device was for storage appliances; that's what the incumbent solution was catered to. So we saw an opportunity to upend that market by delivering PCIe switches that are purpose-built for AI workflows. We were able to make a significant dent in the market, with several opportunities that we're catering to and with production ramps happening as part of the Blackwell deployment throughout this year. So that's the first class of devices. The second one is for scale-up. For scale-up today, if you think about it, NVIDIA has got NVLink and NVSwitch.

But for everyone else, scale-up is sort of still a greenfield. There are implementations using PCIe-like or Ethernet-like protocols, but we believe that for scale-up, UALink is the right protocol in terms of providing the right feature set and the right ecosystem support. That ecosystem has been growing rapidly, with the likes of Amazon and Apple and others joining the board. We are part of the board, and we're driving the standard. So we do expect UALink to be a strong alternative to what NVLink and NVIDIA have done, for all of the non-NVIDIA ecosystem, and we expect revenue from that to start coming in 2026. The third kind of fabric switch that applies to the data center market is Ethernet. That's an area that Astera is not focused on today.

So Astera is investing in two out of the three important fabric solutions, and that's an area of tremendous focus for us. We do expect that fabric devices will be sort of the aircraft carrier, or the mothership, for Astera, with the rest of the products coming all around it. That's how we're envisioning our overall product strategy as we start thinking of rack-scale deployment of our connectivity solutions.

Joe Moore
Managing Director, Morgan Stanley

Is that opportunity different in merchant versus ASIC solutions?

Sanjay Gajendra
President and COO, Astera Labs

It is. So the P series applies to both merchant and custom ASICs. The X series, as we call it, applies specifically to the non-NVIDIA ecosystem, I would say. It does apply, of course, to AMD, because they're committed to UALink. But it's a market that we believe is greenfield, and one where we can make a significant dent by continuing to execute on the product side.

Joe Moore
Managing Director, Morgan Stanley

Great. So switching gears, I wonder if you guys could talk to the health of the overall AI investment. You've given pretty positive commentary for your business for the year, so I know what you guys are seeing. But I feel like every time we see a new innovation in models, like DeepSeek, or we see somebody somewhere delay a building for a new data center, people sort of see that as a sign of the end of all of this. Do you guys see any indications of that? Just the stability of the investment and the duration of the investment, from your visibility, would be kind of helpful.

Sanjay Gajendra
President and COO, Astera Labs

Yeah, let me start off, and I'm sure Mike might add a few things as well. We have a fairly good vantage point, because we're on every major AI platform that's shipping today, and we have a front-row seat when it comes to understanding what customers are thinking. In many cases, with some of the North Star customers, we are working with them shoulder to shoulder, not just for the current generation but perhaps for the next two generations, so we do have a fair bit of visibility into what's happening. The first thing I would say is that, of course, there is a lot of surprise about models being efficient or becoming efficient. To be honest with you, I think that is to be expected. We will see that happening, and there will be other announcements similar to DeepSeek and all that stuff.

What that is doing, in our mind, is acting as a catalyst to expand adoption. We see real programs getting accelerated, or the understanding that, hey, we don't need a big honking GPU to do everything; what we have as a custom ASIC is good enough to achieve 80% of the workload. So there is that kind of realization happening and adjustments being made. But what we also see is that there is no slowdown in the volume or velocity at which new programs are being launched or new projects are being kicked off. In fact, if anything, it's probably picked up significantly, as we have seen with Scorpio and other opportunities really growing in the last few months. So overall, the way we look at it is that the capital investment will continue. There are a lot of commitments made for 2025.

The programs that we are designed into, we're not seeing them slow down or push out, or all of that negative connotation that you might imagine. The fact that things are becoming more efficient is acting as a catalyst for imagining what new things could be done, and what things could be done more efficiently and more broadly. For us, it's all good news, because as a connectivity solution vendor, we really don't get attached to what the GPU or accelerator is. They all need connectivity, and we are designed into many of these platforms, so it doesn't matter which GPU wins. We believe that we will always win as a connectivity vendor. So those are some of the things that we are seeing from the vantage point that we have.

Mike Tate
CFO, Astera Labs

Yeah, and what's also exciting is that as these platforms go faster and become more complex, we're seeing increasing dollar content per GPU. We're working very closely with our customers, and we can see their architectures going out beyond 2026 into 2027. As we look at those next-generation designs, because of the faster speeds and the complexity, our dollar content steps up, not only for Aries like-for-like, but also from getting more Scorpio. And in particular, the Scorpio TAM, we measure it to be $5 billion in 2028, and we see line of sight to be a leader in that growing TAM.

Joe Moore
Managing Director, Morgan Stanley

And that agnosticism between custom silicon and GPUs gives you a balance. It seems like this year, ASICs are very strong in the first half, and maybe GPUs are a little stronger in the second half. Is that the right way to think of it?

Sanjay Gajendra
President and COO, Astera Labs

Correct. Yeah, and it also comes down to, I think, one of the unique advantages Astera has, which is that we're still a small, nimble, technology-oriented company. We have the right architecture and the right team that understands what some of these changes mean. I've been in large companies; I know how hard it is to turn the ship around when things are changing so much. But what I feel good about is that the culture, the team, the execution focus that we have created, and the architectural choices we have made allow us to make some of these adjustments as the market evolves much more rapidly, so that we can keep up with the one-year product cycle that we believe is important to maintain, meaning being able to go from PowerPoint to samples in about a year.

Those are some of the advantages that we carry, which I believe are strong requirements to navigate through the market and come out as a winner.

Joe Moore
Managing Director, Morgan Stanley

Because you have that agnosticism, you might have visibility into the question of export controls as well. It's obviously impossible to know what may happen there. What's your sense: is the uncertainty about what may come changing customer behavior now?

Mike Tate
CFO, Astera Labs

Yeah, I mean, it kind of ebbs and flows. But for us, our products are usually not subject to export controls; it's more the platforms we're being designed into. In particular for China, we do see fertile ground for growth there, but at the same time, it is less than 10% of our revenue.

Joe Moore
Managing Director, Morgan Stanley

OK, that's helpful. Thank you. Great. Let's see if we have questions from the audience. We have a lot, I guess. Do you have a mic? Maybe start in the back.

Victor Lee
Senior Equity Analyst, Macquarie Asset Management

Hi, Victor Lee from Macquarie Asset Management. I have a question about Cosmos. Can you help us understand whether it acts as a moat, or as glue, given that you're pretty much single-sourced in every area you play in? Does it prevent your customers from dual-sourcing the retimers? And in connection with your AECs and your Scorpio switch devices, does it give you an advantage because of compounding benefits from Cosmos and the telemetry or intelligence it provides? Help us understand that as a differentiator.

Sanjay Gajendra
President and COO, Astera Labs

Yeah, the short answer is that it's definitely been one of our strongest competitive moats. Cosmos, for folks that are not familiar, is the software SDK that we provide that allows customers to have telemetry and observability into what's happening on the connectivity. The way we have architected Cosmos, it spans across product lines, meaning that if someone is using the Cosmos APIs for Aries today, they will be able to expand that to Taurus or Scorpio or other devices. So we have really been able to leverage the stickiness that we have created, with customers and their engineers programming against these APIs and embedding them within their operating stack, to transition from, let's say, Aries Gen 5 to Gen 6. Or for a customer that's buying Aries now, if they want to migrate to Scorpio, it's a relatively simple expansion of the API.

But the basic framework is the same, and there are several things customers are doing to expand it. In some ways, it's similar to what CUDA is for NVIDIA, where they have engineers programming against their API. Cosmos is the API for managing connectivity, managing the fleet, and managing the various high-speed links that are there. So the short answer is absolutely, that's been a very strong differentiator, and we continue to enhance it.

Joe Moore
Managing Director, Morgan Stanley

We have time for one more question. I think there's one in the front here.

Victor Lee
Senior Equity Analyst, Macquarie Asset Management

So there are a number of hyperscalers talking about million-GPU clusters of custom silicon. Can you elaborate a bit on the complexity for you on the networking side, or the connectivity side, when you build those types of clusters, especially when you go non-NVIDIA?

Sanjay Gajendra
President and COO, Astera Labs

So is the question in the context of technical, or is it on?

Victor Lee
Senior Equity Analyst, Macquarie Asset Management

No, more in terms of sort of the value that you can provide, sort of the opportunity for you there.

Sanjay Gajendra
President and COO, Astera Labs

Absolutely. So there are several things that customers like about our products when they're trying to implement these giant clusters. It goes back to the fundamental things of having that software architecture and having the sensors, telemetry, and diagnostics. What happens with these giant clusters is that when you bring them up, things don't generally work, and you need to be able to figure out what is wrong. Using the capabilities we have in our chip and software, we can pinpoint and say, in connector number 91, pin number six, it looks like the pin may be bent, because I'm seeing a certain characteristic of the signal that implies that. The ability to figure out where the problem is is such a critical requirement today, because everyone is running a million miles an hour and the systems are being brought up in record time.

There's also the ability to pinpoint problems while the cluster is running. The fact of the matter is that roughly half the time, the GPUs are waiting for data or memory. So our ability to monitor the quality of the link and tell the system software that this link is not performing at the highest rate, and therefore the GPU is not being utilized, is goldmine information that allows the hyperscalers to go and take the necessary action. The third important thing is predictive failure. Because we are software-based, we are running a lot of algorithms on our chip. If a problem hasn't quite happened yet, but it's predicted to happen, let's say, 21 days from now, we're able to model that and provide telemetry and information so that the system operators can take corrective action.

So the point I'm making is that as the systems are becoming more complex, you not only need a nervous system that is top class, but you also need a nervous system that is able to monitor itself, predict failure, look for issues. And those are things that we're able to build with the technology that we have. So overall, it's designed for scale and designed for these complex systems.
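To make the triage idea above concrete, here is a minimal sketch of the kind of link-health bucketing Sanjay describes: watch per-link telemetry, flag links that are erroring now, and flag links whose margins suggest trouble ahead. Everything in it, including the class names, field names, and thresholds, is a hypothetical illustration, not the Cosmos API.

```python
from dataclasses import dataclass

@dataclass
class LinkTelemetry:
    link_id: str
    bit_error_rate: float   # observed BER on the link
    eye_margin_mv: float    # signal eye margin in millivolts

def triage(samples: list[LinkTelemetry],
           ber_limit: float = 1e-12,
           margin_floor_mv: float = 15.0) -> dict[str, list[str]]:
    """Sort links into healthy / degraded / at-risk buckets.

    Hypothetical thresholds: a BER above ber_limit marks a link degraded
    (the GPU is likely stalling on retries); a shrinking eye margin marks
    it at-risk, i.e. a candidate for predictive maintenance.
    """
    report = {"healthy": [], "degraded": [], "at_risk": []}
    for s in samples:
        if s.bit_error_rate > ber_limit:
            report["degraded"].append(s.link_id)
        elif s.eye_margin_mv < margin_floor_mv:
            report["at_risk"].append(s.link_id)
        else:
            report["healthy"].append(s.link_id)
    return report

print(triage([
    LinkTelemetry("gpu0-sw0", 1e-15, 40.0),  # clean link
    LinkTelemetry("gpu1-sw0", 1e-9, 35.0),   # erroring now
    LinkTelemetry("gpu2-sw0", 1e-15, 8.0),   # margin eroding
]))
# {'healthy': ['gpu0-sw0'], 'degraded': ['gpu1-sw0'], 'at_risk': ['gpu2-sw0']}
```

The separation of "degraded" from "at-risk" is the point: the first bucket explains why a GPU is underutilized today, while the second feeds the predictive-failure workflow described above.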

Joe Moore
Managing Director, Morgan Stanley

That's very helpful. Sanjay, Mike, thanks so much for your time.

Sanjay Gajendra
President and COO, Astera Labs

Thank you.

Mike Tate
CFO, Astera Labs

Thank you, Joe.
