Day two of the Citi Tech Conference. My name is Papa Silla. I'm part of the semiconductor hardware team at Citi, and I'm very pleased to welcome Jitendra and Nick from Astera Labs. I think this is your very first Citi Tech Conference, so hopefully the first of many. To get started, Jitendra, it has been a little over a year since your IPO.
You are the CEO, but also the co-founder, so it has been quite a journey, and you were able to very quickly be at the forefront of AI. Can you tell us about the journey of the company and how you were able to establish yourself so quickly as an AI enabler?
Yeah. First of all, thank you so much for having us here. It's a pleasure. Welcome, everybody. So as Papa said, I'm one of the co-founders of the company.
We started the company towards the end of 2017 with a very simple mission, which was to provide connectivity solutions for cloud and AI infrastructure. And that's really what we've done over the last five or six years of our journey. I'm really proud that if you were to send a query to one of the AI systems available today, it very likely goes through one of the products that Astera Labs makes. And the reason we started doing what we were doing is that we really believed at the time that AI would need to become much more performant, much more powerful, and in order for it to do that, many GPUs would have to talk together as one, because the models would become large and would not fit on a single GPU.
And while everybody was focusing on the compute problem of how to build these really complex GPUs, we felt that connectivity would become the real bottleneck. That was an area being overlooked, and so we started working on connectivity solutions. The first product we worked on was the Aries retimer, and we established, as many of you are aware, a real leadership position. It is part of the deployments that happened with NVIDIA GPUs, AMD GPUs, and many of the ASICs, which is why I made the comment that any AI traffic you generate today is likely going through one of our devices. So that family is in full production.
We then launched our Taurus product, which does for Ethernet what Aries does for PCI Express. It allows Ethernet traffic to go farther, typically deployed in the form of an active electrical cable connecting servers and GPUs to the top-of-rack switch for scale-out connectivity. Then we launched Leo, our CXL memory expansion device, which allows you to add more memory to these compute systems. Generally these are general-purpose compute systems, but there are also applications for CXL memory expansion in AI systems. So those are the three product lines we went IPO with, which was in 2024.
And just to give you a sense of where we have been since: we ended 2023 with $116 million in revenue. We ended 2024 with $396 million in revenue. And already, for the first half of 2025, we've done about $350 million in revenue. The notable achievement since our IPO is really the launch of our PCIe 6 portfolio.
So we now have our Aries 6 family of PCI Express Gen 6 retimers, as well as the new product family that we launched, the Scorpio fabric switches, which arrive in two different form factors, Scorpio P-Series and X-Series — all launched earlier this year, having been introduced in the October time frame of last year. I'm really proud of what the team has been able to accomplish with these devices. And look, the industry has changed as well since we went IPO. When we went IPO, servers were kind of the standard 2U, 4U servers — eight GPUs typically connected to each other over a small scale-up network. Now if you look at a server, it's a whole rack, like the NVIDIA NVL72 form factor.
NVIDIA introduced Grace Blackwell, which was a huge opportunity for us. People didn't believe it for a period of time, but it did turn out to be a big opportunity for us. And in general, what has happened with these rack-scale systems is that scale-up has become a real new opportunity — that is, how you connect all the GPUs in the system so they can talk to each other. And we are benefiting from that with the different products that we have. And then we just continue to work hard, bring more products to the market, and help our customers.
Yeah. No, that's very helpful. Since you mentioned AI — you are very well placed in the AI ecosystem — can you tell us a little bit about where you think we are in the AI cycle?
And maybe a segue question as well. You mentioned scale-up. Obviously, NVIDIA with NVLink, Amazon with Ultra. How far can we go on the scale-up trend? Is there a limit to it, or do you think we'll continue to add?
So the first question first, which is kind of a philosophical question: where are we in the AI cycle? I think we are still early in the AI cycle. The reason I say that is — clearly, there has been a magical amount of progress since the initial ChatGPT moment. What AI systems can do now, nobody would have imagined. We certainly did not imagine trillion-parameter models and AI doing what it does today back in 2017, 2018.
But nonetheless, as consumers — if you have a Tesla and you use the self-driving, or you ask questions to the different chatbots — the answer is still not quite there. There have been significant improvements, but the answer is not quite there. So I still believe there is another 10x, 100x improvement that is needed. And I'm also very confident, just from my personal experience and what I see around me in our company in how we use AI, that AI usage is here to stay. I don't think there is any doubt that we as consumers, and businesses, will continue to want AI systems as part of our daily routines.
So if you look back from five years from now, ten years from now, we'll absolutely see that AI is in its early innings today. Now, is the rise from here to ten years out going to be monotonic? I don't know. If you look at previous industrial revolutions — whether it's the Internet or the PC or social media — they've all gone through ups and downs, and this is a high-tech business: it might go up and it might come down. It's hard to predict.
Our job at Astera Labs will always be to try to come out ahead. If the market is going up, we try to grow faster than the market. If the market is going down, we still try to grow faster than the market.
Got it. Then I guess I can jump into your products, starting with the latest, the Scorpio — more specifically the X. First, can you explain it a little for those who do not know exactly what it is? And maybe as the second part of the question: last quarter, you mentioned engagements going to 10, which is very rapid. Can you touch a little on those drivers?
What is the mix of customers you can share, and what are the dynamics of that product?
Yeah, you asked two questions — I always forget the second one, so please remind me. Your second question a moment ago was: what is scale-up? And I think that is a good segue into what the Scorpio product line is.
So what scale-up is trying to do is connect many GPUs together in a way that makes all of these GPUs look like one large GPU — almost as if, from a programming perspective, if I'm a software programmer writing code that runs on the GPU, I can access all these GPUs as if they're one. I don't want to know the complexities of the networking and so on. And this is what scale-up networking does. The amount of traffic, the amount of data that needs to be exchanged between these GPUs as they try to access each other's HBM, is just immense.
So a scale-up network is characterized by a homogeneous system where you have all of these GPUs connected to each other, all trying to access each other's memory, with all-to-all connectivity. And they do that by having the fastest possible data rates and then aggregating multiple links together to get the right amount of bandwidth. The scale-up network ends up being very rich in terms of the amount of connectivity that is required. And that is actually what limits the reach of scale-up. So if you look at NVL72, it puts 72 GPUs in a scale-up system.
If you look at the last GTC conference, Jensen said that, hey, 72 is now going to 144, 288, even 576 GPUs connected together in scale-up. If you look at the UALink standard, it defines scale-up up to 1,024 GPUs. It doesn't go beyond that because it's just very difficult, practically, to carry all of this very high-speed data in very large networks. That's when you end up going to scale-out.
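To put rough numbers on why all-to-all scale-up domains are hard to grow: the 72 and 1,024 GPU counts come from the discussion above, while the per-lane rate and lane count in the bandwidth example are purely illustrative assumptions, not figures from the talk.

```python
# Illustrative sketch (not Astera Labs data): all-to-all scale-up fabrics
# get wiring-dense fast, which is what limits their reach.

def scaleup_links(num_gpus: int) -> int:
    """Point-to-point links needed for full all-to-all connectivity."""
    return num_gpus * (num_gpus - 1) // 2

def per_gpu_bandwidth_gbps(lane_rate_gbps: float, lanes: int) -> float:
    """Aggregate one-direction bandwidth of one GPU's scale-up port."""
    return lane_rate_gbps * lanes

# A 72-GPU domain (the NVL72-style size discussed above) versus the
# 1,024-GPU ceiling the UALink spec defines.
print(scaleup_links(72))     # 2556 point-to-point pairs
print(scaleup_links(1024))   # 523776 -- the wiring density explodes
# Assumed example port: 8 lanes at 200 Gb/s per lane.
print(per_gpu_bandwidth_gbps(200, 8))  # 1600 Gb/s per GPU
```

The quadratic growth in link count is why the spec stops at 1,024 accelerators and larger clusters move to scale-out.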
And scale-out is typically done over Ethernet. But coming back to scale-up: our Scorpio X family is uniquely designed for scale-up, for customers that use PCI Express or PCI Express-based protocols for scale-up. Now, why is PCI Express important? Because it is a protocol you find natively in GPUs and ASICs and so on, and it is designed for these memory semantics. It is designed for low latency.
It is designed for good throughput. So it has many characteristics that — coincidentally, if you will — lend themselves to building scale-up networks. So many of our customers are using PCI Express.
And to answer your question about the engagements we have: we mentioned at our last earnings call that we have more than 10 engagements with the Scorpio X product, which is used for scale-up, in various stages. Some of them are confirmed design wins, others are explorations, some are in the evaluation phase. These customers comprise hyperscalers — all the obvious names. They are global customers, so many are in the U.S. and some are outside the U.S.
So it's a wide variety of customers. And people find it surprising — and to be honest with you, we found it surprising too. We came out with our Scorpio family of devices in October at OCP, and we thought we'd have some amount of engagement with the customers that we knew of. But once we went public with the capabilities of Scorpio, we got customers coming out of the woodwork saying, hey, I am also using PCI Express, or a PCI Express-based scale-up network.
So it has been just fantastic. The amount of customer engagement that we have — I've never seen anything like it before in my life. So that's Scorpio X. Scorpio P — P stands for PCI Express, easy to remember. Scorpio P is a standards-compliant product that allows you to connect multiple PCI Express devices together.
For example, a Blackwell GPU that's running PCI Express 6 connecting to NICs that might be running PCI Express 5, or connecting to a CPU that could be PCI Express 5 or 6. So we enable this type of connectivity. The applicability of the Scorpio P-Series is very wide — it can be used in many applications, typically for scale-out. The first one that we are ramping to production is a scale-out application connecting customized racks using NVIDIA's Grace Blackwell system, deployed in a cloud data center.
And for P, in terms of the customer, it's mainly driven by, obviously, the main GPU provider. What's the breadth of customers you have for P as well?
Yeah. So we are engaged with many customers. Again, like I said, just as for Scorpio X, the engagement is through the roof. The limitation with PCI Express is that the PCI Express 6 ecosystem is still fairly new, right?
So there is only one PCI Express Gen 6-capable GPU out there, which is the Blackwell GPU, and that's where our product is now deployed. We've been shipping in production volumes for both the Aries Gen 6 device and the Scorpio P-Series device. Then, as other GPUs come to market that support PCIe 6 — as other ASICs come to market that support PCIe 6 — we will start to see a larger deployment and a larger customer base. And we expect that to happen not in 2025, but in 2026.
Okay. And in terms of sales of Scorpio, I think you mentioned above 10% this fiscal year. Is most of that P? What's the ratio of P to X this fiscal year?
I don't want to get the numbers wrong, so I'm going to punt that one too.
Yeah. So it's been a very nice start for Scorpio, which started ramping in the Q2 time frame. And we said that continued growth is expected for the Scorpio product family into Q3, into Q4, and then obviously into 2026. But if you look at the profile of what's driving the demand for Scorpio in the near term, it really is the Scorpio P-Series family, and the application is NVIDIA-based NVL72 racks that are customized. That's what's going to continue to drive revenue growth in the near term.
What we've said is that we expect initial ramps and deployments of the X-Series probably at the tail end of this year, but it's really much more of a 2026 story. The exciting thing for us — and Jitendra just outlined it — is that on both the P-Series and the X-Series, there's a wide level of engagement across multiple customers, branching out from the lead customer that's deploying today. So as you look into 2026 and beyond, we would expect a much broader profile of deployments across multiple applications and multiple customers. So pretty exciting.
And for X, do you see X, in terms of sales, crossing over P — becoming larger than P — by 2026?
Yeah. So that's really more a function of the market. We outlined the market opportunity when we launched the Scorpio product portfolio at OCP last year. We said it was about a $5 billion market, split roughly half and half between the P-Series and the X-Series. I think since then, based on a lot of the momentum and the engagements on the X side, we probably think X is even bigger than that $2.5 billion.
And then I'm sure we'll talk about UALink shortly, but UALink provides another expansion of that content story and the overall market opportunity. So just based on the market dynamics itself — and obviously our ability to execute, to grab sockets, and to drive revenue there — X certainly has the profile of something that could be much bigger than P over the long term. But that's not to diminish the P opportunity: everybody uses PCI Express across their platforms, so it has the potential to be very broad. The X-Series will be focused, at least initially, on folks using PCI Express or customized PCI Express-type protocols to do scale-up.
So that's going to be a segment of the market, and potentially not the entire market, but the content is very rich on the back end. These are very critical sockets that the hyperscalers focus on, because it is highly important to be able to drive reliability between these accelerators when you're scaling up. So both are huge opportunities, but, yes, X has the potential to be bigger. When does the timing of that happen? It's a little bit up in the air.
It'll depend on customer launches and deployments. Could it happen next year? Potentially. But I think definitely by 2027 we could see something that gets pretty close.
Good. No, that's very helpful. And for those of us who are super impatient and already working on a bottom-up model, any color on content per GPU — on lanes, or any color at all?
Yeah.
So the P-Series specs are out there, so I think everybody's got an idea of that. On the X-Series, we've been a little bit more stealthy, just because our customers and the platforms that they're working on are proprietary to them. The other thing I would say is that the Scorpio P-Series or X-Series is not going to be one solution; it's going to be a portfolio of solutions in each of those categories. So there will be a range of products that have a range of lanes and ports and different feature sets and functionalities.
So I don't know if there's a rule of thumb to really put on each one of those. I would say, in general, like I just mentioned, the content opportunity on a per-accelerator basis for the X-Series tends to be richer, so we will see higher content there. If you take a step back at the overall evolution of how we've approached the market: if you go back a couple of years, we were grabbing only maybe $50 to $100 of content on a per-accelerator basis within these systems. As you turned the corner into 2024, we started ramping for both scale-out and scale-up applications.
That expanded it to probably a little over $100 of content per accelerator. But Scorpio has really been the key unlock for us to ramp that in a more aggressive way. So now, on these P-Series platforms, we're seeing multiple hundreds of dollars of content opportunity on a per-accelerator basis. The next step then comes with the X-Series, where you can see that expand by potentially multiples from where we're at today with the P-Series. The goal is to get to a thousand dollars plus per accelerator and really staple as many dollars as we can to every single accelerator going out the door.
And again, we'll talk about UALink in a minute, but that'll be the next phase that layers on top of that as well.
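For anyone building that bottom-up model, the content-per-accelerator progression described above can be roughed out as follows. The per-accelerator dollar figures are midpoints of the approximate ranges quoted in the conversation, and the 72-accelerator rack size is an assumption for illustration only.

```python
# Rough model of the content-per-accelerator progression discussed above.
# Dollar values are midpoints of the quoted ranges; the 72-accelerator
# rack size is an assumed NVL72-style example, not company guidance.

content_per_accel = {
    "retimers only (pre-2024)": 75,        # mid of the $50-$100 range
    "scale-out + scale-up retimers": 100,  # "a little over $100"
    "with Scorpio P-Series": 300,          # "multiple hundreds"
    "goal with Scorpio X / UALink": 1000,  # "$1,000 plus"
}

RACK_ACCELERATORS = 72  # assumed rack size for illustration

for phase, dollars in content_per_accel.items():
    # Scale per-accelerator content up to an implied per-rack figure.
    print(f"{phase}: ${dollars}/accel -> ${dollars * RACK_ACCELERATORS:,}/rack")
```

On these assumptions, the jump from the P-Series phase to the $1,000-plus goal implies per-rack content moving from roughly $21,600 to $72,000.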
Got it. No, that's very helpful. And you mentioned UALink, and our understanding is that 2027 is really the year for it. But in April of this year, there was the 1.0 specification.
What have you seen since that 1.0 specification? Are products already rolling out from that 1.0, or are hyperscalers and other customers waiting for a further specification before hitting the ground running?
Yeah. I think the short answer to your question is that people are already moving, because there is so much demand. But maybe let me take a step back for the folks in the audience who may not be familiar with UALink. I mentioned earlier that PCI Express is used for scale-up because it's got very good characteristics in terms of the memory semantics. It offers lossless networking.
It offers low latency, high throughput, etcetera. Now, one of the things it does not have is the fastest speeds of Ethernet. Today, PCI Express runs at 64 gigabits per second per lane, whereas folks are trying to deploy Ethernet at 200. So what UALink does is take the best of both: it takes all of the software goodness, the protocol goodness, of PCI Express, and the high speed of Ethernet SerDes, and combines them.
Now you get a protocol that's basically purpose-built from the ground up for AI workloads. The protocol actually takes into account how AI training gets done, how AI inference gets done, and what it would take to get the best throughput from a system. It does that at the protocol level and then runs at the fastest Ethernet speeds, 200 gigabits per second. So that's UALink, and we are a promoting member of the UALink Consortium. We've had a good view of what everybody is doing, both vendors like us who are building for the UALink ecosystem and all the hyperscalers who are members of that same consortium, in terms of what their plans are.
And I can tell you that every single customer we are engaging with on Scorpio X has some plans to go to UALink. Now, I don't know whether all of them will go to UALink, but some of them will. There is a lot of interest, and our customers are telling us, without a doubt, to focus on UALink. So that's what we are doing. To your point, the specification was released in April — the 1.0 specification that AMD contributed to.
And it's got enough detail — a lot of detail — for people to start working with. I don't know what others will do, but certainly we are not standing still. It's a good evolution of what we have done with Scorpio X and PCI Express: everything that we've learned about what happens in scale-up, and what the challenges are, we will fold into the designs that come out supporting UALink. And if you just follow the typical semiconductor design cycles, you would start to see some products towards the end of next year — second half or end of next year. And therefore, in terms of real revenues, that'll probably be 2027.
Until we get there, until we have a vibrant UALink ecosystem, people will continue to use what they're using today. So NVIDIA will, of course, keep using NVLink. Folks that are using PCI Express will continue to focus on PCI Express, and those who are using Ethernet will likely continue with Ethernet until UALink arrives.
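The speed gap described above can be sketched quickly: 64 Gb/s per lane for PCI Express Gen 6 versus 200 Gb/s Ethernet-class SerDes for UALink. The 1.6 Tb/s target port bandwidth below is an assumed example, not a figure from the talk.

```python
import math

# Hedged illustration of the per-lane speed gap the speakers describe.
# The two lane rates are from the conversation; the target bandwidth
# is an arbitrary example chosen for illustration.

PCIE_GEN6_GBPS = 64  # PCIe Gen 6, per lane
UALINK_GBPS = 200    # UALink / Ethernet-class SerDes, per lane

def lanes_needed(target_gbps: float, lane_rate_gbps: float) -> int:
    """Lanes required to hit a target aggregate port bandwidth."""
    return math.ceil(target_gbps / lane_rate_gbps)

# For an (assumed) 1.6 Tb/s scale-up port per accelerator:
print(lanes_needed(1600, PCIE_GEN6_GBPS))  # 25 lanes of PCIe Gen 6
print(lanes_needed(1600, UALINK_GBPS))     # 8 lanes of UALink
```

The roughly 3x reduction in lane count for the same bandwidth is the practical payoff of pairing PCIe-style protocol semantics with Ethernet-speed SerDes.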
Yeah. No, that's a good segue. I guess the scale-up opportunity outside of NVIDIA — which is obviously using NVLink — is an emerging market, and many players are jumping in: NVIDIA with Fusion, Broadcom with Ultra. How do you see the competitive landscape playing out in that emerging space?
I think there will be three large ecosystems. The one that is very clear to see is NVIDIA. NVIDIA will have their NVLink, and they will continue. NVLink is the largest deployed scale-up network. NVLink was also purpose-built from the ground up for these AI systems — and actually, for those of you who might know, NVLink originally started from PCI Express and also did what UALink is doing, which is take Ethernet SerDes and marry that up.
So all of these different standards — Ethernet, NVLink, UALink — will all be running at around 200 gigabits per second. We don't know exactly what the next generation of NVLink will be like, but if you look at where they are today, they are all running at roughly the same speed. So that's the NVLink ecosystem. Then there will be an Ethernet ecosystem, which is what Broadcom is proposing with their scale-up Ethernet.
Now, Ethernet is a wonderful protocol. It does phenomenally well in scale-out, but it is not natively meant for scale-up; it does not have the characteristics I described for scale-up. So what Broadcom is proposing is to add different things — really, concepts that have already been in PCI Express for generations and are already in NVLink — bolt them onto Ethernet, and call it scale-up Ethernet. That addresses some of Ethernet's limitations for scale-up.
And of course, Broadcom is the 900-pound gorilla in the Ethernet space, so that's what they are pushing. I'm sure there will be some customers that deploy a scale-up Ethernet protocol based on Broadcom switches and so on. And then there will be a third ecosystem, which would be UALink. Now, one advantage of UALink is that, from a protocol standpoint, just as I mentioned before, we think it's going to be superior to Ethernet because it is not backwards-compatible with anything. There is no baggage with UALink.
It does one thing, and it does that one thing really, really well. However, the other reason people are focused on UALink is that it is an open ecosystem. You are not beholden — your destiny is not in somebody else's hands, so to speak. We expect there to be a very vibrant ecosystem of UALink providers, vendors such as ourselves. And our strategy for UALink will be very similar to what we have done for PCI Express, where we provide not only the switches but also the retimers, both onboard and in the AEC format.
And so when we go to UALink, our vision really is to operate at the rack-scale level, where we provide the entire connectivity solution at the rack level. You start with the switching content that allows all of these GPUs to get connected. You have the signal conditioning components — it will start with copper, but eventually maybe go into optical as well. So we provide all the chips that are needed, some of the hardware that is needed, and the software that is needed to manage all of this for UALink. And then anybody who wants to leverage the UALink ecosystem can build one infrastructure and, in an ideal world, plug in different types of GPUs. If you look back at what people used to do as recently as 2023, this was the model.
You would buy GPUs from NVIDIA or AMD, or your own ASIC accelerators, and you would plug them into your infrastructure that was designed by the data center operators. Now the world has shifted a little bit with NVL72, but I do think that our hyperscaler customers want to get back to that open ecosystem where they can innovate and differentiate their solutions. And UALink gives them that ability.
Absolutely. And I wanted to move back to some of the other products — obviously, the retimer; that's what you were originally known for, and I believe it's the majority of your sales. Can you just talk about the mix of retimers and the outlook for that business moving forward?
The Taurus business or the Aries business?
The Aries business.
Yeah, the retimers. So Aries has been the flagship part since inception, and, importantly, it's been widely deployed and has given us a footprint across almost every single server that's been deployed for AI applications over the last couple of years. So we have a very nicely entrenched position there. We have substantial market share, as Jitendra mentioned earlier, and we're just getting into the early phase of moving to PCI Express Gen 6.
I think the other big point, and this started in 2024, is the uptake of internally developed ASIC-based programs, or ASIC accelerators, where we not only continued to play on the scale-out side — connecting CPUs, GPUs, networking, and storage — but branched out to supporting these clustering applications for scale-up. That was really the next big growth driver for Aries, and we've seen our Aries module business, the Aries SCM business, grow very nicely over the course of the last, call it, twelve months as hyperscalers have begun to use those products for scale-up. And like I mentioned earlier, the attach rates, the ratios there, can be very fertile, especially relative to scale-out. So Aries is plugging along quite nicely. The Gen 6 transition is on the horizon.
Gen 6 is going to be important for us from a signal conditioning standpoint because we will see higher ASPs — the ASP should increase by about 20% versus prior-generation solutions. But there's also the physics problem, right? Speeds are going up, and it's going to be more difficult to move signals across similar distances at higher speeds.
So you're going to need retimers in places where you just didn't need them before. So the unit profile and the attach rates at Gen 6 will also drive a unit growth story for us. Very excited there. We haven't talked about general purpose much so far, but there's a tail to the curve as well as general purpose starts to transition to Gen 5. Those are also places where we have designs, and we'll see revenue momentum there as well.
So, yeah, Aries is alive and well. It's going to grow in excess of 60% this year, which is great, and we have big expectations for it to continue to grow nicely going forward.
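The "physics problem" behind the Gen 6 attach-rate story can be sketched with a toy loss-budget model. The loss budgets and per-inch loss figures below are illustrative assumptions, not PCI-SIG or Astera Labs specifications; the point is only that higher signaling rates shrink unretimed reach, which is where retimers earn their socket.

```python
# Toy model: a link closes if total channel loss stays within an
# end-to-end loss budget. Higher-rate signaling both loses more per
# inch and (with PAM4) tolerates less total loss, so reach shrinks.
# All numbers are assumed for illustration only.

def max_reach_inches(loss_budget_db: float, loss_db_per_inch: float) -> float:
    """Longest trace a link can drive within its loss budget."""
    return loss_budget_db / loss_db_per_inch

# Assumed values: Gen 6 gets a tighter effective budget and higher
# per-inch loss than Gen 5.
GEN5_REACH = max_reach_inches(loss_budget_db=36, loss_db_per_inch=1.0)
GEN6_REACH = max_reach_inches(loss_budget_db=32, loss_db_per_inch=1.3)

print(f"Gen 5 unretimed reach ~{GEN5_REACH:.0f} in, Gen 6 ~{GEN6_REACH:.0f} in")
# A retimer retransmits a clean signal mid-channel, effectively
# resetting the loss budget and restoring the lost reach.
```

Under these assumptions, reach drops by roughly a third at Gen 6, so board routes that closed unretimed at Gen 5 now need a retimer in the path.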
Yeah. And in this space, there are obviously bigger semiconductor names entering, given how well you have done — I can mention Marvell or Broadcom. Can you just explain to us what will help you maintain the high level of share that you have so far?
Maybe we can go back in history a little bit. We first introduced our Aries retimers back in 2019 — July is when we got silicon back, and I think in October or November we presented it in Taipei at a PCI-SIG event.
We were not the first ones; actually, we were the fourth retimer company. And the reason we succeeded was that the architecture of our chip was really based on software. As much as we can do in software, we do in software, and that's a design principle we have carried forward to all of our designs. We offer that to our customers as our Cosmos software suite. And what Cosmos allows you to do is figure out issues very rapidly.
It is basically like a protocol analyzer that ships with every chip. So when there are problems, you can figure them out very easily. But much more importantly, we can also fix them using this Cosmos software. And over time, we were able to get the lion's share of the PCI Express Gen 5 market, because the protocol was new and not everybody interpreted the spec the same way.
In other words, there were bugs both in our chip and in the link partners we were working with. We were able to identify them, and we were able to fix them. That's what allowed us to gain this kind of market share. And on top of that, we offer a lot of diagnostics capabilities, customization capabilities, and optimization capabilities to our customers, who are all using and deploying the Cosmos software in their stacks. Now fast-forward to PCI Express Gen 6.
The easiest thing customers can do is simply upgrade from our Gen 5 to our Gen 6. It's completely seamless — the same software supports it. All of the knowledge that we've acquired over the last several years of PCI Express Gen 5 translates into PCI Express Gen 6. And that is the reason we were able to bring up Gen 6.
So we announced Gen 6 right around our IPO. And coincidentally, Broadcom announced their retimer — very coincidentally — three days before our IPO. Fast-forward one year: ours is now shipping in production quantities. To the best of my knowledge, nobody else has even got their samples working well. And again, the reason for that is all of the rich history we have with our PCI Express Gen 4 and Gen 5 products, how it rolls into Cosmos, and just the collaborative working environment that we have with our partners.
So our customers and our partners realize this Gen 6 ecosystem is going to be a tough one. We've been working on this project for two years with our lead customer, the GPU platform provider, on how to bring this ecosystem up — a combination of both the Aries retimer device, which was your specific question, and the Scorpio switch — just to make everything work. And I'm really proud of what the team has been able to put together. And just like with Gen 5, we found issues, right?
We found issues in our chip, we found issues with our partners, and we were able to put the fixes into our chip. So if we took our chip — the hardware — and gave it to one of our competitors and said, go sell it, they still could not sell it, because it doesn't come with the software that really gives it the magic. Now, having said all of this, our competitors are big names. They are smart people.
They have great engineers, so they will get it right. But there's a huge first-mover advantage in this industry, which is what we have today. And every day I feel more confident that the story we had for Gen 5 will repeat itself at Gen 6. But then the battle will move to Gen 7, and we will fight the battle at Gen 7.
Sounds good. I wanna pause for a moment and see if we have any questions in the audience. This one.
Thank you very much. Just curious about your statements about three large ecosystems, you know, appearing in the data center over time. From my perspective at least, so far what we've seen is more or less a winner take all market. So when one technology, whether it be GPUs or chips or Ethernet and so forth, when one technology has a clear TCO advantage or performance advantage, it seems to be adopted quite rapidly across all the data centers as, you know, the industry standard. So curious to hear why you don't think that will happen with these ecosystems.
Why would they live kind of easily in three rather than just one taking the dominant share? Thanks.
Yep. So I think the first one, the NVIDIA ecosystem, is very clear. Right? They have NVLink. It does wonderfully for them.
You know, they can define what the next generation of NVLink will look like. So they will continue down that path. And, you know, some customers may choose to go the NVLink route via NVLink Fusion. So that opportunity is there. For the rest of the customers, they have their own paths.
Right? You know, like we discussed earlier, some of them are using PCI Express, and they are comfortable with what PCI Express offers. And there are other customers. I think perhaps the only one that has gone public with what they're using in the back end is Intel with their Gaudi platform, which uses Ethernet RDMA based scale up. So it's quite likely that in the near future, they will continue to use that as well.
But standard Ethernet has significant performance limitations versus a protocol that is built ground up for scale up. So moving forward, why wouldn't one win it all? I think it's hard to predict. If you were to bet on one, I would bet on UALink just because of, you know, what that offers both from a technical perspective as well as the openness of the ecosystem. If you go talk to Broadcom, they will say exactly the opposite.
We will bet on Ethernet. Everything is going to be Ethernet. So, you know, to be completely honest, everybody is kind of talking to their strengths. And in one year from now, we will see, you know, who was right. And then, again, to the essence of your question, would it be a divided ecosystem?
It is possible that it becomes divided. Like, if you look at the five hyperscalers, Amazon does their thing. Google actually has a proprietary ecosystem, which is neither PCI Express nor Ethernet. Will they change? I mean, I don't know, but everybody who has got something working likes to continue to enable that for the next generation.
That's what we do. We don't ask our customers to make wholesale changes. We say, hey, look, you are used to the software and hardware approaches that you are using for this generation, maybe it's PCI Express based. Continue that with UALink.
Yeah.
Yeah. So, I mean, ultimately, it's gonna be a higher-performance solution. And, again, I can't provide a one size fits all answer because there will be multiple products. So you'll see a range of ASPs, a range of value that's provided to customers. But when you think about the application itself and the fact that you're scaling up across bigger and bigger cluster sizes, the connectivity between each one of those endpoints becomes very, very critical, because each one of them could be a single point of failure such that if they go down, the entire cluster's productivity suffers.
So there's a tremendous focus by our customers on providing the highest performance, most reliable solution possible into those critical sockets. And as a function of that, we just see higher value. But in addition to that, I mean, we will have, you know, a broader portfolio of solutions as well, and we'll address a wide range of products there just to support whatever the customer wants. But in general, the content will be higher on the back end.
I mean, the simple way of looking at it is your GPU costs tens of thousands of dollars, $25,000, $30,000, what have you, and that is just going up. If you are not providing the right amount of connectivity, it's like you've got a race car that's got no tires. It's just sitting idle. It is, you know, very important for our customers to provide the right level of connectivity and robustness, as Nick mentioned. And the pricing discussion actually comes, you know, very late in the game, because we are able to unlock all of the potential of the GPUs that they're investing billions of dollars in.
I wanna use these last thirty seconds to give you an opportunity to maybe discuss one area of the business that's misunderstood or that you wanna kind of double down on before we close out.
Maybe the one thing that has changed since our last earnings call was just this focus on our rack level vision. Because as you said, we've been successful with the PCIe retimers, most of the world looks at us as a retimer first company. Hey, what's happening with the retimer?
And, oh, by the way, what about all your other products? I think with the fact that in the last quarter we had our Scorpio family reach 10% of revenue, folks finally realized the potential that we have for Scorpio. And the most misunderstood thing about Astera is really that we are not just a retimer company or a kind of simple connectivity company. Our vision really is to provide full rack level connectivity, what we call AI Infrastructure 2.0. And we want to own that not only from a semiconductor perspective, but also the hardware as well as the software that we provide to deliver this rack level connectivity.
Absolutely. Sounds good. Thank you so much for your time.
Thanks.
Thank you, everyone.