I'm a systems analyst at Roth. Usually, you know, I have to go back and look at somebody's bio when I'm meeting them, but I've known Nick most of his career. We started talking to each other when he was working at Netlogic, Broadcom, and MaxLinear. He's been around the semis industry for a while, and he's had the fun opportunity to help Astera Labs come public recently. It's a very exciting story in the AI space that Roth has been along for the ride with, and we're privileged to have you here, Nick. It's really great to talk about Astera's story and its success because, you know, the numbers just keep surprising us, and they're pretty amazing. Thanks, Nick, for your time, of course. Welcome.
One of my companies told me this morning that the technology they built two years ago is already outdated and wouldn't really be useful in today's effort. That's how quickly this is all changing. Maybe you can talk about where you sit and how you're seeing these models evolve so fast, and how your technology's helping people keep up. We'll get into what Astera does through the questions, but maybe we could start there with what you're seeing, 'cause that's the flywheel you guys are helping support, I think.
Yeah, definitely. Hello. Okay. Yeah, thanks, Suji, for the invite as well, and I appreciate having a mic. I feel like I'm channeling my inner rapper right now or something, but
Got to station it.
Yes. But yeah, I mean, the treadmill is running really fast and, fortunately for Astera Labs, we get to have a ringside seat to the speed and the cadence that we've become accustomed to. We've been fortunate enough to be able to stay on that treadmill and not get knocked off as things continue to go very quickly. As an example, I think when we came public, you know, right around two years ago, an AI server was defined as an HGX Hopper-based system that had 8 GPUs on it, and that was the extent of an AI server two years ago. Fast-forward to today, and now we're scaling up over 72 GPUs within a single rack and looking to branch out beyond that.
You had Jensen at GTC last week talking about scaling up across over 1,000 GPUs. Astera Labs, in terms of positioning within that infrastructure, is in a very unique position such that we are the nervous system of these boxes, and we're helping to connect very important endpoints such as GPUs but also CPUs, networking, interface cards, memory, storage. All of these things are talking together and to each other, typically through protocols such as PCI Express. We've benefited, you know, due to the increasing speeds of these boxes and the increased size of the clusters. You start to run into a lot of problems when you talk about both speed and distance. We've had a nice ringside seat.
Execution has been great over the last couple of years, and what it's afforded us on a go-forward basis is a seat at the table with hyperscalers defining what the next, you know, generation and the next generation over after that is going to look like. What are the challenges going to be in those boxes from a connectivity standpoint, and how can we help to service those? To your point, the secular trend is very much alive and well. The demand for just raw compute is going off the charts. Our customers and our partners are all trying to solve how to deliver that compute in the most productive and efficient way possible. You know, that's where we're really linking up with these guys.
Yeah, I mean, we're well-positioned. We benefit from the secular growth trends, but we also continue to bring out new products and deliver new solutions so that we can grow faster than the market.
I promise the audience we'll go through the products Astera offers, and the next question is the one before that that'll lead to it. You've been in Ethernet at Broadcom. You've seen the cloud development. What is it about these AI workloads and infrastructures that is so demanding of the connectivity part of this equation, you know, that you have to adapt your products to?
Yeah, I mean, it really is kinda coming back to speed and size, or speed and distance. You know, every generation, we're pushing forward. For Ethernet, for example, we're going from 25 Gb to 50 Gb per lane, just starting to kick off 100 Gb per lane, going to 200 Gb, going to 400 Gb. Every single time you go to that next generation of speed, it limits the distance that you can travel and send signals with high fidelity. While that is happening, you're going from boxes that have 8 GPUs up to 72, up to hundreds of GPUs. The distances are expanding at the same time that the speeds are going up.
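The tradeoff Nick describes can be sketched with a toy model. The reach numbers below are hypothetical placeholders (not Astera figures or standards values); the sketch just assumes passive-copper reach roughly halves each time the per-lane rate doubles:

```python
# Toy model of the speed-vs-distance tradeoff in copper links.
# Assumption (illustrative only): passive reach halves per speed doubling.
BASE_RATE_GBPS = 25   # starting per-lane rate (Gb/s)
BASE_REACH_M = 5.0    # assumed passive reach at the base rate (meters)

def passive_reach_m(rate_gbps: float) -> float:
    """Estimated passive-copper reach under the halving-per-doubling assumption."""
    doublings = 0
    rate = BASE_RATE_GBPS
    while rate < rate_gbps:
        rate *= 2
        doublings += 1
    return BASE_REACH_M / (2 ** doublings)

for rate in (25, 50, 100, 200, 400):
    print(f"{rate:>3} Gb/lane -> ~{passive_reach_m(rate):.2f} m passive reach")
```

The point of the sketch is the direction, not the absolute numbers: each speed bump shrinks the copper budget just as rack-scale topologies demand longer runs, which is exactly the gap retimers and cabled solutions fill.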
This creates a tremendous bottleneck around the connectivity portion of the system, when, you know, compute needs to happen at incredible speeds, but you need a pipe that's gonna be able to transport those signals back and forth. I would say that that's the biggest piece of the problem. I think the other piece that ourselves and others are trying to solve as well is that no two clouds are the same. No two hyperscalers are the same. They all have different ways of approaching this problem. They are all building their own internal solutions that have different characteristics and attributes. They're also leveraging merchant solutions from the likes of NVIDIA and AMD as well.
You know, from a connectivity standpoint, you really do need to focus on open platforms, open systems, open protocols. You need to have flexibility built into your architecture. You need a software framework that's able to adapt to a multitude of different processors, a whole slew of different brands of memories and network interface cards and CPUs, and it all needs to work together regardless of how it's mixed and matched. It's not as simple as just bringing a performant chip to the table that can support speeds and push distance. It also needs to be able to interact with a whole variety of disparate endpoints and do it in a way that's invisible to the customer.
At the end of the day, these guys are, you know, fleet managers sitting in a data center presiding over tens of billions of dollars worth of infrastructure. To monetize that, you need it up and running and performing at the highest percentage that you can. We can sit in the connectivity nervous system and be the eyes and the ears of these systems and provide valuable feedback and insights to these fleet managers to notify them if power surges are coming online, links are going down, you know, all types of different analog attributes and therefore, they can take that data and go solve problems before they become big problems. That's how we're trying to help solve this piece.
Yeah. I guess the key metric for these folks is the uptime really. That's the currency of AI is what you're saying.
Exactly.
As engineers, you get to name your products whatever you want to. You can use Marvel characters, or, as these guys have chosen, astrological names. We'll start with the Aries product, which is the retimer, which, you know, seems like a fairly straightforward product, yet it had profound benefits. Maybe, Nick, you can touch, in the discussion of Aries, on copper versus optical, since there's a lot of talk about optical. We're gonna have that theme maybe throughout this conversation, but we can start there.
Yeah, sure. Aries is kind of our flagship solution that we cut our teeth with, you know, kinda two and a half years ago, designed across 100% of NVIDIA-based platforms as they ramped Hopper into high volume in the marketplace. Even today, we ship Aries into every single major U.S. hyperscaler customer on the planet and every single merchant GPU provider on the planet as well. It's a very broadly adopted, very successful product line that's driving, you know, hundreds and hundreds of millions of dollars of revenue. It's gonna continue to grow going forward. As we just discussed, you know, as distances continue to increase and speeds go up, you're gonna need retimers in places you didn't need them before.
We expect attach rates over the long term to continue to increase. As you move from each subsequent generation of product to another, you're talking about PCI Express Gen 5 moving to Gen 6, ultimately moving to Gen 7, ultimately potentially moving to UALink. Each of those products is gonna be more performant, more complex, more capable than the previous generation and will command an ASP increase as well. This is definitely kind of the foothold of how we got our start. You know, effectively, Aries has been deployed across every major data center deployment for AI over the course of the last couple of years.
We're firmly entrenched from that standpoint, with COSMOS and Aries widely deployed and battle-tested across the board. In terms of optics, we can, of course, kind of start to unpack that a little bit as we talk about Scorpio as well. You know, our viewpoint is that optics is gonna be a big TAM adder for us. You know, we have a very large $25 billion TAM to prosecute today just on the copper and electrical side alone. When you start to bring optical into the mix, you're gonna need additional components around the components that we already supply, and those components are not cheap. That's probably a gating factor for massive adoption tomorrow.
But as you need them, as distances get farther and farther away, and you start to cluster and scale across multiple racks and not just one rack, we'll definitely be looking to sell, you know, optical engines around Scorpio to our XPU customers to bolt around their XPUs, and then potentially, on a discrete basis, different pieces of the optical engine to folks that wanna leverage our technology.
Now let's move on to the Scorpio product, which is the PCIe switch, as opposed to the retimer alone, and that is ramping in volume revenue contribution significantly in the second half of this year. What is the critical element that a PCIe switch is needed for and provides? Because it's an ASP uplift for you as well on a unit basis. All those thoughts would be interesting.
Yeah, for sure. The application and the use case span well beyond what a retimer does. Of course, a retimer is very important, and it's helping to move signals back and forth with high fidelity and to keep systems up and running and productive. When you start to move into a switching class of solution, you're talking about much bigger die size, more capabilities, more functionality. You're now the traffic cop that's sitting in between all these very important endpoints, whether you're talking about CPUs, GPUs, networking, memory, or storage. From a portfolio perspective, we've divided Scorpio up into two main buckets.
We have the Scorpio P-Series solutions that are for what I would call head-node connectivity, being that traffic cop within a compute tray talking in between all those important endpoints. We have an X-Series solution for Scorpio as well that specifically interfaces just between GPUs or just between XPUs for a scale-up application. What we saw over the course of the last 12 months is Scorpio becoming the fastest-growing product line in the history of the company, going from effectively $0 in the Q1 timeframe of 2025 to ultimately shipping over $125 million worth of Scorpio P-Series solutions last year. It's a very quickly growing product, and that was at primarily one customer on one big platform.
It kinda shows you the potential if you knock down a couple more. We're very positive and optimistic about how Scorpio P-Series is gonna continue to ramp. What we just discussed on our latest earnings call is that we expect to ramp Scorpio P-Series solutions at at least two additional U.S. hyperscaler customers, starting probably at the tail end of this year but more into 2027, across both merchant GPU platforms and internally developed XPU-based platforms. Then the Scorpio X-Series solution's gonna start layering on in the back half of the year for scale-up applications. We have a large customer that's going to leverage Scorpio X-Series for their rack-scale solution for scale-up, and that's gonna further, you know, kind of expand our content portfolio as well.
You know, beyond that customer, we've talked about 10+ additional engagements for X Series, of which maybe a couple more will start to ship at the very end of this year and kind of layer on top of that lead customer. Yeah, we're still very much in the early stages of this game. The market is just developing. You know, we're going from basically a market that was $0 last year to a market that could be as big as $10 billion or more by 2030.
That's great. Yeah, the second-half ramp is for the X-Series; the P-Series is already ramping, then. Maybe you could talk a little bit about Taurus and the AEC market. The cabling or connectivity has become an important element, I guess, box to box. Maybe you can help us clarify. There are thoughts of bringing optical all the way to the chip, and there's some cabling involved there as well. Maybe you can help kind of clarify that more.
Yeah, sure. Similar to what I discussed on the PCI Express side, you have the same issues in Ethernet where distances are increasing and speeds are increasing as well. You need, you know, active solutions or active electrical cabling solutions that have retimers on each end of the cable that allow those signals to be passed over longer distances at high speed without losing their fidelity. You know, we had a great year with Taurus last year. Again, we just started ramping Taurus in the back half of 2024, so that was effectively zero at that point. You know, did over $100 million of Taurus revenue in 2025.
You know, getting up to 15%+ of total revenue for the company. That's, again, primarily one customer across a variety of platforms at that customer. But I think the catalyst there is, as we transition in 2026 towards 800 Gb port switching speeds, that, similar to PCI Express, speeds go up and you're gonna start needing cables and retimers in places you didn't need them in the prior generation. Our expectation is that we are going to expand beyond the lead customer in 2026 and ramp additional designs at new customers on 800 Gb starting in the back half of 2026, which should continue to drive some nice growth for that Taurus product line.
That's great. I don't recall you guys being terribly acquisitive, but you did find an asset you liked recently, aiXscale. You'll pronounce it. It is an entry into optical. Maybe you can talk about what it was about that acquisition that appealed to you and maybe kind of how it hints at your roadmap that kind of incorporates optical in that TAM. Yeah.
I think, just to kind of set the stage a little bit, going back to the IPO roadshow, we've always talked about looking out over the next 5+ years and expanding beyond our copper roots and getting into the optical arena. We started out with Aries in a very kind of narrow set of solutions, expanded to Leo memory controllers with bigger solutions, then switch fabrics, which are now much bigger, higher-margin solutions. We provide chip-only solutions. We provide modules, add-in cards, system-level solutions. Ultimately we'll have a copper portfolio, and we'll have an optical portfolio as well. It's not something that we just kicked off since the earnings call.
We've been, you know, planning this for quite some time and putting the pieces in place to be able to participate in this incremental market opportunity. As you look at the solution, we're focused on the scale-up side of optics, 'cause that's where we play today, and that's where we see the biggest TAM. We are very well-positioned with Scorpio X-Series as an anchor socket within these back-end topologies to be able to not only provide electrical and copper-based solutions but to optically enable Scorpio as well for these multi-rack applications where you'll need both electrical and optical solutions. There's a couple pieces of the puzzle that need to obviously be developed and solved. You know, we've been working on some of those organically already for the last 18 months.
You know, as we got close to customers and started really kind of diving in with them, this connector technology, this bridge in between the optical engine and the actual fiber media itself, is a very nuanced and critical piece of the puzzle, and not something that we felt like we were well-positioned to kind of organically develop. We've acqui-hired a small team in Germany called aiXscale, as you pronounced correctly, which few people do, that brought this connector technology, which is basically a small piece of glass that interfaces between the optical engine and the fiber that will be used within our CPO solutions down the road. It could be used in optical engines that bolt on to XPUs, and it can even be sold discretely.
We talked about on the earnings call that we actually have a large customer qualifying just the standalone discrete glass coupling piece, which hopefully could drive some volume starting next year as well. Lots of work to be done here, but comfortable, you know, with the position that we have with Scorpio. We still have some runway here. I mean, I think, you know, we're talking 2028 and beyond before we start to see mass adoption. You know, tightly partnering with customers now to define what needs to take place, what needs to be launched at what time to be able to intercept this market. You know, it's gonna be a big TAM adder. This stuff's not cheap.
Everybody keeps saying, you know, "When does optical take over?" This is not a magical moment where optical takes everything because it is expensive, it is less reliable, it is more power-hungry. There are, you know, gating factors for mass adoption. In places where you absolutely need to span a certain distance that can't be achieved with copper, you'll do it with optical, and that'll start to phase into the, you know, kind of the mix over the course of the next couple of years.
A company like yours can give us insight. As you look at your R&D allocation, how much of it is an optical team versus a copper team, and how much of it is really synergistic, such that we overestimate how separate these are?
There actually is a large amount of synergistic R&D effort being leveraged on both sides of the aisle. Now, of course, there are nuanced pieces of the optical side of the house that will need their own resources and their own team to be able to support and develop that type of stuff. But there are a lot of core competencies on the copper side specifically that are gonna be able to translate into optical. As for how we're gonna grow the team: you know, we doubled our head count in 2025. We're looking to grow that significantly again in 2026. We've done two acqui-hires in the last two quarters to kind of accelerate that process.
You know, they're gonna be slotted into all these different projects and all these different opportunities. You know, like I said earlier, we have a good track record of execution now. Our customers are coming to us to help them solve the connectivity problems of tomorrow and the day after. We need resources in place to be able to support these big mandates and deliver against them without overpromising. The team's gonna continue to grow. The OpEx is gonna continue to grow. We never really break out what portion is working on what, but I would say that there is a lot to leverage across both domains that we will take full advantage of.
That helped clarify a bit for the audience, as copper and optical are being talked about a lot, that there is more similarity than difference. I'll throw it out to the audience for questions to see if there are any for Nick here. Any questions from the audience? Yeah, back there.
Thank you, Nick. Quick question on the speed right now, in terms of how far the signal can reach. Could you maybe talk to the range, like, at what range the signal can be supported?
Yeah. That's probably more applicable to the Aries and the Taurus business lines, where we're doing the signal conditioning and reach extension use case. Today, using copper at current speeds, you could probably go up to 6-7 meters. Beyond that, it starts to get tough, and that would be the kind of breaking point where you'd have to start bending towards optical, roughly speaking.
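That rule of thumb can be sketched as a toy media-selection check. The 7 m breakpoint is Nick's rough figure from above; the 3 m passive-copper cutoff is an assumed placeholder added for illustration, not a product spec:

```python
# Toy media-selection check based on the rough rule of thumb discussed above.
# Breakpoints are approximations: ~3 m passive copper (assumed), ~6-7 m retimed
# copper per Nick's comment, optical beyond that.
def pick_medium(distance_m: float) -> str:
    if distance_m <= 3.0:
        return "passive copper (DAC)"
    if distance_m <= 7.0:
        return "retimed copper (AEC)"
    return "optical"

for d in (1.0, 5.0, 12.0):
    print(f"{d:>5.1f} m -> {pick_medium(d)}")
```

In practice the cutoffs shift with per-lane speed, cable gauge, and signal-integrity budget, which is why each speed bump tends to push more links from the first bucket into the second and third.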
Thanks. Any other questions from the audience? Maybe, Nick, while the audience is gathering its thoughts, you can talk about your work with the NVLink and the Fusion program. That seems like a unique sort of technology opportunity into the NVIDIA platform. Maybe you could tell us what that is.
Yeah. I mean, it's exciting for us. I mean, we've had a long-standing relationship with both NVIDIA and AWS, so we're very happy to be kind of invited to the party to be able to collaborate with these two juggernauts in the space. So for those who don't know, NVLink Fusion is effectively a program that NVIDIA's extended to folks that build their own accelerators or their own XPUs to be able to leverage the NVLink backplane scale-up topology that's, you know, been ramped at scale, is proven, and is, you know, the best in the world at this point. So, you know, there's a big opportunity there for guys like us to be able to provide the translation and be the bridge in between these two worlds, right? Because the XPUs don't speak NVLink.
They speak their own, you know, language, whether they're based on PCI Express or Ethernet, but they need to interface with the NVLink scale-up protocol. There's a lot of work being done on a three-way basis between ourselves, AWS and NVIDIA to be able to solve for what that translation looks like. That will be, you know, productized in a solution that sits next to every XPU that's able to take the language coming out of that XPU and then push it to an NVSwitch on the back end for scale-up. For us, it's a huge kind of incremental opportunity in terms of, you know, kind of revenue and a new program and a new socket.
I think even beyond that, our expectation and hope is that, you know, it's not gonna stop at just AWS. There'll be more folks looking to leverage NVLink Fusion to scale up their processors. We would love for NVIDIA to continue to be a matchmaker for us and to link us up and pull us into some incremental programs. We're starting to see some of that engagement today. So more to announce and, you know, kind of signposts to lay out over the next couple of quarters, but it's been pretty encouraging.
Any other questions from the audience? In the back again.
Can you talk about transitions in the PCIe world? Right now, we're still seeing PCIe retimers at 50%-60% of the mix. What's going on in the underlying transition of PCIe 5.0 to 6.0 and so on that affects your content and opportunities there?
If you go back even just as recently as 12 to 18 months ago, the PCIe retimer business was probably 90%-95% of total revenue. Exiting 2025, it was down to 55%-60% of revenue. That business still grew 70% last year, so you can do the math on how fast everything else was growing to be able to reduce the percentage of exposure on the PCIe retimer side. We continue to think PCIe retimers grow going forward, but we think Scorpio and some of the other businesses grow faster, so it should continue to come down as a percentage of total revenue. In terms of the transition from Gen5 to Gen6, it's still pretty early in the game.
Today, really the only PCIe 6.0-capable GPU in the market is Blackwell, supplied by NVIDIA. That's driving 100% of our PCI Express 6.0 business. Undoubtedly, there will be incremental GPU suppliers, CPUs, and ultimately all the peripherals that will kind of continue to get dragged on to the next generation. You know, I think crossover will probably not happen until 2027. The good thing for us is that, again, on a like-for-like, generation-over-generation basis, we see about a 20% uplift on the ASP side. Even if units were to stay constant (which I don't think they will; I think they'll grow), we will see an uplift in the revenue profile just because of the ASP increase.
Well, it is nice to have a business growing 70% and decreasing in the mix. That's a-
Yeah, that's pretty nice.
a nice problem to have. With that, we'll thank Nick for his time and everybody and, thanks for coming, Nick.
Yeah. Thanks, Suji.
Appreciate it. Yep. Take care.