All right, good afternoon and welcome to JPMorgan's 53rd Annual Technology, Media, and Communications Conference. My name is Harlan Sur. I'm the Semiconductor and Semiconductor Capital Equipment Analyst for the firm. Very pleased to have the team from Astera Labs here with us today: Jitendra Mohan, Chief Executive Officer and Co-founder, and Mike Tate, Chief Financial Officer, here with us as well. The company is a leader in accelerated compute connectivity, networking, memory, and storage controller solutions. Their silicon and software are integrated into 90% of the world's AI compute servers and clusters. It's been a busy earnings season, so I've asked the team to maybe kick us off with a brief overview of the March quarter results and June quarter outlook, and then we'll go ahead and kick off the Q&A. Gentlemen, thank you for joining us this afternoon.
Thanks, Harlan. Good afternoon, everyone. Last week, we reported our Q1 earnings. We reported $159 million of revenue, which was up 13% sequentially and 144% from the prior year. We did 74.9% gross margins, and we beat on earnings by $0.05 with $0.33 of EPS on a non-GAAP basis. We continue to see very strong growth from our Aries and Taurus product lines. In particular, right now, we're seeing those product lines do very well with internally developed AI accelerator platforms for scale-up and scale-out connectivity. We guided Q2 to $170-$175 million of revenue, which is up 7%-10% sequentially. We continue to expect good growth from Aries and Taurus, but we're also excited to be shipping Scorpio in volume for the first time; a lot of the incremental growth in Q2 is from our Scorpio product line.
Great. Perfect. Let's start off at a high level, sort of big picture. You know, it's great to look back over the past year post the team coming to the public markets in March of 2024. If we use this year as a proxy for the team's performance, right, you're currently tracking to drive about $700 million in revenues this year. A year ago, calendar 2025 revenue estimates were $400 million, right? You're tracking 70% better relative to 12 months ago. Earnings power this year is $1.35 per share, versus a year ago when we were estimating $0.50 per share for calendar 2025, so tracking two and a half times better relative to what we thought a year ago.
It looks like the team will be driving, you know, at or close to $1 billion in revenues a year earlier than we expected 12 months ago. Significant upside, significant outperformance, in both revenues and earnings power over this period of time. You know, the team has brought a plethora of products to the market over the past year. As we look forward, right, from calendar 2024, we still anticipate the team driving a 35%-40% revenue CAGR and a 45%-50% EPS CAGR. Jitendra, what do you think are the strong drivers for the team over the next, call it, two to three years?
Yeah, thanks, Harlan. First of all, as Mike would say, we only guide one quarter out, so please keep your excitement in check. But we do have very exciting things going on. If you think about, you know, what drove the majority of our revenues in 2024, it was really the Aries retimer family as it was being rolled out with the Hopper platform from NVIDIA. So that enjoyed, you know, pretty significant growth in 2024 and even before that. But towards the second half of 2024, we started ramping our Aries products and Taurus products for the scale-up and scale-out opportunity, and also in the form factor of a smart cable module, which then gets assembled into an active electrical cable.
So that was our first foray, if you will, into scale-up networking, and it's going to be a significant growth driver even in the outer years. Looking at 2025, the first half was driven by ASIC platforms where we participate in scale-up as well as, you know, some amount of scale-out, with the growth being driven by Aries and Taurus smart cable modules. As we look into the second half of 2025, we see strong growth from our Scorpio family for scale-out and GPU connectivity, specifically with some of the Blackwell-based platforms as they get deployed in a customized form factor at our lead hyperscaler customer. That should drive good growth for the rest of 2025. As we look at 2026, our Scorpio X family becomes even more important.
As a reminder, the Scorpio X family is what's used to connect GPUs to each other in a scale-up network. Over time, we expect that to become an even larger part of our overall portfolio because that is the largest TAM that we are targeting. Currently, those revenues are really just starting from scratch: Scorpio P for the rest of this year and then Scorpio X, you know, layering in in 2026. Outside of AI, we also have general purpose compute, where our Leo platform will start to show some, you know, meaningful revenue in 2026. We are in advanced stages of qualification with our lead hyperscaler to implement memory expansion for a, you know, large database application. That should also set us up nicely for 2026.
Perfect. Let's talk about the overall environment, right? On the overall AI and data center spending environment, there have been some concerns on the CapEx spending momentum potentially peaking this year, maybe some AI compute digestion, right? You've had a lot of noise around efficiencies from model innovations like DeepSeek. We also saw more restrictions on AI shipments to China recently, and obviously tariff and trade concerns, right? On the flip side, strong ramp of your merchant GPU customers on their next-gen AI platforms, strong new AI XPU ramps, new entrants into the ASIC XPU market, right? Since the beginning of this year, has anything changed meaningfully, positive or negative, on customer programs or the demand outlook for this year? And more importantly, what's your confidence level on continued strong growth into next year?
Yeah, so from a customer ramp standpoint, nothing fundamentally has changed from the expectations that we had laid out. If anything, we are kind of pleasantly surprised at the amount of traction that we are getting for some of our new products on the scale-up side. You will hear the word scale-up, you know, many times today. Somebody keep count. Definitely that has changed. I think what has also meaningfully changed is our own confidence in execution.
Yeah.
We had, you know, just started the qualification process with our Aries Gen six retimer as well as the Scorpio P Series for the Blackwell platform. That is going very well. We are, you know, a long way through that qualification process, and we expect to see some meaningful revenues even earlier than what we had anticipated. Just this quarter, the second quarter, we'll see some revenues from Scorpio P, and that continues on for the later part of the year as we discussed. On your other question about CapEx spending this year versus next, I mean, we don't have a crystal ball, but, you know, we still believe that we are in the early phases of this AI rollout. The scaling laws are pretty much intact, you know, especially with new innovations like DeepSeek and so on.
If anything, there will be more performance gains to be had by adding more hardware. We expect that our hyperscaler customers will continue to invest. Our job really is to grow faster than CapEx is growing. That is what we are focused on. Regardless of which way the market is going, we grow faster than the market based on the increased content we have with our new products.
A dynamic that has continued to unfold in AI compute infrastructure is the adoption of custom ASIC AI XPU accelerators, right? Google's TPU, Amazon's Trainium, Meta's MTIA, Microsoft's Maia program. Additionally, we've seen new ASIC programs even just over the past kind of six to nine months; OpenAI, SoftBank, and Arm are good examples of that. We estimate that from a unit perspective, the XPU mix this year will be 40%-45% ASIC XPUs and 55%-60% merchant GPUs, with merchant moving to 50%, you know, by calendar 2027. Our view is that the move to custom ASIC implementations is actually a net positive for the Astera team: more reliance on standards-based protocols for their scale-up and scale-out connectivity and networking requirements, right? I wanted to get the team's view on Astera's dollar content capture opportunity, merchant GPU versus sort of ASIC.
Yeah, so I would say, first of all, we like to play nice with both GPUs as well as ASICs. We like to be kind of the Switzerland of connectivity, so to speak. We are supporting the Blackwell rollout very well. As the ASICs roll out, we will continue to support those as well. The meaningful difference in content comes from where we are playing in these complex AI systems. If we play only on the scale-out side, then that's sort of the baseline of content that we get, which could be an Aries retimer or Taurus Ethernet smart cable modules or even a Scorpio P Series switch. When we also play on the scale-up side, that's where the content, you know, increases pretty significantly, because in addition to having, let's say, Aries smart cable modules, we now also have Scorpio X devices.
Because of the nature of scale-up connectivity, there are more links that are running faster. It's really a much more important part of the overall, you know, AI infrastructure or AI rack. That allows our content to go up meaningfully into hundreds of dollars on a per GPU basis, which is significantly higher than, you know, where we used to be just a year ago with effectively, you know, one or two retimers per GPU.
Would you agree? You guys, like I said, ship into, you know, 90% of all the big AI clusters out there. Are we kind of in the ballpark, you know, this year, with kind of a 40%-45% ASIC XPU mix and kind of a 50%-55% share for merchant GPU? Are we kind of in that ballpark?
It sounds about right. You know, you probably have a much better idea of what that mix is than us. We, like I said, we support both of them. Over time, we do see that the hyperscalers will value their own internal developments perhaps more than depending upon external platforms. There are internal use cases for which the internally developed ASICs are plenty good. And then there are use cases where they need to offer, you know, these GPUs to outside customers where the platforms, the GPU platforms are better suited today, but that might also change over time.
Before I start talking about some of the product dynamics, does anybody have any questions? If you do have a question, feel free to raise your hand and we'll get a mic over to you. Any questions?
Could you help us understand, as inference becomes a bigger and bigger portion of this industry, how does that play into your business mix? And any elaboration on your last point? You mentioned that hyperscalers value internal development more. That would suggest that they're leaning more towards pushing ASICs higher and higher. How do you sort of view that world? Thank you.
Maybe to answer the second question first, hyperscalers have a good view of what their customers want, what workloads they want to run, whether they're internal or external. Overall, they'll have better control of the supply chain with their internal solutions. It's no surprise that they will value those products more. The commercially available GPUs, the third-party GPUs, are better today. You know, the products that are coming out of NVIDIA, Hopper, and now Blackwell are just, you know, state of the art. It'll be a while before these guys catch up. I think at some point in time, they will catch up. Now, to your first question on inference, inference is of course becoming a lot more important today and requiring actually much more compute than previously thought.
As we have these chain-of-thought models or reasoning models, the amount of compute required for inference is now 10X what it used to be just a few months ago. With that, what we are seeing is that the basic unit of compute is becoming a rack-level AI infrastructure. Whether you're doing training or you're doing inference, you need a whole rack. Given that our opportunity is at the rack level, and there is more opportunity at the rack level because of just the complexity of the system and how far these signals need to run, it is overall a good thing for us. There are some customized inference solutions that can be deployed. In fact, we released our MGX reference board, which you can use for small-scale inference. Having said that, though, I think most people will try to use the same hardware for both inference and training.
We have content in both of those. To the first order, it doesn't matter for us.
Can you elaborate on whether inferencing is going to be mainly done on GPUs, or is there a different version? Maybe CPUs can also do it? Any color there?
Yeah, different opinions. Mostly people who have CPUs seem to think that they can do inference on CPU, and certainly they can. The reality is people are already deploying GPUs. If you already have a GPU cluster deployed, you know, maybe you will deploy a newer cluster for doing training and use the older one for doing inference. We believe that inference will also continue to be on GPUs, especially as you have more GPUs available as well as the inference requirements go up.
Let's turn to your product category. On your flagship Aries smart retimer product, still the largest part of your revenue mix, we estimate about 65%-70% of your overall revenues. All of it, you know, up until recently has been Gen five PCIe retimers, 90%+ market share. Strong growth here in the first half, primarily driven by continued adoption of merchant GPU as well as ASICs, right? First half of the year. You have qualified your next generation PCIe Gen six retimers starting to ramp this quarter to support the ramp of, like you said, some of the custom rack scale architectures of the number one GPU supplier in the market. Additionally, you have continued step-up of new ASIC programs like Google's next-generation TPU ramping in the second half, still using Gen five.
How do you see the Gen five retimer mix versus Gen six exiting this year and, you know, in and through calendar 2026? And, you know, when are the custom ASIC XPU platforms going to start to migrate to Gen six?
Yeah, so the Gen five still has a lot of growth ahead of it. What we saw last year was a lot of the adoption for Gen five for AI servers on the scale-out networking, especially third-party GPU platforms. In Q3, we started to do the scale-up. The scale-up is a much bigger unit opportunity because of all the connections that need to be made. Also, we service that one as a module, so it has a higher ASP. They use active electrical cables for the scale-up connectivity. That is still building momentum in 2025. It still has a lot of legs to it. Within that, you still have general-purpose servers adopting Gen five retimers as well. Now, that being said, you know, we're the first with Gen six and we're getting good traction out there. The ecosystem's still evolving.
You have the Blackwell platform, the first Gen six-capable GPU, and we have designs with that. That will build a lot of momentum in 2026. It will layer growth on top of a growing Gen five mix.
Perfect. You know, like I said, 90% share of the retimer market, and fast-growing markets will always attract competitors. We've seen competitors introducing Gen five. We've seen competitors introducing Gen six retimer solutions, right? Investors are concerned that, you know, customers do want diversification of suppliers. I'm wondering, especially in AI compute, where the cadence of new product introductions is so aggressive and performance and reliability are such top priorities, whether supplier diversification is actually not that high of a priority, right? As long as the current supplier has a strong track record of execution in delivering the right products at the right time, from my perspective, there's just too much risk to bring on new suppliers, especially with unproven software. You've got, like I said, 85%-90% share of Gen five.
Given your design win visibility, does the team anticipate maintaining continued strong share into the Gen six adoption curve?
Yeah, so Harlan, a good point. I will agree with you on sort of how quickly this market moves and what the main care-abouts are that our customers have, which are, first and foremost: can you make my system work? You need to have the right level of performance. Can you make the system work robustly, meaning, you know, day in and day out, you're able to maintain that performance, have the right diagnostics capability, and so on to understand, if things go wrong, why they are going wrong and where they are going wrong so they can take corrective measures. We make all of this possible through our Cosmos software, which is already deployed at our customers. That does make our solutions very sticky. That's what we need to continue to do. We have to always stay paranoid in terms of competition.
Competitors will come. It's a large market. Typically, when we start out, we make the assumption that we will get our kind of equitable share of the market. Over time, as our solution matures while the competitors are still not there, we are in a very good position to repeat the story that we had with PCIe Gen five, also with PCIe Gen six. I do want to mention one other dynamic that is at play, which is that it's no longer about kind of piece components, my retimer versus your retimer. Not only do you need to have the software, but you need to have the portfolio. Now if you look at PCIe Gen six, you know, we have the PCIe Aries retimers. We will have the active electrical cables.
Just like we have for Gen five, we have the switches for Gen six. Now we have also introduced the gearbox. We have a full portfolio of products. Anybody who's designing a PCIe Gen six-based system is likely to come to us because we offer the full solution, all the different chips as well as the software that goes with it.
Yeah, speaking about, you know, the portfolio being a significant differentiator, the team launched its Scorpio family of fabric switches last year. It's a $5 billion market opportunity over the next few years, taking your total addressable market opportunity, I think, to something over $12 billion over the next few years. You've already achieved design win success for both the P Series and X Series of Scorpio, which is expected to drive about 10% of revenues for 2025. Assuming the average ASP for these products is several hundred dollars, this would imply that the team is only going to ship something like 250,000-300,000 units in the second half of this year, as the rough math sketched below shows. Assuming a one-to-one switch-to-GPU attach, this is only 2%-3% of the market, right? Huge market opportunity still in front of the team, right?
Could you maybe comment on the number of customer engagements and maybe the breadth of deployment of your solutions?
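(For reference, a minimal sketch of that back-of-envelope math. The $700 million revenue figure, the 10% Scorpio mix, the roughly $250 ASP, and the assumed GPU/XPU market size are all assumptions from this conversation or illustrative placeholders, not company-provided numbers.)

```python
# Rough back-of-envelope math behind the ~250K-300K unit estimate.
# Every input here is an assumption from the conversation or a placeholder.
full_year_revenue = 700e6   # ~calendar 2025 revenue estimate, USD
scorpio_mix = 0.10          # Scorpio expected to be ~10% of 2025 revenue
avg_asp = 250.0             # "several hundred dollars" per switch, assumed ~$250

scorpio_revenue = full_year_revenue * scorpio_mix   # ~$70M, mostly second half
implied_units = scorpio_revenue / avg_asp           # ~280K switches

# Assuming a one-to-one switch-to-GPU attach, compare against a hypothetical
# 10M-15M unit GPU/XPU market to recover the ~2%-3% penetration figure.
for market_units in (10e6, 15e6):
    share = implied_units / market_units
    print(f"~{implied_units / 1e3:.0f}K units is {share:.1%} of a {market_units / 1e6:.0f}M unit market")
```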
Yeah, I'm happy to do that. First of all, I would say that, you know, even though we are shipping that number of units, or close to it, that in and of itself is a huge achievement, because the Scorpio family will constitute 10% of our revenue and we just started shipping this year. I think kudos to the team; some of you might be listening to this webcast. The other point is about the market opportunity, and it is indeed huge.
One of the things that has very pleasantly surprised me over the last three months is just the number of engagements that we have with our Scorpio family, both on the P side but even more impressively on the X side, where Scorpio has really become this anchor socket: customers start designing their AI racks around the switch as well as the connectivity components that we have for PCI Express. Not only do we get these conversations going for the switch and what the switch brings, but we can also pull in other components to deliver an overall connectivity solution at the rack level. While we are doing this with PCI Express Gen six today, it dovetails nicely into the future, where we might want to do the same thing for, for example, UAL.
Right. Yeah, we'll get to UAL in a second, but great to see the team getting qualification, ramping your PCIe Gen six portfolio, which includes Scorpio P. That being said, there are still a lot more custom AI ASIC programs coming to the market that are still going to be Gen five, right? They want to get to market quickly. They're familiar with your retimer. They're familiar with your Cosmos software solution. Switching performance from your products is very strong. Has the team also been able to get design win traction on your PCIe Gen five Scorpio P series-based switching solutions as well for these new ASIC platforms, or are they just future-proofing their platforms by adopting your Gen six platform?
So anybody who's doing a new design is going to look at Gen six as sort of the way to go because, you know, as Mike mentioned, Blackwell is the only Gen six-capable GPU out there today, but it's fair to assume that all new GPUs that are going to come out will support Gen six. Definitely that's a very strong driver to consider PCIe Gen six-based solutions from the get-go. Any solutions that are already using existing PCIe switches are unlikely to transition because, just like you said earlier, these things move too fast, and it's not worth it to enable a second source there. You know, that's not of super importance to us, and we never counted on that.
There are some opportunities kind of in between where there's a new design being done at Gen five using Gen five accelerators, which requires a switch. We are very actively engaged in a few of those, a few important ones for us. I wouldn't quite call it a design win yet because design win is a very strict definition for us, but very strong engagements, you know, board schematics, layouts being done, and so on. Stay tuned.
Let's move on to Scorpio X. As you mentioned, that's potentially the bigger opportunity here, right? XPU-to-XPU scale-up switching and fabric solutions, right? We've articulated all of the XPU ASIC programs, either in the market or coming to the market, right? All of these programs are rack scale. Customers really need help with the rack scale connectivity architecture. Perfect entry point for Astera's Scorpio X to enable this XPU-to-XPU connectivity. Given the urgency of these programs, I mean, will you have some customers that will be deploying Scorpio X in production rack scale deployments in the second half of this year?
Yeah, so I think if you look a couple of years out, the vision that we have at Astera Labs is to be providing the connectivity infrastructure at the rack level, which includes the switch, the retimers, active electrical cables, or whatever components might be needed. You know, it's copper today, maybe it's going to be optical in the future. That is our vision, and we will continue to support our customers towards that vision, even starting now. To your specific question on Scorpio X, we are already in pre-production. The qualification work has started. It'll continue through the rest of the year, and we start to achieve, you know, production volumes for the Scorpio X family towards the end of the year. Really, the big opportunity for Scorpio X is going to be in 2026 when it gets deployed at the rack level.
As you engage with customers on Scorpio X, you know, are most of them choosing a PCIe-like protocol, or are they choosing a proprietary protocol, or the new industry-standard UALink protocol? Maybe you can talk a little bit more about the industry-standard UALink initiative and the benefits it can bring versus proprietary or Ethernet-based solutions.
Yeah, so today it's a diverse ecosystem. Everybody's using, you know, something that's proprietary, like for example NVLink, or something that already exists, for example PCI Express or PCI Express-like protocols, but also in some cases Ethernet, simply because Ethernet happens to be available and an Ethernet switch happens to be available today. It's a fragmented space today, and that's why we feel very hopeful that UAL, which is a new standard, will be sort of the one unifying standard for rack-level connectivity. Now, UALink, or Ultra Accelerator Link, is an industry-wide consortium, and Astera Labs is kind of proud to be a promoter member on that board. What UALink does is build this protocol from the ground up for AI. The whole protocol is targeted at scale-up networks connecting AI accelerators, ASICs, or GPUs to each other.
What they did is they started with PCIe, because PCIe natively has the capability to connect GPUs together and make them look like one large GPU. So they started with that. The memory semantics and the lossless nature of PCIe are all carried over into UALink, and then they throw away anything that's not necessary. You end up with a streamlined PCIe protocol as part of UAL, and then they couple it with the fastest SerDes available from Ethernet. You're getting the best of both worlds with UAL, and, even more importantly, you're getting a very good ecosystem where there will be many vendors providing solutions in the space and customers looking to adopt it in a truly open ecosystem.
Now, if you contrast that with UEC, UEC will also eventually do the same thing, but they're starting with an Ethernet protocol, which was not meant for scale-up, and they're borrowing many of the features that are present in UALink and in PCI Express into UEC. By the time it's all said and done, you know, maybe it is Ethernet in name, but with a lot of new features that make it not much like Ethernet. We will see. I think the biggest difference, though, is again going to be in the ecosystem. UEC is driven by one large company today, even though it's an open ecosystem. Over time, how it evolves remains to be seen, but I do think that hyperscaler customers are looking for a truly diverse, open ecosystem for their rack scale connectivity solutions.
Any questions from the audience? We got one right over here.
Hey, Jitendra, thanks for taking the question. I just wanted to follow up on the comments you made about the move to rack scale being a real positive for the company. Does that also apply to NVIDIA-based racks?
Yeah, so we are very happy with the progress that we've made with the NVIDIA platform, Blackwell in particular. As you said before, Hopper to Blackwell is a positive factor for us. Our content goes up very significantly. However, I think your question is specifically about NVIDIA reference designs that use Blackwell. In those reference designs, we have minimal content as we move forward from the Hopper generation to Blackwell, but that is much more than offset by the customized designs that the hyperscalers are doing, using the same great technology that Blackwell offers and then customizing it for deployment into their own data centers. We get content for our retimers there. We get content for our Scorpio devices. On the whole, it's a big positive for us.
Thank you.
Software, let's talk about software. Software is a key part of your silicon solution. It's a key part of the differentiation, right? In fact, your Cosmos software and management platform is deeply integrated into your customers' data center software management stacks, right? This is a very sticky part of the solution. Cosmos, integrated in your chips, enables things like real-time diagnostics, telemetry, and debugging, right? Ultimately enabling faster time to market and faster bring-up of compute clusters. As you bring new solutions to the market like the Scorpio P and Scorpio X series products and continue to expand your product line, is the team continuing to leverage its Cosmos software? Is this still a major competitive advantage for the team?
Oh, absolutely, 100%. In fact, I'll share an anecdote. I talked to somebody at one of the hyperscaler customers and asked, "What's easier, setting up a 100K cluster or continuing to run a 100K cluster?" They didn't even blink: running a 100K cluster is far more difficult than setting one up. So while everything you said is very important, Harlan, in how you bring this up and how you debug it, the ability to provide really intense, detailed diagnostics on what's going on in this cluster is super critical. With our Aries products, we are able to provide one level of diagnostics: link health monitoring, environmental sensing, and whatnot.
Now with Scorpio in the mix, you know, these two products are better together, in that not only can you get link-level diagnostics, you can also figure out where congestion is in the system, you know, how full the buffers are inside of the switch. Together, we provide even more diagnostics, all under the same Cosmos umbrella. It just makes the solution more sticky. We are investing a lot in our software. Our software team is already bigger than our design team.
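(Purely as an illustrative sketch of that "better together" idea, here is what aggregating link-level retimer telemetry and switch-level congestion telemetry into one rack view could look like. All class, field, and threshold names below are hypothetical and are not the actual Cosmos API.)

```python
# Hypothetical sketch: unified rack-level telemetry combining retimer link
# health with switch congestion data. Not the actual Cosmos interface.
from dataclasses import dataclass

@dataclass
class LinkTelemetry:            # per-link stats a retimer could expose
    link_id: str
    eye_margin_mv: float        # signal-quality margin
    correctable_errors: int
    temperature_c: float

@dataclass
class SwitchTelemetry:          # per-port stats a fabric switch could expose
    port_id: str
    buffer_fill_pct: float      # how full the egress buffers are
    dropped_packets: int

def rack_health_report(links: list[LinkTelemetry],
                       ports: list[SwitchTelemetry]) -> list[str]:
    """Flag marginal links and congested switch ports in one combined view."""
    alerts = []
    for link in links:
        if link.eye_margin_mv < 20 or link.correctable_errors > 1000:
            alerts.append(f"link {link.link_id}: marginal signal quality")
    for port in ports:
        if port.buffer_fill_pct > 80:
            alerts.append(f"port {port.port_id}: congestion building")
    return alerts
```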
Let's talk about your Taurus product family of AEC networking connectivity: 800 gig Ethernet connectivity, obviously strong adoption within current AI scale-out networking, right? We're now actually starting to see the adoption curve of 800 gig within the general purpose cloud and data center footprint starting to pick up, right? Very early days, but starting to see some momentum, with 200 gig and 400 gig within the data center, you know, maybe starting to lose a little bit of steam, right? I guess the question for the team is, are you still on track to ship your 800 gig AEC solution in volume production this year? And will it be with multiple customers?
Yeah, so just for some background, our current solution is based on 50 Gbps per lane technology. It gets deployed as 200 gig and 400 gig modules. That is largely with one hyperscaler who's chosen to deploy this active electrical cable technology. However, within that one hyperscaler, it is actually quite diversified, both on AI platform as well as general purpose compute platform, different form factors, straight cable, X cable, Y cable, et cetera. Now, as we go from 50 gig per lane to 100 Gbps per lane, you get to 800 gig modules, and we are fully ready to get those deployed towards the end of the year. Now what will happen with 800 gig is there will be some fragmentation still in the market.
Some customers will choose to deploy it with passive copper cables, you know, thicker cables, maybe shorter, and sometimes you need to go to the end of the row, so you need optical solutions because copper just doesn't reach that far. There will be that sweet spot in the middle where active electrical cable solutions will be applicable, and we are 100% ready to address those sockets.
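(To make the lane math concrete, a minimal sketch; the lane counts reflect common Ethernet module configurations and are an assumption, not an Astera-specific detail.)

```python
# Module bandwidth = number of lanes x per-lane SerDes rate.
# Lane counts below are common Ethernet module configurations (assumed).
configs = [
    ("200G module", 4, 50),    # 4 lanes x 50 Gbps per lane
    ("400G module", 8, 50),    # 8 lanes x 50 Gbps per lane
    ("800G module", 8, 100),   # 8 lanes x 100 Gbps per lane
]
for name, lanes, rate in configs:
    print(f"{name}: {lanes} x {rate}G = {lanes * rate}G")
```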
To your point, you know, there's an ongoing sort of market debate regarding the choice between AEC and ACC solutions. As you engage with your customers, how are they evaluating these two options, and how do you envision these technologies evolving over time?
The difference between AEC and ACC is the following. An AEC uses a retimer, which is a component that completely cleans up the signal. It receives the signal, understands it, and retransmits a completely clean copy of it. An ACC uses something a lot simpler called a redriver, which is effectively an amplifier. It takes in the signal that's coming in and amplifies it, along with all the noise and other imperfections that come with the signal, and sends it out on the other side. It is not as powerful a conditioning technique. However, the advantage of redrivers is that they are low power. The simplest way to think about it, in my mind at least: if you can afford the power, you'll always go with a retimer solution because it provides you with all of the diagnostics and the performance that you need.
If power is a constraint, then redriver solutions are more applicable, and typically they work when you control both sides of the equation. So if you control both the source and the destination, with difficulty, you can make a redriver work. To the extent that our customers want it, you know, we will build one ourselves as well. They are much easier to build than a retimer class solution.
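(A minimal numeric sketch of that contrast, assuming a simple binary signal; the noise level, gain, and threshold values are illustrative only.)

```python
# Illustrative contrast: a redriver amplifies the signal AND its noise, while a
# retimer re-decides each bit and retransmits a clean copy.
import random

def noisy_channel(bits, noise=0.3):
    """Each transmitted bit (0/1) arrives as an analog level plus random noise."""
    return [b + random.uniform(-noise, noise) for b in bits]

def redriver(samples, gain=2.0):
    # Amplifies whatever arrives, imperfections included.
    return [gain * s for s in samples]

def retimer(samples, threshold=0.5):
    # Recovers the data by slicing against a threshold, then retransmits
    # ideal levels, so the downstream link starts from a clean signal.
    return [1.0 if s > threshold else 0.0 for s in samples]

bits = [1, 0, 1, 1, 0]
rx = noisy_channel(bits)
print("redriver output:", [round(s, 2) for s in redriver(rx)])  # noise amplified
print("retimer output: ", retimer(rx))                          # clean 0/1 levels
```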
On the financials: the team drove strong gross margins, and 74% is the guidance for the June quarter. That's tracking 400 basis points above your long-term target model of 70%. While you plan to increase the mix of hardware products over the next several years, which could exert some pressure on gross margins, you also have a whole bunch of new products that we just talked about, right? Like the Aries Gen six solution and the Scorpio products that are beginning to ramp. I believe Scorpio does carry a very high gross margin profile given the performance and complexity of the chipsets and the software attached, and Scorpio obviously is going to continue to scale higher. Does the team have the potential to sustain margins above the long-term target of 70%?
Yeah, we still guide people to expect the long-term gross margin target of 70%. You know, as we diversify into multiple product lines, we'll have a wider range of margins per product line as well. Just given all the opportunities we have, we think over time we'll still trend towards 70%.
On operating expenses, I mean, they've been biasing higher as you continue to maintain a very, very strong product cadence and expand your networking and connectivity portfolio. How should we think about the trajectory of your operating expenses over a multi-year period? As the team continues to build scale, how do we think about your OpEx growth relative to your revenue growth?
We're still aggressively investing in R&D and expect OpEx to grow, but with the revenue growth ahead of us, we do expect to see leverage over the long term and deliver, you know, 40% operating margins in our target model.
Jitendra, Mike, thank you very much for your participation. Look forward to monitoring the progress of the team this year.
Thanks, Harlan.
Thank you.
Great job. Thank you.