Great. Welcome back, everybody. I'm Joe Moore, Morgan Stanley Semiconductor Research. Very happy to have today the management team of Lattice Semiconductor, Ford Tamer, Lorenzo Flores. I don't cover the company, so if I ask questions that I shouldn't ask, just answer the question I should have asked. It's fine. No, no harm done. We've spent a lot of time in the FPGA space, and I like the Lattice story a lot. Ford, I think you wanted to start with some opening commentary, or do you wanna just go straight to the beginning?
Sure, if you'd like me to. First, thank you for having us, Joe. For those of you who don't know, it's Joe's birthday, so happy birthday.
That's getting a lot of advertising, but thank you.
Excited to be here and see some familiar faces. You know, I've been at Lattice now for 17 months. Let me take you through those 17 months and then fast-forward to today. I joined when the inventory situation was six months of inventory in the channel. Revenue had decreased from $730 million to $500 million in 2024. We had to cut costs. We took a 14% restructuring. At the time, we told you what we were gonna do. We said we're gonna grow 25% in 2026 compared to 2025.
We're gonna drive inventory in the channel down to three months by mid-2025, and we'll keep bringing this down. We said we're gonna be growing big time in 2026 in data center AI and Nexus, which is our small FPGA product line. 2027 is the year of physical AI and Avant, our mid-range line. We basically did exactly what we said. At the end of 2025, our inventory in the channel was down to three months. We're back to growth, and consensus is now above that 25% number for 2026 growth. Margins are still very strong across gross margin, operating profit, EBITDA, and free cash flow. I'm very excited about where the future can take us. The future is bright.
Great. Thank you. I guess you've described Lattice as the everywhere companion chip. Can you describe what that means and the philosophy underlying it?
Yeah. Yeah. The realization when I came in is, look, we don't wanna be the chip. The FPGA in the past has wanted to be the chip, the NIC, the wireless processing, you know, the near-edge AI. We wanna be partners to all the major chips that are driving the growth in both data center AI and physical AI. The MVP, the most valuable player, is the GPU or the AI accelerator or the network switch or the NIC or the, you know, board management controller, the microprocessor, the microcontroller, or the different sensors. We're a companion chip to all of these chips. We call the GPU the MVP.
The MVP is a very important player, the big player on the field, but they cannot win a game, let alone a championship, without a team, and we're that team. We're that team providing all these functions, to the tune of, you know, it used to be tens of FPGAs per data center rack and is now hundreds of FPGAs per data center rack, all the way from booting, power sequencing, control, management, IO expansion, security, power and cooling, et cetera, to the tune of tens of FPGAs in a humanoid providing the major, you know, control and connectivity to the various image sensors, LiDAR, radar, and the tens of motor controllers that are in these humanoids, robotaxis, and various physical AI applications. We do this.
Being a companion chip doesn't mean you're weak. When we first started calling ourselves a companion chip, our team was worried: "Oh, that means we're weak." Well, we're not weak if we're a companion chip. We actually provide a very powerful function because we are Switzerland. We provide it across all the various vendors. We provide it across all these functions that I just took you through. We're everywhere. We provide this companion chip function everywhere, from the comms and compute markets, which are growing the fastest (this year, in 2026, they'll be over 60% of our revenue), to industrial, automotive, and all these new emerging physical AI spaces: humanoid, robotaxi, medical, aerospace, defense, autonomous vehicles, and even AR/VR on the consumer side. We do it everywhere, in every application. And if you look at applications, we're not selling FPGAs anymore.
We migrated the company to be a solution provider. We sell you security, and it elevates the discussion to a different level. We go in there and we talk about security. We talk about rack management. We talk about power and cooling. We talk about PQC, Post-Quantum Cryptography, which is protecting assets against quantum decryption in the future. We do it near every image sensor, every LiDAR, every radar, every ultraviolet and infrared image sensor. We do it across a variety of other industrial sensors like, you know, temperature, pressure, et cetera. We've recently started to have these partnerships that are dragging us into these high-volume applications.
One partnership we discussed is with NVIDIA, where we support NVIDIA's Holoscan. We can feed, let's say, 15 different camera feeds into our FPGA and run some pre-processing in there that pre-processes the data before feeding it to Orin and Thor, making them more efficient. We've announced a partnership with NXP, where, you know, we had a joint meeting with our customers in Europe in January, and the VP of Strategy from NXP flew from Austin to Munich just to be with us and be the keynote speaker before flying back. We're putting together a partnership where you can have big FPGAs with small micros, big micros with small FPGAs, medium with medium, it doesn't matter, providing the best solution to the customer.
We're announcing many partnerships with image sensor and LiDAR/radar vendors, where we make these solutions easier to adopt and easier to install for customers. That's the companion chip story: across every function, across every market, across every application, everywhere, to benefit the customer and eventually the investors.
That's great. Thank you. NVIDIA, NXP probably doing pretty good these days.
Yeah.
I hear about this NVIDIA company.
Yeah.
The attach rates for FPGAs per server, I think, are expected to move above 3 units, with ASPs above $4. Can you talk about the longer-term trajectory there and, you know, what ultimately caps that number?
What Joe is referring to is the fact that our revenue is growing faster than CapEx. Last year and this year, CapEx in cloud, CapEx for data center, is growing 50%-60%. Last year, we grew our server business 85%. We grew our communication business 60%. We're growing revenue faster than CapEx, and we're gonna continue to do so this year. What's driving this is the following. Number one, the number of servers is growing pretty fast. Number two, AI as a percent of revenue is growing fast for us. AI servers went from 10% of units in 2024 to 12% of units last year, and 15% of units this year. Just to put things in perspective, AI drives a larger percent of revenue.
Last year, on 10% of units, AI drove 20% of the dollar revenue, hence providing a higher dollar per unit. Number three, attach rate: we keep finding new applications. Security has been a very strong application for us, where we started with Root of Trust, we added Attestation, and we're adding Post-Quantum Cryptography. We keep growing in security. We found this new application in cooling, for example, that was recently adopted, and that's, again, growing very fast. The attach rates across applications are growing. The needs are continuing to grow, hence bigger FPGAs and hence bigger ASPs. Overall, you put all these together multiplicatively, and revenue grows faster than CapEx, which is already growing at a pretty fast rate. That's what's driving us.
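The mix arithmetic here can be sketched with the figures quoted (AI at 10% of units but 20% of dollars). The implied per-unit premium below is a rough back-of-the-envelope derivation from those two numbers, not a company-disclosed metric:

```python
# Rough sanity check of the mix math quoted above:
# AI servers were 10% of units but 20% of dollar revenue,
# implying a higher FPGA dollar content per AI server than per non-AI server.
def implied_premium(ai_unit_share: float, ai_revenue_share: float) -> float:
    """Ratio of AI dollars-per-unit to non-AI dollars-per-unit."""
    ai_per_unit = ai_revenue_share / ai_unit_share
    non_ai_per_unit = (1 - ai_revenue_share) / (1 - ai_unit_share)
    return ai_per_unit / non_ai_per_unit

# With the quoted 10% of units / 20% of revenue:
premium = implied_premium(0.10, 0.20)
print(round(premium, 2))  # 2.25
```

On those numbers, each AI server carries roughly 2.25 times the FPGA dollars of a non-AI server, which is why a rising AI unit mix lifts revenue faster than unit growth alone.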
Okay. Yeah, I guess, how do you think about that longer-term trajectory, then, with hyperscaler CapEx? This year, we're sort of looking at the hyperscalers being cash neutral. I'm pretty bullish spending will still be up a lot next year. How do you think about that dynamic affecting you guys?
Yeah. Look, we're booked for the year and through mid-next year. I mean, bookings are extremely strong, and we seem to be strong for the foreseeable future. We don't see a slowdown right now. We're up and to the right. This year has been the year of data center AI for us. Next year is gonna be the year of physical AI, where you've got a whole bunch of these new applications I just described that go to market. You know, it's interesting, some of the discussion we've had at this conference, Joe, is: can humanoid be bigger than data center? Because people are saying, "Look, we've got as much content in a humanoid as we have in a rack."
Mm-hmm.
You know, 15 million servers, 3 million racks; the numbers for humanoid could be anywhere above that. You know, I was listening to the Figure AI CEO on the Moonshot podcast. He's predicting more than one humanoid per person, so that'd be 8 billion humanoids.
Yeah.
Whatever the number, even if that's wrong by two orders of magnitude, 80 million is still a pretty big number. All of the new physical AI applications coming to market next year could be super interesting. We're seeing real productivity gains. I mean, we at Lattice are using Claude, you know, for our own development. We're seeing tremendous productivity gains. We have all of the tools, Cursor, Claude. Claude, over the past few months, has been amazing. You're seeing this accelerating.
Yeah.
These new applications are coming to market, and the rate of change is accelerating in a pretty big way.
I think the irony of all of this is that your two biggest competitors were bought by compute companies with the idea of moving FPGAs more into the compute space. Now you're having all this success. I mean, is that in a way because of that, because they're focused on that central processor and trying to use FPGAs for that?
Look, I mean, our success is directly related to where the sweet spot for FPGA is. The sweet spot is in the low- to mid-range FPGA, non-SoC. That's exactly what we do. We're singularly focused on this mission and on where we're gonna take the company in the solution space. You're seeing us invest in solutions. We've got people that understand these solutions across all these markets, and that's where we're gonna take the company and grow it in a big way.
When you talk about solutions rather than FPGAs, I mean, it's still at its core a programmable FPGA, right?
Absolutely. Absolutely.
Orienting around those solutions.
Absolutely. The—
Yeah.
—at the core, we're still providing future-proofing. Let me give you why FPGA. Security. The algorithms are changing very fast. You want to be able to change them. The old algorithms, like AES-256, when quantum shows up, it's like taking a gun to paper. It's like no protection whatsoever. You've got a huge imperative to move to this Post-Quantum Cryptography, like, today.
Mm-hmm.
These algorithms for Post-Quantum Cryptography, PQC, are still changing very fast. We've got this lattice-based cryptography, no pun intended; that's the name of the algorithms. These are 3D algorithms. But they're changing. Having the ability to change these algorithms in the field for future-proofing is very important. When you talk about humanoid applications and physical AI, FPGA is great because you need low latency. If the humanoid needs to stop, it can stop very fast. You need parallelism for fast performance. You need accuracy for applications such as screwdriving, bin picking, and welding; you need high accuracy. You need connectivity. We have EtherCAT that can connect all these motors and vision sensors together for perfect synchronization. And you need future-proofing again.
The FPGA provides all these functions. For all these various applications, FPGAs are critical, but you also want the low-power FPGA that's gonna be small, cost-effective, and fast-booting. And by the way, you wanna go talk solutions. You don't wanna talk FPGA; you wanna talk security, you wanna talk vision, you wanna talk motor control.
Okay. Very helpful. Thank you. Lead times are stretching out. Can you talk about, you know, what you're seeing there and how that's impacting your business, and any opportunities around that to see higher pricing, anything like that?
I'll hand it over.
The lead times are indeed stretching out for us and for everybody else, obviously reflecting the strength of demand.
Mm-hmm.
What we've been doing is working with our customers directly, and with our channel partners, to say, you know, "When do you really need product?" Right? What we've done with the extended lead times is try to work with the customers to get the bookings through the year. As Ford said, we're seeing them through the middle of next year, past that in some cases, so that we can stage our supply chain and meet the customers' demand. The way it's helping our business is in planning.
Mm-hmm.
We can, you know, we can actually deliver the parts. There's the visibility that we now get from a tighter channel, right? We've dropped the supply in the channel down to where we have much more direct visibility and are able to target supply to end customers that really need it. That's really helping our customer relationships. Then, you know, on the pricing side, we have a different approach than some other people in the industry. We have taken a very long-term view of our strategic customer relationships. We intend to be with our customers for a long time.
What we're working with them on is, you know, we've had a consistent, mature-product price increase type of program, and we're keeping a balance between potential cost increases in the supply chain and how we're approaching pricing. We're comfortable that lands us in the gross margin we've guided through the year. We're definitely not doing a short-term pricing move to optimize, you know, a margin in any quarter. We're looking all the way through this demand cycle and through the product ramps that Ford described and the opportunities we see in front of us.
Can you talk about other puts and takes around gross margin? It seems like product mix should be favorable. You know, you said the pricing isn't changing, but it seems like other factors may.
It's a very dynamic situation. I didn't say pricing wasn't changing.
Yeah.
I just said.
A li—
We're taking—
Yeah.
—balanced approach and a long-term approach to it. Obviously, we're on the lookout and constantly in communication with our supply chain so that we're able to sense where price increases from them will come from, and we're trying to manage our costs. We took steps starting about six months ago, actually, on both the cost side and on the supply availability side, because our head of operations saw some of this coming. We're trying to get ahead of that. We still have to be realistic that it's a factor in our industry, right?
Mm-hmm.
The supply chain constraints and possible cost increases, you know, we started to manage that. We do have the new products. We do intrinsically have a high-value product, right? And our customers do wanna work with us and use our product because of its, you know, low-power, high-performance characteristics and its suitability in all the applications Ford stated. We are, you know, trying to manage that side of the equation on the pricing side to keep it in balance. We do have great product.
Yeah.
And we do have strong demand. You know, we're all very aware of the possible supply chain issues, so.
Great. On that new product revenue, I think you're talking about the mid-20s as a percent of total revenue this year. Can you talk about the progress of that and then the Nexus versus Avant—
Yeah.
—timing?
We think that 20% will get to the mid-20s this year. Good, good progress, and that could continue to grow. Last year, we grew new products 70% year-on-year. This year, we think we grow 60% or more year-on-year on the new products. Nexus is really 2026, with Avant really taking off in 2027.
Okay. Okay. Um, will you—
There's a question there.
Okay. Go ahead.
Yeah. I'm sorry. I'm just sort of—if you were to explain to somebody not very bright like myself, how does an FPGA fit into the broader hardware world? I know you keep making the point that there's a broader role for it in this new AI, GPU, et cetera, universe, but how does it slot in, and why is the opportunity so much larger relative to where we were in the past?
No, that's a very good question. At a high level, you know, if I were to draw a continuum, right? At the end with the most flexibility is a processor. A microprocessor or microcontroller gives you the most flexibility, programming in higher-level languages: C, you know, TensorFlow, PyTorch. All the way at the other end of the scale is an ASIC that is gonna be fit for purpose. It's gonna be a GPU, a switch, a NIC that is gonna require hundreds of millions of dollars to produce, but gives the best power, the best performance for that application, right? That's the continuum, right? We're in the middle. Think of an FPGA as having hardware-like performance, right? But the issue we have is you have to program it at hardware-like levels.
We don't have the same appeal today just because the microprocessor, the microcontroller, you can program in C. You've got many more people that can program these micros versus FPGAs. If you look at these applications, such as, you know, the applications at the data center we're providing, the humanoids I talked about, the robotaxis, aerospace and defense, medical, all these applications require hardware-like performance. They require the performance that we can provide. You don't wanna go spend hundreds of millions of dollars to develop an ASIC. Although, you know, if you've got an application that's really high volume, such as an AI accelerator, switch, NIC, or board management controller, you may as well go spend the tens of millions, hundreds of millions of dollars to do an ASIC.
We can provide you not quite the same performance as an ASIC, but quite good performance, much better than a microcontroller. We provide you performance, we provide you latency, we provide you determinism, we provide you accuracy, we provide you much lower power, we provide, you know, many more benefits than you'd have in a microprocessor, right? We're in this in-between. Why is the FPGA becoming more important today? The cost of developing these ASICs continues to grow. The time it takes and the size of the team it takes to develop these ASICs continue to grow.
You've got applications such as the ones I just described that require that hardware performance. You need this performance, you need this power, and yet you don't have the $10 million, $100 million. FPGA is a great solution, right? As the cost of an ASIC continues to grow, the cost of putting these functions on a 2 nm, 3 nm advanced node continues to grow. I look at this AI chip, this big chip that takes over the whole reticle. I've got only so much logic I can put on this chip. Does it make sense to offload this power sequencing, this control, this boot from it? We're in 65 nm, 28 nm.
We're in these mature processes that make these functions much more cost-effective, you know. It makes more sense for me to offload these functions to the FPGA. And there's more and more pressure on system time to market. NVIDIA and AMD are driving to one year between different generations of GPUs. Elon Musk comes in and says, "I want to go nine months between my xAI type of GPUs." There's more and more pressure on going to market fast. A lot of these functions that you'd typically have the time to go put in an ASIC, you don't have the time anymore. You're gonna have to say, "Okay, FPGA is good. I mean, FPGA gives me the performance, gives me the cost, gives me time to market, and, by the way, gives me flexibility." The security algorithms, as I said, keep changing.
What the FPGA allows you to do is program in the field, in the future. Today, you program a certain algorithm for security. The bad actors are gonna change those algorithms on you. They're gonna come back tomorrow and say, "Okay, hey, I just broke this encryption." Okay, well, guess what? I'm gonna be a step ahead of you. I can go change this in the field in my FPGA. Many, many factors. On ease of use, AI will allow us to close the gap. You know, look at Cursor, look at Claude: being able to program at a higher level, AI will allow us to close the gap and make FPGA broader, more available, and more interesting.
We're seeing adoption across the board, but you want adoption with low power, low cost, fast boot time, you know, small area, and that's what we provide. All of a sudden, we're seeing this thing mushroom. I mean, people in the meetings today are saying, "Hey, we've never seen FPGA grow at these rates." These are all the factors that are helping us grow the company at this rate. Make sense?
Yeah.
Yeah. Another question there.
Could you elaborate on the humanoid opportunity in terms of how you provide value to that new market and what you see as the timeframe over which it will evolve into a very big market?
Yeah. Yeah. Look, first we started with vision. You know, a humanoid obviously has to see where it's going. It needs vision. You've got many cameras. Sometimes you've got a LiDAR, a radar to guide that humanoid, right? Near the vision, you need an image processor that allows you to do this image processing, to do the control, to link up all of these different vision pieces together, and we provide that in an FPGA. Next, you've got motors. You know, there are many podcasts that talk about the hands, each of the hands having upward of 20 motors per hand.
You've got motors all over the torso and the shoulders and the legs; it could be anywhere from 45 to 70 motor controllers, and we can provide the motor control, you know, support. Think about the humanoid having 70 motors. Those 70 motors better be synchronized and work together at the same rate. We provide the EtherCAT connectivity that allows you to synchronize all these motors together. A lot of value we provide. And security, by the way.
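To illustrate the synchronization point, here is a toy sketch in hypothetical Python (not real EtherCAT APIs): a cyclic master loop hands every motor controller a setpoint stamped with the same control tick, which is roughly the role EtherCAT's distributed clocks play in hardware:

```python
# Toy illustration (not real EtherCAT code): a cyclic master loop that
# issues setpoints to N motor controllers stamped with one shared cycle
# count, so every joint acts on the same control tick.
from dataclasses import dataclass

@dataclass
class Motor:
    name: str
    last_tick: int = -1   # last control tick this motor acted on
    setpoint: float = 0.0

    def apply(self, tick: int, setpoint: float) -> None:
        self.last_tick = tick
        self.setpoint = setpoint

def cyclic_update(motors: list[Motor], tick: int, setpoints: list[float]) -> None:
    """One bus cycle: every motor receives its setpoint for the SAME tick."""
    for motor, sp in zip(motors, setpoints):
        motor.apply(tick, sp)

# ~70 motors per humanoid, per the discussion above
motors = [Motor(f"joint_{i}") for i in range(70)]
cyclic_update(motors, tick=1, setpoints=[0.1] * 70)
assert len({m.last_tick for m in motors}) == 1  # all joints share one control tick
```

The point of the sketch: with one clocked bus cycle, all joints move in lockstep; with 70 independent timers, the joints drift apart.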
Timeframe. He asked about the timeframe.
Timeframe, look, I mean, it depends who you hear. There's probably a 2x difference in projections of how this thing grows, but it starts next year. It starts in 2027.
Can you talk about, you know, when you see this kind of visibility, you know, out to mid-next year in some of these markets, can you talk about the risk of double ordering, anything like that going on?
Yeah. You know, I talked earlier about how we're utilizing the extended lead times to get more visibility, and, you know, to expand a little bit more on what I said about tightening the channel inventory: we are really applying rigor to the overall demand we are seeing to make sure that customers are aligning their orders from us with what they see as demand. In fact, we're trying to do our best to look at the rest of the kit that they have in their BOM, so that they don't have our parts sitting on the shelves with no memory, for instance.
You know, we're then working with our overall sales force in different areas to see what they are hearing on the ground in terms of the end demand past that. I think there's a lot more rigor that we're applying as we manage through this, with the priority being supplying our end customers with the parts they need and doing additional checks on it. You know, is there some double ordering? You know, maybe. But because we're allowing customers to book out in time and giving them confidence we will give them the supply, we think that tendency is diminished.
By, you know, tightening the channel, we try to decrease any buffer that's held there as well. You know, it's not gonna be perfect, but we're doing our best to not be surprised by it.
Great. Thank you. The other question we get, when FPGA start growing into high volume spaces like AI, is ASIC replacement risk—
Mm-hmm.
—ASSP replacement risk.
Yeah.
Can you talk about that?
I mean, look, if we were a really expensive part that, you know, drew a lot of power and was very visible on these boards, we'd be at risk of being replaced. Today, we're at very reasonable prices, and we have so many applications across so many vendors. You know, if a hyperscaler were to come in and design their own to replace us, they're gonna miss. I mean, we learn across all of the hyperscalers, the neoclouds, the enterprises, the server vendors, the ODMs, and come up with these FPGAs that are optimized across all these use cases, across all these vendors, across all these functions. In so many use cases, it doesn't make sense to replace us.
It's not like we've got one really expensive, high-power high runner that you can easily identify and say, "I'm gonna replace that."
Mm-hmm.
I mean, this is widespread, you know, many tens of use cases.
That's right.
For, you know, racks, for industrial applications, so.
Right. We are, you know, in the BOM of our customers, you know, we are at the small tail of their costs.
Mm-hmm.
As Ford said, it's a multitude of functions, so they would have to contemplate doing a multitude of ASICs, which doesn't seem to make economic sense.
Yeah. Okay. Helpful. Thank you. Obviously a lot of growth on your plate here, but can you talk about M&A strategy? Is there anything that you would think about where you need to add capabilities?
Yeah. Look, we've actually done a few small tuck-ins that we haven't discussed because they're small: either acqui-hires or IP licensing, small tuck-ins mostly around IP, software tools, and solutions, you know. Over time, I think we feel very strongly about the organic growth that we have. It's very strong organic growth, so we don't have to do M&A.
Mm-hmm.
Now, if the right one shows up and aligns with what I just described on the vision that we just outlined, it would make sense for us to move forward. We're gonna keep a high bar and be disciplined on how we do it.
I mean, do you need scale in this business given the size of your two bigger competitors?
Look, scale always helps. I mean, we can get scale via acquisition, or we can get scale by growing organically. On the cost per engineer, from the time I joined to now, we drove the cost per engineer from $150,000 per head to $100,000 per head. We're growing very fast in India and Penang and the Philippines, and we're looking to grow in Taiwan as well. We're growing in all these geographies and adding a lot of people organically.
Yeah.
Ideally, yes, we can do an M&A, but that's not a requirement.
Yeah. You know, the way we have thought about scale on the organic side is exactly as Ford described, Joe, which is, you know, we actually lowered OpEx between 2024 and 2025.
Mm-hmm.
We have more engineers.
Mm-hmm.
We're helping the productivity of those engineers, not just with the IP acquisitions like Ford said, but also we've deliberately invested in infrastructure. You know, some unnoticed, unglamorous type of investment, but it's really helped the productivity of the engineers.
Okay.
We're getting scale that way.
Very helpful. We have two minutes left. Anybody have any questions?
Yes. The question is, can we give a rough breakdown of our industrial segment? What I would tell you is our industrial segment could go back to what it used to be. If you think about it, we do break out industrial versus comms and compute. Last year, we did $523 million. $195 million was industrial, $290 million was comms and compute, and the rest was consumer. Okay? This year, we expect comms and compute to become over 60% of our revenue, you know, from roughly 55%-56% to over 60%. Consumer to be roughly flat, and industrial to be the remainder.
You can see what that growth is. Next year, we expect industrial to go back and increase faster. What happened to us in industrial is we built this inventory, and we've been bleeding the inventory at the rate of about three weeks per quarter for the past, you know, five quarters. We're now through that, and we're down to an inventory level that makes sense. We're now gonna be shipping our revenue at natural demand, which will help this year. Next year, we do expect an inflection in industrial because of all these new applications and products that I discussed. At a high level, we are committed to being a two-legged stool to start. Leg number one would be this comms and compute. That's very strong today.
Leg number two, industrial. That's gonna get stronger and continue to get stronger next year. We wanna develop a third leg over time, that could be around the solution angle. Eventually, we're gonna have a more stable three-legged stool to continue to grow the company.
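The segment figures quoted above can be checked with quick arithmetic. The consumer remainder and the comms-and-compute share below are implied by the stated numbers, not separately disclosed:

```python
# Quick check of the prior-year segment breakdown quoted above
# ($195M industrial, $290M comms and compute, out of $523M total;
# consumer is the implied remainder).
total = 523
industrial = 195
comms_compute = 290
consumer = total - industrial - comms_compute

print(consumer)                            # 38  ($M, implied consumer revenue)
print(round(comms_compute / total * 100))  # 55  (%, comms and compute share)
```

The roughly 55% comms-and-compute share is consistent with the starting point quoted before this year's move to over 60%.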
I'm not sure if underneath your question might be what's in industrial. We actually call it industrial and auto. You can probably derive from the commentary that auto is not a very big part of our business. We don't have a lot of exposure there. Within industrial, we have classic factory automation type applications. We have medical applications. We have defense as an emerging application set for us. We have industrial power management, a bunch of things like that. It's a very broad category for us. We don't break it out any further, despite the fact that it's named industrial and auto, and auto is not a significant part of our overall business.
We'll wrap it up there. Thank you so much.
Thank you.
Thank you. Good to see you.