All right. Good morning. Thanks everybody for being here, last day of the conference. My name's Frank Louthan. I'm the Senior Telecom Analyst here at Raymond James. We're very pleased to have Akamai Founder and CEO, Tom Leighton here with us. We're gonna start out with a few questions, and then we'll save some time at the end for you. Tom, maybe kind of walk us through, you know, the Akamai story, kind of where you fit into the space and what you do, to set the stage for everybody.
Sure. Most of our revenue is in Security, growing at about 10%. Market-leading products for web app firewall, stopping DDoS attacks, bot management, and more recently API security and Guardicore segmentation, which are growing very fast. In Q4, they delivered $90 million, growing at 35%, so they're driving a lot of our future growth. There's a lot of investment around AI-related kinds of security; as enterprises adopt more use of AI, that's a big new attack surface. You know, we're developing the capabilities there. Our fastest growing product area is our cloud infrastructure services. Finished at $94 million in Q4, up 45% year-over-year. Even more exciting, we think that'll accelerate through this year. We're calling for 45%-50% growth in revenue. Really, really exciting.
A lot of big-name companies are using our cloud infrastructure services, including all the hyperscalers, which is a pretty cool validation. And of course, we operate the world's largest content delivery network, the largest and most reliable and scalable by a good margin.
All right, great. You know, you mentioned the Compute platform. Talk to us about how that's kinda changed the business. What are customers looking for when they come to you for Compute, and how does that kinda change your go-to-market approach with customers?
Yeah, what we're trying to do for Compute is the same thing we did for Delivery when we created the content delivery marketplace, and then with Security, where we, you know, created Security as a cloud service. You know, our WAF operates in over 4,000 PoPs in 700 cities, so we can stop all those attacks before they get anywhere near the data center. By being close to users, it's better performance, more scalable, more reliable, and now it's the same story for Compute. You know, we can get our customers' Compute logic close to users, so it's faster, very scalable, and at a very good price point. It's very competitive in the marketplace.
Yeah. Closing that distance comes up all the time with investors. The, you know, the concept of just needing compute nodes near population densities and so forth. With that, you know, you've got your Inference Cloud product that you're launching. Talk to us about that. How does that fit in with the Compute platform, and what can investors expect from that business?
Yeah, as we look to the future, a lot of the use for Compute is gonna be AI inference, agents doing things on behalf of enterprises and users. For a lot of the applications people are developing, you wanna have low latency. Now, historically, that was hard to do with AI because the models were slow, and if it took a few seconds for the model to generate a response, well, it didn't really matter if you were close. But now with the latest generation of hardware, and we're deploying a lot of the new Blackwell 6000s, they're fast to respond. You can generate videos pretty much on the fly. Now the latency means something. Not only that, as the web gets more of these agents and apps deployed, there's gonna be more use of video. That's bit intensive.
You know, even if you didn't care about the latency, which you do, you have a problem with bandwidth coming out of a data center. It's not possible, for example, to have millions of concurrent video-based sessions with users, with personalized video coming from a data center. Same thing as with, you know, streaming big sporting events. You don't do that from a data center or two. You gotta do that, you know, in a distributed way. At Akamai, we have this fabulous, unique distributed platform that's perfect for AI inference in the future.
All right. You've got a lot of the critical components here. You know, when we look at AI, where do you expect to have the most revenue impact? Is it gonna be, you know, delivering traffic, AI bot mitigation, Compute, inferencing at the edge? Where do you think you're gonna see the most impact from AI on all the different parts of your business?
First Compute, then Security, then Delivery. You know, Compute because that's where the action is. With AI, you have all these individualized responses. Security, in a variety of ways. First, the models are enabling the attackers to do more penetrations and be more capable, so you have more need for Security services, especially, you know, Guardicore segmentation. Second, as enterprises use AI in more places, that's a whole new attack surface, and you need special defenses, which we supply with our Security business. Also, you know, with the bot management or the agent management, you want tailored, individualized responses based on who the agent is and what it's doing, and that's what our bot management solution does. That's very helpful. Third is Delivery. Now with Delivery, the revenue generally is based on traffic, which is generally gigabytes delivered.
You know, the big sources of traffic on the internet are video and big software downloads, so a big gaming release. You know, a video is orders of magnitude more traffic than an image, and an image is orders of magnitude more traffic than text, you know, an interaction buying a sweater. You know, the agents themselves aren't generating a lot of traffic per se. There could be a lot of hits, but not a lot of traffic. Where you will see the traffic generated, I think, is as more of the applications on websites become video-based and more of your interaction with the web is video.
You know, you're talking to your shopping helper, and it's a video of a person, you're getting real-time responses, and your personal shopper says, "Hey, look at this sweater. Here's a video of you wearing the sweater at a dinner party." Okay? It's a personalized video. That will generate more traffic. That said, the value in terms of revenue is more in the generation of that video than the transmission of it. Compute would have more revenue benefit from it.
Okay. Let's look at this as customers move more to, you know, blocking or optimizing traffic on the CDN side and Security. How are you positioned to benefit from that? 'Cause with all of this traffic, there's some of this traffic that you want to get through and some that you don't. On the Security side and the Delivery side, how are you able to get some traction there?
Yeah, we've been doing that since before AI. You know, there's all sorts of bots out there and agents doing different things. You know, there's been scraper bots out there for, well, a decade doing all different kinds of things for scraping. Now there's a new scraper bot doing it to train models, you know, for AI. There's been just a whole plethora of different kinds of bots doing different things, and our bot management solution gives a differentiated response that our customers wanna give. Could be a partner bot. You wanna give a good service. Could be a Google search bot. You wanna give it a version of your site with all the right keywords and make it fast. Could be a price scraper bot. You wanna give it false prices 'cause it's your enemy trying to undercut you by a penny.
Could be a bot trying to fill up all the seats on your plane so that you'll buy the competitor's, you know, seats. We've been dealing with that, you know, for a decade. There's nothing really new there. There are the AI bots now, and that traffic has increased, particularly the scrapers so far. We just continue to give whatever service, differentiated service our customers wanna give.
Yeah. Okay. Talk to us about the Compute platform. I get this question a lot from investors and I think from a lot of generalists and so forth. How is your Compute platform differentiated from your competitors, and what is it about customers using your platform versus kind of the household names in the hyperscale world, you know, the large Compute companies that we all know? What is it about your business that attracts customers?
We're more distributed, so we're closer to the user, which gives you better performance. We're extremely reliable. I think that's really important. We're very competitively priced. You know, I think a great proof point is that all the hyperscalers use our Compute platform to do mission-critical things. It's not like they don't have their own big cloud platforms that they can use. They'll get better performance for particular applications where that matters using Akamai.
Well, what's an example of some of those applications? Just to get a real-world example, maybe an application or a product that everybody's familiar with that your platform is much better suited for than the generic offering on one of the large hyperscalers.
Right. Well, one of the hyperscalers uses us for live video. If you watch a lot of sports online, you're actually using our Compute platform. You know, they chose us over their own capabilities, again, because we're close, so they can tailor the event that you're seeing to give the best possible viewing and to synchronize it. Everybody sees the action at exactly the same time, which, you know, could be useful if there's betting applications. Another hyperscaler uses us for ad selection, which again, you wanna do really fast, and it's better if you can do that locally.
You know, of course, we have a lot of commerce companies, and speed makes a lot of difference there, and as you adopt AI, you're gonna get a much richer experience, translates into a lot higher conversion rate, and performance matters a lot. Reliability matters a lot there. One of our very large new customers has capabilities to manage fleets of robots, cars, automated kinds of things. Again, you want that to be as real-time as possible.
Okay, great. On the distribution side, you know, talk to us a little bit about, you know, the status of sort of self-provisioning for CDN and how you maintain relevance with all the large media companies and so forth. We've seen some M&A in that space recently with Warner Brothers. You know, how does that impact you? And talk to us about the outlook for that business.
Yeah, I don't think there's any impact from the acquisition. Both sides use us extensively already today. You know, there's a couple of exceptions, big media companies that do it all in-house, but otherwise, most of the rest primarily, you know, have most of the traffic with us. Most of them do some kind of load balancing among CDNs. In general, in those situations we'll have a majority share.
Okay. That business trended pretty well for the industry last year. Pricing was a little bit better, traffic better than folks thought. What is the longevity of that? Have we kind of reached a plateau for that sort of business? What's sort of the outlook for that going forward?
Yeah. We've guided to a mid-single-digit decline this year in the Delivery business. Revenue is, you know, the combination of traffic growth and per-unit pricing decline. We're continuing to be very, you know, diligent about the pricing we offer. There is business that we don't take, and share we don't take, and some of the big spiky events we won't do if the economics aren't right. You know, I think we are seeing better traffic the last year or so. Looks like that trend should continue, and we're being, you know, very diligent on the pricing. That helps us. There's some revenue we let, you know, go somewhere else, but it helps our business and certainly the profitability of that Delivery business.
Are you seeing any aspect of that business, especially with AI traffic and so forth, where customers are willing to or are seeing value to using you and paying more? Are we seeing some stability to that? I've seen that in some other areas of my coverage where the hyperscale customers want execution, and they want it as quick as possible, and they're willing to pay better margins to some infrastructure companies and so forth. Are you seeing the same thing for some Delivery applications? Maybe that's getting a little bit extra life.
Yeah. We generally are paid more than the competition. Depending on the particular product and application, it could be a lot more or a little more, because we offer better performance. We are a lot more reliable.
All right. Let's talk a little about Security on that side of the business. You know, how's pricing holding up on Security? How should we think about that?
Pricing's holding up well there. We, you know, have the market-leading solutions by a good margin. In some cases this year we'll be increasing pricing. As you know, the cost of memory has, you know, gone up quite a lot in the last few months. Some of that we'll be passing on to customers.
How does that cost of memory affect your business? Talk to us about the capital investment you have to make on the Security side. Is that an impediment to your growth? Are you seeing any issues with your ability to get the memory? How is that impacting you?
No, it's not an impediment to growth. We can get everything that we need. We're a big buyer in the marketplace, but it's more expensive, so we're doing a lot internally to, you know, buy less than we might have otherwise. You know, a lot of our servers that we might have decommissioned this year, the math has changed on their useful life, and so we're gonna leave them in the field. They'll, you know, be working longer just because the memory cost has gone up so much. We've estimated that this year, after all the puts and takes, it'll be an extra $200 million for us in the increased pricing. You know, usually with these things, the supply gets increased, you know, the capacity and the production is increased, and that'll help, you know, abate the cost going forward.
We're not counting on that, so we're doing everything we can to optimize. You know, it'll be an extra $200 million this year.
When we look at that 200, what is the breakdown of that $200 million? Is it additional data center space or, you know, servers? Is it just the memory?
That's purely the memory cost.
That's purely the memory cost?
Yes.
Okay. If you're keeping these older servers on, are you having to take on more data center space?
No. It's just we're not taking on as much CapEx. Buying less of the memory, using the stuff that's, you know, a little less efficient. You know, a six-year-old server, but working fine, and now the math has changed so that, you know, we're gonna keep it in service longer.
Okay, great. All right. With that, thinking about the, you know, the guidance for the year. What does it take to kind of reach the high end of the guidance for this year? What's kind of built into the assumptions for that?
Yeah. The amount of traffic, you know, can be variable, and a strong traffic year helps the Delivery business. How fast we sign on the new Compute business, you know, can make a difference towards the back half of the year. Security, you know, that's a little bit more predictable. There are sometimes license deals, particularly where you get, you know, for example, sovereignty or other issues with critical infrastructure. Maybe they want it in-house, which we can do. The accounting treatment is different if they take control of it versus our running it as a cloud service. That can swing things a little bit. Dollar fluctuations can make a little bit of a difference. You know, if the dollar's stronger, you know, that can depress the revenue through the conversion a little bit.
Dollar weakens, you know, the revenue goes up.
Yeah. Okay. Walk us through the path of Inference Cloud for the year. You're making an investment in that. I think it's roughly $100 million, something like that. Walk us through the pace of that investment. When do we start to see, you know, that revenue coming in? How actively are you selling it, and when will we start to see that showing up in the income statement?
Yeah. We talked about a $250 million investment into Inference Cloud and a large purchase of the Blackwell 6000s. The initial tranche we deployed, you know, last fall, in 20 cities. That actually goes GA at the end of the quarter, but it's already sold out from the beta customers. We're deploying a much larger tranche now. As we talked about on the call, we have a good chunk of that already committed in a four-year deal with a large account. You know, as we sell out the rest of it, which is not deployed yet, you know, then we would add more after that.
For the revenue associated with the tranche that we're, you know, in the process of deploying now, we'd be looking towards the end of the year to start to see it. First, we gotta get the servers, deploy them, you know, get them all turned on and then used. So it's towards the end of the year that we're looking for revenue generation there. It's really more of a big impact next year.
Is this incremental data center space that you're taking on this year to deploy that? Have you had any issues there? And about how many megawatts of space does that require?
Yeah. A bunch of it we already have. You know, as we initially deploy, we sign long-term colo deals for increasing amounts of usage. We are adding new data center space now on top of that, so it's a blend. You know, the typical large-size data center for us now is, you know, 10-ish, maybe 10-20 megawatts.
Okay. In breaking down some of that investment, how much of that is the servers, and how much is the data center space you're taking on to deploy this?
Yeah. That would be the CapEx side of things. The Blackwell 6000s and the associated hardware. Typical use cases use not just the 6000s but actually our whole platform. It's the CapEx needed for that. In addition, there's, you know, colo space that in some cases we're already paying for or is already on our books, because you linearize the accounting, you know, when you take on a long-term colo deal. In some cases it's new.
Okay. You mentioned there's, I think you said, 20 cities that you'll be going to. You know, the thing that I think is interesting for me, being a telco guy, is looking at the 4,600 locations you have on the Delivery network and finally being able to use that to bring this elusive concept of, you know, edge computing that has never seemed to have materialized. You're finally in a position to do some of that. As you look out at that network, and this is a question I get a lot from investors, how much can you deploy and where can you go?
One of the gating factors there is power, because a lot of your locations are in telco data center facilities, or telco facilities that were built for voice, and so maybe have a little bit less power. If you look at all those locations, how many of those can you realistically deploy to? I mean, you're looking at Inference Cloud. How many megawatts does it take to put a pod of servers and GPUs out there? How should we think about that?
Great question. The answer is it's hierarchical and depends on what you're trying to do. In all 4,300 locations, 700 cities, we operate function-as-a-service, you know, our EdgeWorkers solution. That's everywhere. Totally serverless, and that's been out there for a little while now and is used actively. The next tier up would be our managed container service. This is where we deploy customer containers, in software, onto our existing hardware, so no extra deployments needed. That can go into any of the 4,300 locations. We're actually using it live today in between 100-150 cities. Not all of them, but 100-150 of the larger regions. Actually, one of the hyperscalers is using that capability today.
That handles your containers. You know, stepping up another level, you have full-stack Compute and storage. You know, VMs, big storage, object store, and that runs today in 36 cities, about 40 data centers. That's where we're deploying, by and large, the Blackwell 6000s. We're in 20 cities today, and the next tranche will be maybe some new locations, but mostly beefing up the existing ones for the next tranche of the 6000s.
So the critical piece is having it in the city, not necessarily in all four or five or six PoPs in the city.
Yeah. You're right. The 6000s, they're not going into the 4,000 locations. Those aren't set up for that. But they could go into 100-200 cities over time.
Okay, great. That's great. All right. Why don't we see if we have any questions from the audience, and I got a couple more. Eric, go ahead.
Who are the buyers of the new full stack that you're rolling out?
Usually it goes to the CIO.
Okay.
You mean by the function, job function or?
The kind of organization. Are those hyperscalers or more enterprise buyers?
Enterprise buyers.
Enterprise buyers.
Including the hyperscaler companies.
Okay.
Yeah, the major enterprises for us initially were big media. Now commerce is heavily engaged. You know, our biggest customer is industrial.
If you don't mind, may I follow up?
Yeah, go ahead.
How do you normally price these? Are these one-year deals? If they're longer, are there escalators for, you know, things like memory and so forth?
Again, it's hierarchical depending on what you wanna do. You can, you know, buy in a product-led motion on the website. It's two and a half bucks, you know, for a VM hour. And you know, you can buy one hour if you want. As the customers get larger, now you're probably in a sales-led motion. Often there'll be a commit to a certain amount of usage over a period of a year, two years, three years, and the rep's comped for longer-term deals. What's new for us now is you can buy clusters, and that would be multi-year commits. We talked about our first customer in that motion, with a four-year, $200 million commit, and they're, you know, basically buying a cluster of the GPUs.
Got it.
What does the revenue breakdown look like at the low end of a cluster? What are we talking about as far as monthly recurring revenue, just to get an idea?
Well, the monthly recurring revenue depends on the size. You know, if you buy at list, it's two and a half bucks for a VM hour; obviously, if you're buying a cluster for four years, you're gonna pay a lower rate than that.
Yeah. Okay, great. All right. Any other questions from the audience? No? All right. Okay. Talk to us about capital allocation priorities for the next 12 months. How should we think about that?
Pretty much, you know, where we've been all along, I would say. We buy back the equity generally that we distribute to employees. We opportunistically buy back a little bit more, you know, on average, maybe 1% of the equity outstanding a year. Last year, we bought back more than we'd ever done before, about $800 million. Part of that was that we did a convertible and bought back some stock as part of closing it. Our strategy is the same. We use the capital for M&A. We also, you know, use it obviously for CapEx, which is part of operating the business. There's no intended shift in terms of our use of capital and buying back stock.
Talk to us about M&A. You've bought, you know, quite a few different companies. Usually, you're buying a product, you're buying capability. It's more of a buy-versus-build strategy. You're building the Inference Cloud yourself. Where do you see opportunities for M&A? Is it in Security? Is it in Compute? Where do you see things to plug into the platforms?
Yeah, we're looking in both areas. You know, Security is a very fast-moving landscape, so obviously a lot of interest there. We've been very happy with our major acquisitions there, with Noname for API security and Guardicore, you know, for micro-segmentation. There's other areas we're looking at. We're also very disciplined; we're not gonna pay a crazy amount of money for something. You know, it has to be something that'll return real value to shareholders. On the Compute side, we're looking at ways that, you know, we can enhance our capabilities there. It's not quite the same thing as Security, since we're doing a lot of the Compute in-house.
Does anything stand out to you on the Security or the Compute side, you know, something that you would need to have, or an area that you'd like to move into where M&A might be more of a possibility?
You know, generally it would be some kind of product adjacency. It fits, you know, within our current product set. It makes sense to enhance our Security platform play. Something that we think our reps can sell, where we know the buyer because it's close to the buyer of some of our other Security products. Having it fit, you know, with Akamai is the kind of thing that, you know, we'll be looking at.
Okay, great. Anybody else got a question in the audience? No? All right. Kinda to wrap up here, you know, talk to us a little bit about the company. I've been asking all my companies this question: what's one big sorta misperception about the business? I think the obvious one for you guys is that you're not necessarily, you know, a CDN anymore. That's part of the business, but that's not really what you are, even though you're often thought of that way. Just to level set, let's take that one off the table.
When you talk to investors and you're talking to customers, what's the biggest misperception about Akamai that you'd, you know, like to kind of address and set the stage for?
I think the biggest change that investors are starting to understand is that, yeah, we are a cloud company and have a strong capability that's growing at a very fast rate. You know, cloud has been sort of a show-me story for investors, and I think we're showing them. You know, we're getting very fast growth on a meaningful number now, signing up some impressive enterprises. You know, I don't know any other cloud company that has all the hyperscalers as customers of its cloud business. And I think that's a good proof point: if they're buying our cloud services, you know, there's something there. It really does give better performance. And for companies that don't have their own cloud, it is very competitively priced.
Okay. All right, great. Well, Tom, thank you very much for being here. Appreciate everybody coming. We've got a breakout session after this. If anybody's interested, feel free to join us. Thank you very much, Tom.
Great. Thanks.