Good day and thank you for standing by. Welcome to the Rackspace first quarter 2026 earnings webcast. At this time, all participants are in listen-only mode. After the speakers' presentation, there'll be a question and answer session. To ask a question during the session, you'll need to press star one one on your telephone. You will then hear an automated message advising your hand is raised. To withdraw your question, please press star one one again. Please be advised that today's conference is being recorded. I'd now like to hand the conference over to Sagar Hebbar, Head of Investor Relations. Please go ahead.
Thank you, and welcome to Rackspace Technology's first quarter 2026 earnings conference call. I'm Sagar Hebbar, Head of Investor Relations. Joining me today are Gajen Kandiah, our Chief Executive Officer, and Mark Marino, our Chief Financial Officer. As a reminder, certain comments we make on this call will be forward-looking. These statements involve risks and uncertainties which could cause actual results to differ. A discussion of these risks and uncertainties is included in our SEC filings. Rackspace Technology assumes no obligation to update the information presented on the call except as required by law.
In particular, our discussion today will include forward-looking statements regarding our recently announced memorandum of understanding with AMD, including statements regarding the anticipated scope, benefits, commercial potential of the collaboration, deployment timelines or financial projections, the expected execution of definitive agreements, and the anticipated impact of the partnership on our business, financial results, and capital structure.
The MoU represents a non-binding framework only and does not constitute a binding commitment by either party to complete any specific transaction, financing, or other commercial arrangement. No definitive agreements with AMD have been reached. Discussions remain preliminary, and there can be no assurance that any such arrangements will be entered into, that the parties will reach agreement on terms, or that the anticipated benefits of the collaboration will be realized. Any third-party financing required to implement the transactions contemplated by the MoU is subject to the availability of financing on acceptable terms.
There can be no assurance that any such financing will be obtained. Our presentation includes certain non-GAAP financial measures and adjustments to these measures, which we believe provide useful information to our investors. In accordance with SEC rules, we have provided a reconciliation of these measures to their most directly comparable GAAP measures in the earnings press release and presentation, both of which are available on our investor relations website. I will now turn the call over to Gajen for an update on the business.
Thank you, Sagar. Last quarter, I said Rackspace was moving beyond being an infrastructure provider to becoming the orchestrator and operator of enterprise AI in regulated environments. We laid out three specifics: a partnership with Palantir anchored by a core build-out of forward deployed engineers; a technology stack spanning infrastructure, resilience, and AI, with VMware as the control plane, Rubrik for cyber resilience, and Palantir as the data and AI platform layer; and accelerating demand for Private Cloud in regulated environments. The results this quarter reinforce the strategy we've been executing against, what we call where enterprise AI goes to production: governed infrastructure as the foundation, an integrated technology stack of curated partners on top of it, and one accountable operator running it end-to-end. Every win this quarter sits inside that frame. We secured regulated and sovereign Private Cloud deals across healthcare, telecoms, and financial services.
We also closed our first joint Palantir deal in 41 days, a U.S.-based solar tracking manufacturer where the problem was costly and quantifiable: 16.5 days to move from a customer inquiry to a signed quote, burdened by manual intake and fragmented handoffs. Our FDEs deployed AI-enabled workflows on Palantir Foundry directly inside the customer's environment, reducing the quoting cycle by 94% and earning an expanded engagement to extend the FDE model into EMEA. We are also deploying Palantir inside Rackspace, running end-to-end business workflows on Foundry natively. We are not just recommending Palantir to customers, we are operating our own business on it. We continue to expand our partner ecosystem. Today, I am pleased to announce the signing of a memorandum of understanding with AMD that establishes a new category of governed enterprise AI infrastructure.
We are integrating AMD Instinct GPU accelerators, AMD EPYC CPUs, and the ROCm software ecosystem into a fully managed, governed technology stack, purpose-built for enterprise, including healthcare, financial services, and sovereign environments where security, compliance, and accountability are non-negotiable. The MoU establishes AMD as the launch silicon across our four integrated capabilities: Enterprise AI Cloud, our fully managed private, public, and sovereign AI environment with one operator accountable across the stack; Enterprise Inference Engine, a context-aware inference runtime that retains domain knowledge, session history, and enterprise-specific data context across queries, with Rackspace owning the SLA; Inference as a Service, dedicated accelerated compute as a governed alternative to commodity GPU rental, launching with AMD Instinct; and Bare Metal Accelerated Compute, launching with AMD Instinct for training and inference workloads requiring deterministic performance. Production inference is heterogeneous.
Frontier models run on GPU; small language models, classical ML embeddings, and many domain-specific workloads run more efficiently on CPU. AMD is the partner that brings both Instinct GPUs and EPYC CPUs inside one integrated architecture, which lets us route each workload to the right compute. That is what production economics requires. This puts Rackspace in a unique category. The market today is dominated by commodity GPU rental, where capacity is sold by the hour and the customer carries the burden of integration, security, and accountability. We are building the opposite. AMD's leadership in open high-performance AI acceleration, combined with our operator-grade outcomes as a service model, delivers governed AI infrastructure that is accountable from silicon to outcomes. We expect the definitive agreement with AMD to be executed in the near term. Governed infrastructure is where enterprise AI either succeeds or stalls.
When AI works with patient records, financial data, or sovereign information, where that data sits and how access is governed determines compliance or exposure. That is why Rackspace's over 25-year history managing data centers and infrastructure is more important than ever, and this is why one of the largest Epic environments runs on Rackspace. The second reason enterprises choose us is how we handle technical complexity. Enterprise AI Cloud is not a single component problem. It takes data, compute, models, Small Language Models, inference, and governance working together in real time. If even one element in the technology stack is off, cost per token skyrockets and operational risk increases. We solve this by integrating each vendor's IP, making technologies fit together and operate as one. The third reason is accountability. In a fragmented Enterprise AI Cloud vendor ecosystem, nobody owns the outcome or takes responsibility when something breaks down.
We solve that by being one accountable partner in the eyes of the customer, responsible for how the system performs and the outcome it delivers. That is why we are seeing momentum across the business. At our core, Rackspace is a data center and infrastructure company. We own and operate the physical infrastructure that enterprise AI runs on. That foundation, combined with our ability to take end-to-end accountability for AI in production from governed Private Cloud to AI inference and agents in production, is exactly what our enterprise customers are looking for. With that, let me get into our business performance, starting with Private Cloud. First quarter Private Cloud revenue was $235 million, with first half revenue on track given the timing of a large deal onboarding within our healthcare vertical, consistent with the dynamics we outlined last quarter.
Segment operating margin came in at 24.7%, up 30 basis points year-over-year, driven by continued cost discipline. Our customer wins this quarter tell a consistent story. Enterprises in regulated industries are choosing Rackspace to modernize and operate environments where governance, reliability, and compliance are non-negotiable, and where those environments increasingly serve as the foundation for AI adoption. For example, in financial services, we secured a long-term recommitment from a leading global online trading platform, modernizing core infrastructure through software-defined private cloud, improving resilience and user experience in a latency-sensitive, highly regulated environment. In healthcare, we signed a multi-year agreement with a major U.K. NHS Foundation trust to migrate and operate workloads in a sovereign healthcare cloud with full outcome as a service and security embedded from the outset. This quarter, we expanded our relationship with AdventHealth, a long-standing customer.
We already host and manage the infrastructure of their Epic EHR, one of the top five Epic systems in the world. This quarter, we expanded our relationship to host and manage over 400 additional workloads on Rackspace Private Cloud. Healthcare is one of our most important verticals and one of the clearest expressions of our strategy. Epic Managed Services is proprietary Rackspace IP, purpose-built for the governance, performance, and uptime that clinical environments demand. As regulated healthcare organizations move from AI experimentation to AI in production, where data sits and how it's governed becomes the defining question. That is exactly the environment we are built to operate. This extends into sovereign markets. In Saudi Arabia, our partnership with SDAIA places us inside one of the world's most advanced national AI programs, built on in-country infrastructure, jurisdictional accountability, and managed operations.
In the U.K., BT recently selected Rackspace as the infrastructure foundation for BT Sovereign Cloud, positioned as the U.K.'s first full suite of sovereign services hosted and operated entirely within the U.K., with security-cleared operations teams and managed services covering migration, operations, and ongoing compliance. That is the kind of public anchor that validates our sovereign thesis. These are environments where AI cannot be deployed without full control over data and infrastructure, and they are increasingly central to how sovereign and enterprise AI is deployed. What makes these environments possible at scale is VMware Cloud Foundation 9, the control plane at the center of our governed AI strategy. It unifies compute, storage, networking, and security into one operating substrate with native AI workload support, data residency controls, and policy enforcement that meets regulated and sovereign requirements out of the box.
Our deepening partnership with Broadcom around VCF 9 is one of the most strategic commitments we are making this year because it gives our customers a single control plane that travels with the workload, with elasticity to public cloud where it makes sense. Running on top of that foundation is where our AI platform partnerships come to life. This quarter, we expanded our relationship with Uniphore, adding agent-based workflows to our governed AI technology stack. Together, we are building context-aware inference, a capability that retains domain knowledge, session history, and enterprise-specific data context across queries, so that AI agents and large language models perform with the consistency and institutional memory that production environments require. Like Palantir, our engineers are trained on the Uniphore platform and embedded directly inside customer environments. We are not just orchestrating infrastructure, we are orchestrating outcomes.
VCF 9 as the control plane, Dell for core infrastructure, Palantir and Uniphore for governed AI and agent workflows, Rubrik for data resilience, AMD for enterprise-ready compute. Each partner is best in class, but the value Rackspace delivers is making them operate as one integrated system with full accountability for how the system performs and the outcomes it delivers. Looking ahead, the next phase is already emerging. As enterprise AI evolves towards agentic workflows, where machines interact with machines and processes run end to end without human intervention, the demands of governed infrastructure become even more acute. Training will largely sit with specialized providers, but inference, particularly context-aware inference on regulated data, is where production enterprise AI lives. That is the workload we are built to operate.
As customers develop a clearer picture of their data residency requirements, more of those workloads will move into governed Private Cloud, deployed across our global data center footprint in the jurisdictions and sovereignty zones our customers require. That is why we are doubling down on VCF 9 and Broadcom this year. Our full year Private Cloud growth outlook remains on track. We have signed engagements with AdventHealth, Seattle Children's, and a strategic database as a service partner onboarding through the rest of the year. We are also seeing encouraging pipeline momentum on our Palantir and Uniphore partnerships, where context-aware inference and governed agent workflows are gaining traction at deal sizes that we have not historically seen. The AMD partnership announced today adds a further layer of future optionality as governed AI compute becomes more central to how regulated enterprises operate.
Together, these give us confidence in the full year Private Cloud growth profile we are reaffirming today. Now for our Public Cloud update. First quarter Public Cloud revenue was $443 million. Services revenue grew 10%, reflecting our continued shift towards higher value engagements. Our customer wins this quarter highlight the breadth of our platform capabilities and our deepening presence in the AI space. We are powering a large scale, enterprise-wide multi-cloud transformation for a leading healthcare technology organization. Through a governance model, we are delivering program-managed migrations, modern architecture, intelligent automation, and measurable cost optimization, ensuring each workload is placed on the right platform for the right reasons.
Second, Rackspace is serving as the implementation and managed services delivery engine for a high growth AI native database as a service partner operating across both Public Cloud and Private Cloud environments. Our execution capabilities are a direct accelerant to our partner's client acquisition and market expansion, reflecting a high value compounding partnership driving differentiated multi-cloud database as a service outcomes. Our service portfolio is built for where enterprise AI is headed: production, not experimentation. We are embedding engineers directly into customer environments, moving from strategy to live deployment in weeks, with governance and accountability built in from day one. New partnerships expand our ability to deploy context-aware inference, governed agent workflows, and forward deployed engineers inside customer environments, giving enterprises a governed path from strategy to inference workloads in production.
We are complementing this with purpose-built capabilities in AIOps, identity security, and data resilience, addressing the operational and security demands that become non-negotiable once AI moves into production environments. In summary, Public Cloud is executing. As inference workloads move into production, we are increasingly positioned as the partner enterprises rely on to operate, secure, and optimize their cloud environments with full accountability to match. The results this quarter confirm the thesis. Governed AI infrastructure as the foundation, an integrated technology stack of curated partners running on top of it, one accountable operator responsible for the outcomes. That is what today's Rackspace delivers. With that, I will turn it over to Mark for our financial results.
Thank you, Gajen. In the first quarter, total company GAAP revenue was $678 million, up 2% year-over-year, driven by solid Public Cloud performance. Non-GAAP gross profit margin was 18.3% of GAAP revenue, down 160 basis points year-over-year, reflecting the Private Cloud revenue timing dynamics we discussed. Non-GAAP operating profit was $31 million, up 20% year-over-year, driven by continued operating expense discipline. Non-GAAP loss per share was $0.06, flat year-over-year. Cash flow from operations was $5 million, and free cash flow was negative $9 million. We ended the quarter with $94 million in cash and $295 million in total liquidity, inclusive of the undrawn portion of our revolving credit facility.
During the quarter, we repurchased approximately $96 million of debt, reflecting our continued commitment to disciplined capital allocation and active deleveraging. This reduces our interest burden and strengthens our overall capital structure. We are making deliberate progress on leverage reduction while continuing to invest in strategic growth. Turning to our segment results. Private Cloud GAAP revenue for the first quarter was $235 million, down 6% year-over-year, reflecting the timing of large deal onboarding within our healthcare vertical, consistent with the dynamics we outlined last quarter. Non-GAAP gross margin was 36%, down 110 basis points year-over-year, driven by lower fixed cost absorption on reduced revenue. Non-GAAP segment operating margin was 24.7%, an improvement of 30 basis points year-over-year, reflecting continued operating expense discipline.
In our public cloud segment, GAAP revenue was $443 million, up 7% year-over-year, with services revenue growing 10% year-over-year. Non-GAAP gross margin was 8.9%, down 60 basis points year-over-year, reflecting higher infrastructure costs. Non-GAAP segment operating margin was 4.7%, up 50 basis points year-over-year, driven by improved operating expense efficiency. Now on to our guidance. We are reaffirming our full year 2026 guidance in its entirety. Revenue, EBITDA, and cash flow outlook all remain unchanged. The Q1 Private Cloud timing we described is fully reflected in our annual plan, and our confidence in the full year outlook is unchanged. We continue to win larger, complex engagements that carry longer deployment cycles but deliver greater revenue visibility, higher lifetime value, and more durable recurring revenue streams.
As they come online throughout the year, we expect private cloud to reflect the growth profile we committed to for 2026. With that, I'll turn it back over to Gajen.
The market is trending in line with our expectations, and this quarter we delivered proof across every layer of that thesis. Regulated enterprises are making a deliberate decision about where their AI runs, who operates it, and who is accountable for outcomes. Healthcare is now a pillar. One of the top five Epic workloads in the world runs on Rackspace governed AI infrastructure. Epic Managed Services is proprietary Rackspace IP, decades in the making, and increasingly the foundation our healthcare customers are choosing as AI moves into production. Sovereign is validated. BT Sovereign Cloud runs on Rackspace governed AI infrastructure. SDAIA in Saudi Arabia places us inside one of the world's most advanced national AI programs. These are anchor commitments, not pilots. The technology stack is complete, and this quarter we extended it further.
VMware Cloud Foundation 9 as the control plane running across private, public, edge, and sovereign environments. Palantir for governed data and AI operations, with our first joint deal closing and a growing pipeline. Uniphore enabling agent-based workflows with context-aware inference. Rubrik for data resilience. AMD, where we are establishing a new category of governed Enterprise AI infrastructure, delivering four integrated capabilities from silicon to outcomes: Enterprise AI Cloud, Enterprise Inference Engine, Inference as a Service, and bare metal AMD Instinct. One integrated system with an investment-grade counterparty co-invested in our success, and Rackspace accountable for how it performs end-to-end. We are the operator of the full Enterprise AI technology stack. One accountable partner where Enterprise AI goes to production. That is Rackspace. Thank you to our customers, partners, and every Racker. With that, back to Sagar.
Thank you, Gajen. Let us begin the question and answer session. Operator, please go ahead.
As a reminder, to ask a question, please press star one one on your telephone and wait for your name to be announced. To withdraw your question, please press star one one again. Our first question comes from Kevin McVeigh with UBS.
Great. Thanks so much. Good morning. Let me start just by congratulating you folks, because obviously there's been a lot of work to get you to this level and a lot of patience, and, you know, that needs to be recognized. I just wanted to highlight that, because there's a lot that's going into the results that are here today. There was an incredible amount of detail, Gajen, but maybe talk to how AMD dovetails into Palantir. It sounds like the MoU is pretty far along. What else needs to be done, I guess, to get it across the goal line?
Sounds like it is, but, you know, is there anything, you know, in terms of we should look for just as that officially gets signed, or is it officially signed? It just again, it seems like it's pretty far along, but just if you could help us with that a little bit.
Hey, Kevin, thank you, and appreciate your comments. Now look, when we look at this, you know, I would think about Palantir and AMD as somewhat distinct from each other. Starting with the Palantir relationship, you know, that's really all about deploying and running customer workflows for the customer with Forward Deployed Engineers, somewhat independent of what compute platform it runs on, right? Really think about compute more as what's the most efficient place to run any given workload.
The AMD piece, first and foremost, gives us CPU and GPU, which I think, as we move further into inference and production workloads, allows us to deliver in an efficient manner across the CPU, GPU stack. In terms of the partnership itself, I think we are, you know, certainly well along the way there. You know, I think we still need to get the financing locked down and, sort of, you know, tightened up, but we feel pretty confident that we are on our way to getting that done. Hopefully we get it announced here in the near future.
We feel pretty good about it.
That's super helpful. Just, Gajen, if you could remind us the capacity in the Private Cloud versus you know, the Public and, you know, as these initiatives kind of scale, particularly AMD and Palantir, is that primarily across the Private Cloud as opposed to the Public? You know, just maybe help us understand that a little bit because obviously there's a, there's a lot to digest and just a really, really nice outcome.
No, great question, Kevin. You know, this is sort of where the market confusion is, at least I think of it that way, right? Customer workloads are gonna run across private and public, depending on where that workload needs to land, right? That's why our VCF 9 partnership, the Broadcom-VMware partnership, gives us, think of it as, the control plane across which we could somewhat elastically drive the workload, whether it be in private or public cloud. Capacity-wise, you know, we have the partnerships on the public side, and now we have the partnership and, you know, hopefully here soon, the compute side up and running from a GPU perspective as well.
Which allows us then to really be somewhat agnostic with the customer, really focus on what specific outcome they want, and then how do we deliver that in the most efficient way for them, across either a CPU or a GPU landscape, and that could be private or public, right? Like you said at the beginning, you know, Kevin, there's a ton of work that goes into figuring all of this stuff out. You know, part of the challenge our customers have, right, is to think all of that stuff through, right? In terms of, you know, whether you're building a Small Language Model or running on a Large Language Model. You know, where do you run the inference? How do you orchestrate that?
How do you ensure that it's running as efficiently as possible, as secure as possible, that data residency is thought through, all of those. You know, our ambition is, you know, to take that complexity off the table for them, with our forward deployed engineers to really enable, support, and accelerate their journey to become, you know, more AI enabled or operate on a fully AI stack. That's the opportunity we saw, and our customers are really guiding us through this. We are pretty excited about it.
No, it's amazing. Just one more. I wanna be respectful of your time, but, you know, it sounds like, you know, any sense of how this starts to kind of fan in? It sounds like maybe the back half of 2026. Is there any way to think about kinda just what type of margin this work would be coming in at? I know it's probably relatively, maybe a tougher question, but just any way to think about that, and then what potential capital needs you could have as you're standing some of this stuff up?
You know, think of it this way, Kevin. There are four distinct capability sets, if you will, for lack of a better way to put it, that we are bringing to market, right? There's governed Private Cloud on AMD silicon, right? Think of that as we own the entire outcome for our customer, in partnership with our customer, so they don't think about anything that sits in between, right? You know, that would be, if you think of it through the lens of margin, probably our most profitable business.
You know, then there's context-aware inference, which is really the next level of, you know, business, where you are driving domain-specific data through inference and maintaining that domain data throughout the entire process. That's probably your next tier when you think about margin coming down, if you will, right? Then there is Inference as a Service, where purely we are providing the tokens or the intelligence and customers are using it through an API. Lastly, a lot of what the neo clouds do, which is, you know, bare metal, right? Which is probably your lowest end on the margin, right? Yeah, I think that as we ramp up, we will see our business sort of fluctuate across these four areas. Obviously, our intent is to end up with, you know, fully managed governed outcomes.
There's a journey to get there, and I think that's something we need to work our way through before, you know, we can give, you know, clear guidance around how that plays out.
Yeah. Hey, Kevin, this is Mark. I would agree with that. I also think that, you know, it's going to be largely on par, if not accretive to existing gross margin rates across our private cloud business. Just in terms of timing, you know, this is not something that we've got materially factored into our 2026 guidance, right? Just in terms of supply chain and delivery timing.
Well, listen, it sounds like you're well on your way. Again, congratulations. Thank you.
Our next question comes from David Paige with RBC Capital Markets.
Hi. Good morning. Thank you for taking my question, and congrats on the great results here. I guess just at a higher level, it seems like Rackspace is moving in the right direction. You're moving not only, you know, internally as a company, but where the industry is going in terms of, you know, CPU, GPU, running SLMs, LLMs, et cetera. I'm just curious, you know, you seem like you're the first, you know, you're the leader, but I guess, how's the competitive environment looking? I guess as a follow-up, you mentioned the pipeline is strong, so should we expect more deals in the future? Maybe just flesh that out a little bit. Thank you.
Sure. Good to meet you, David, and thank you for your comments as well. Now when I think about where we are, the orientation of the business right now is very much along the lines of helping customers really understand how, you know, how they want to run AI workloads, right? If you think about where we sit today in our Private Cloud business especially, a lot of the customer workloads that are regulated run on our environment. You know, the ability for us to guide them from there onto running AI-based workloads is where we are seeing the most opportunity.
When you look at the partnerships, right, either on the application stack, with Palantir and Uniphore, or on the compute stack, they give us a much more integrated view of trying to tie all of this together. Not trying, but tying all of this together and delivering it. When you think of your first question, in terms of the competitive environment, I, you know, I haven't seen anyone yet that is able to put all of this together in one place and then own the outcome, right? I think that makes a distinct difference, especially in a regulated or sovereign environment, because I think it becomes, you know, significantly unique.
To give you an example, you know, when I say governed in healthcare, it means HIPAA compliance, PHI security, clinical SLAs, right? All of that has to be put onto the same platform, integrated, and then delivered, right? I'm sure there will be competitors that show up, but having the consulting, the forward deployed engineers, the infrastructure, the compute, and the partnerships all stitched together, I hope it gives us a little bit of a lead and an edge in terms of where we sit. Sorry for the long answer, but hope that makes sense, David.
No, that was very helpful. Thank you. I agree, it does seem like you have that leadership position, which is great. I guess, yeah. No, thank you. That's helpful.
Thank you.
Yeah, maybe one more. There were some comments about the capital structure. It looks like it's getting into a better place. Just how should we think about the capital structure over the next 12 to 24 months just evolving? Thank you.
Yeah.
Hey, David, this is Mark. Look, our motivation, our intent, is deleveraging, right? That's our top priority as we think about some of the deals we've announced and some of our capital requirements for this year, right? The intent is, ultimately, you know, we've got our eye on the 2028 maturity, the debt stack that's gonna be due in the middle of 2028, and getting delevered through, you know, an increase in operating leverage, EBITDA, as well as additional cash flow. As we structure some of these deals, right, the intent isn't to take on more expensive debt to add to our existing debt maturities, but to, you know, structure things in a way that doesn't create further leverage. Right?
We have decreased our leverage, I think, from 8.6 to 8.3 quarter-over-quarter, right? We continue to stay focused on, you know, the out quarters and finding ways to delever, right? You'll notice in the quarter we actually repurchased some of our debt, roughly $96 million notional, at a pretty significant discount, right? We're looking for ways to deploy capital such that, you know, we can reduce that and get ourselves to a refinanceable position over the next probably 18 months.
Great. Thank you. That's very helpful. Congrats on the momentum and looking forward to working together.
Likewise. Thank you.
That concludes today's question and answer session. I'd like to turn the call back to Sagar Hebbar for closing remarks.
Thank you everyone for joining us. If you have any questions, please email us at ir@rackspace.com. Have a great rest of your day. Thanks, Liz.
Thank you. This concludes today's conference call. Thank you for participating. You may now disconnect.