All right, so, I think we're gonna get going in the interest of time. I know people are still moving around, but it's my pleasure to introduce Matt Garman, the CEO of AWS, representing AWS as part of Amazon.com today. First, I'm gonna read a safe harbor, and then Matt and I are gonna get into a conversation. During the conversation today, Matt will make forward-looking statements in addressing the questions, and factors that could cause actual results to differ materially are described in Amazon's periodic SEC filings, which are readily available, including on their website. So, Matt, thanks for being part of the conference this year.
Yeah, thanks for having me.
Okay, so why don't we start with you sharing your background and the journey you've been on to the point where you became the CEO of AWS in the not-too-distant past? I think it's sort of an interesting story and journey to tell.
Sure, yeah. So it's been three months now since I took over the CEO job, but I've been at AWS for eighteen years. Actually, my first interaction with Amazon was when I was at business school: in 2005, I did my summer internship for Andy Jassy, who was doing an internal startup inside of Amazon, which was AWS, and that was my intern project. And then I came back as the first product manager for AWS, and that's what I've been doing for the last eighteen years. I started out leading engineering teams and built many of the core services, helping on things like EC2 and networking and compute and storage and a bunch of our core AWS building blocks.
Then, four years ago, I switched roles, moved away from product and engineering to lead sales and marketing globally, and then took over this job about three months ago.
Okay, so I think one of the biggest investor debates we have all the time is where we are in terms of broad cloud computing adoption.
Mm-hmm.
So can you level-set your worldview on that and how you see AWS's position in the broader cloud computing landscape?
Yeah, look, we're about eighteen years into the cloud now, so it's a pretty well-established technology, and yet the vast majority of workloads have yet to move to the cloud. You know, you all probably have lots of estimates as to how many workloads are still on-premise versus moved to the cloud, and you probably spend more time thinking about that than I do. But I think you'd be hard-pressed to find somebody who thinks it's more than 10%-20% of the workloads out there.
And so what that means is there is a massive set of workloads that are still running in on-prem data centers, and you all at Goldman are excellent AWS customers of ours, thank you, and you run lots of data centers yourselves still, and there's still lots of stuff to go. That is just the nature of the industry that we're in, a lot of these workloads, whether it's because they're running on mainframes or because you have assets that haven't been fully amortized, or you just haven't fully moved things, or it's the technology. Think about telco infrastructure that's out there at RAN sites and things like that, that hasn't yet been cloud-enabled, and it's still kind of traditional infrastructure, or at least for the most part.
And so the vast majority of workloads haven't moved yet, and we're still at the very early stages of that. We're spending a lot of time helping customers because, that said, if you could give most customers an easy button, you know, one of those buttons that you just push and it happens magically, most would move those workloads in a heartbeat. So we're really helping customers understand how they can move more quickly and get their workloads into the cloud, because the agility you gain, and the ability to adopt new technologies much more quickly and take advantage of everything out there, is so much easier when you're running in the cloud than when you're having to buy your own gear and run it in your own data centers.
'Cause it turns out, if you buy a server and stick it in your data center, you're on that server for the next five years. You have no flexibility to take advantage of new technologies or new capabilities, and particularly in the world of AI, most of that is operating in the cloud today, so much of that is also pushing people to move to the cloud more quickly.
Okay, before we get to AI, maybe let's build on that last answer. You know, in your view, what are the key differentiators that allow AWS to keep winning new customers and growing revenue share with existing customers when you look at the landscape right now? So sort of the differentiation point.
Yeah, look, I view it as how we approach our customers out there, and it's really no different from when we first started the business eighteen years ago. We think our differentiation is a couple of things. Number one is we listen to our customers, and we build what our customers ask us for. And when you talk to almost any customer of any type, whether it's a startup, a large enterprise, or a government customer, across the board, the most important things they're looking for are outstanding operational excellence and world-class security, and then a partner who's gonna be very focused on them and help them get through problems.
And then folks say, "Great, if you have that baseline, and those are the most important things that you focus on, and I can trust my business to you, then I'm interested in how you're helping me innovate more rapidly, how you're building new technologies, and how you're really leaning forward." So that's how we approach customers today, whether you're a small customer or the very largest customer. We say security is first. It's not bolted on after the fact. It's not because we've had a bunch of security issues and now we guess we have to focus on it. Security and operational excellence have been our focus from the very first days of AWS. And then we just focus on customers, and we listen to our customers.
We really listen to where the problems are, where the technologies and things that are not working today are, where your pain points are, and how we can help innovate so that every single one of our customers can focus on the things that make them interesting and unique, as opposed to what we call undifferentiated heavy lifting: the pieces of the technology stack that really don't differentiate your company, as opposed to the IP and things you build on top of it.
That's been how we approach customers from day one, and I think it often resonates with customers. They love that that's how we show up in their business: not because we have onerous licensing terms, and not because they feel like they have to use us, but because we're the best solution to help them move their businesses forward. That's how we've grown the business to today. It's why we see the business accelerating from where we are; even though we're already at a $105 billion run rate, we still see the growth accelerating, and we're quite bullish about where the future lies.
So just maybe one follow-up there. Can you isolate any products and services where you believe you're prioritizing that are driving potential positive customer outcomes or driving innovation and adoption across AWS right now?
Yeah, I think if you look even at the base layers, at compute and storage and databases, AWS has been innovating for the last decade at a level that others haven't. Ten years ago, we went on a path to start innovating on our own custom silicon. We started innovating at the very base layers of hypervisors and virtualization and networks and data centers and power infrastructure and supply chain, across the board. These aren't necessarily glamorous things, but they're very differentiating. It means we have a security posture that's very different from anyone else's. It means we have a cost structure that's different from anyone else's. It means we have custom-made processors, where we can deliver outsized performance and better price-performance gains than anyone else.
Then we think about how we continue to build on top of that. I think for a long time, many of our competitors were much more focused on protecting their legacy business than on innovating. In the database world, as an example, we leaned into open source from the very beginning because we didn't want people locked into our products with proprietary licensing. We wanted to have a scalable, well-run, excellently operated database for customers to use. So we were free to innovate on a number of different levels, whether it's a NoSQL database or a cloud purpose-built database like Aurora. And that's true across our sets of products. We really lean into how we build great products for customers.
That mentality has allowed us to differentiate ourselves. If you look across the board, we have the absolute best compute layer, with Graviton, with Trainium, with Intel, with NVIDIA, with AMD, all fantastic partners, because we really focus on: How does that compute stay available? How does it have great performance characteristics? And that is differentiating versus everyone else. The same is true at the network layer. At the storage layer, S3 was the first service that we launched, and we have continued to invest heavily to improve performance, reduce costs, and continue to scale out with the world. And that's true across almost every single product that you look at, whether it's analytics, monitoring, compute, storage, et cetera.
And of course AI services, though I'm not trying to jump ahead to your question. All across the board, we relentlessly focus on innovating for customers and listening to what they need. So when customers tell us they have a new problem or they're not seeing their needs met out there in the market, we listen. You know, a lot of companies will tell you that they listen to their customers, and then they don't, or they don't actually internalize it. I don't know that it's really a secret of Amazon's, 'cause I think we're quite open about doing it, but it's actually quite hard to do in practice, and it's one of the things that I think we do quite well and that really differentiates us.
That differentiation, I think, is at that core level, 'cause anyone can point to point-in-time features that are different from anyone else's, but it really is that core underlying practice of listening to your customers and continuously innovating, built on top of that layer of security and operational excellence, that makes a difference. It's why enterprises often stay with us. They may even try other clouds, and they'll often come back, and they'll continue to grow. That's what we've built the business on and where we continue to see success.
Okay, really clear. I do wanna turn to generative AI. I think probably the biggest debate at the conference this year and recently with investors is where generative AI is going over the longer term. So can you lay out your vision for how generative AI capabilities will be adopted and utilized by customers across infrastructure, model, and application, and how you think about the market opportunities around those different layers of computing?
Yeah. Look, I think you heard Lisa talking about it a little bit right before. I am incredibly excited about this technology. Over time, it's going to change, to some level, almost every single industry that all of us focus on and think about and work on every single day, and I really believe that. It's every single industry. You know, I think in some ways the early splashes of generative AI, like a cool chatbot that can write you a haiku, miss the actual value that you're gonna get. And early on, a lot of the value that companies are getting is efficiency gains, which are fantastic, but they're early, right?
We have a product with Connect, which is a cloud call center, by far the most popular contact center out there in the cloud, and having AI throughout that makes customers much, much more efficient. It helps them lower cost. It means they don't have to have as many agents. They can help their customers more rapidly. Fantastic. But I think that is scratching the surface of where the real value is gonna be over time. As I talk to customers out there, as they get deeper and deeper into thinking about the core of their business, and it turns out to be very industry-specific, you're really unlocking capabilities that I think have never been possible before.
And that's a hard thing for people to get their heads around: things that were never possible before. But you talk to a pharmaceutical company that's using AI to actually invent and discover new proteins and new molecules that may be able to help cure cancer or other diseases, and at a rate tens of thousands or hundreds of thousands of times faster than a person sitting there with a computer trying to guess what the next protein could look like to solve a particular disease. That is just a fundamentally different capability than ever existed before, and it has massive implications for healthcare. But you can go on down the list. You can think about financial markets that are using generative AI to do fraud detection.
Nasdaq is doing a bunch of this, where they look at some of their market analysis and use AI models to find fraud that they weren't able to detect just a year or two ago. So that has fundamental improvements in how they're able to run their business. Here's a good example we launched recently: Central Japan Railway is launching a new bullet train, okay? It's gonna go upwards of 300 miles an hour, so twice as fast as the current-generation bullet train, which is pretty unbelievable. It's a little scary when you see trains moving that fast.
And so what they do is instrument those trains: the rails, the electronics, and the actual cars all have a ton of sensors, and they ingest all of that sensor and IoT data into AWS. Then, using SageMaker, they build AI models to predict where they're gonna have maintenance issues. From little changes in how things are operating, they can actually proactively predict weeks in advance where they might see components fail. And then, using generative AI, they pull from a bunch of different data sources and give the technician advice as to how to address it, so the person can go out there and quickly fix any issues proactively and the trains keep running.
Something as traditional as a train, albeit a really fast, cool bullet train, can be completely redone and made possible by some of the generative AI technologies. Again, I think we're just scratching the surface. We could probably sit here for the next hour and talk through really cool use cases, some of which are possible today, some of which are hinted at today and require the technology to continue to advance, but that's where it's going.
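The predictive-maintenance loop described above, ingest sensor data, flag readings that drift from the baseline, dispatch a technician, can be sketched in a few lines. This is a hypothetical illustration, not JR Central's actual system: the signal, window size, and threshold are made up, and a rolling z-score stands in for the trained SageMaker model.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag sensor readings that drift far from the recent baseline.

    A real system would feed features like this into a trained model;
    here a rolling z-score stands in for the model.
    """
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flags.append(i)  # index worth a proactive inspection
    return flags

# Stable vibration-like signal with one sudden excursion at index 30.
signal = [1.0 + 0.01 * (i % 5) for i in range(25)] + [1.0] * 5 + [5.0] + [1.0] * 5
print(flag_anomalies(signal))  # → [30]
```

The point of the sketch is the shape of the workflow: the model's job is to turn a stream of small deviations into a short list of "go look at this component" flags, which is what lets maintenance happen weeks ahead of a failure.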
Okay. With this pivot towards generative AI, how does this change your go-to-market strategy with respect to the customer? And how does Amazon Bedrock factor into your broader AI strategy?
Well, there are a few ways I would answer this. Number one: if you rewind to about eighteen to twenty-four months ago, before generative AI became front and center, customers in a lot of industries were very focused on cost reduction; they were thinking there was gonna be a recession and really thinking about how to reduce their costs. So we spent a lot of time with customers helping them reduce their bills, whether by moving to the cloud to save on CapEx or other spend like that, or even reducing their cloud bills so that they could actually afford to do new projects.
As customers shifted to generative AI, that focus shifted a lot, and many customers are now rethinking how they innovate, because if they don't innovate, they're gonna be left behind while everyone else gets way ahead of them. So number one, we're helping customers think through how they get real value; again, not just how they put a chatbot on their website so they can tell their board they have a generative AI strategy, but real, actual enterprise value from reinventing and reimagining how their industry operates.
Part of that is also moving beyond just IT: thinking about how we talk to CEOs as they think about their strategy, and how we talk to line-of-business owners who are really thinking about the core of that business. Because if you go back to the pharmaceutical example, it's not the CIO who's worrying about protein exploration or protein discovery, it's the actual scientists and the folks in there making new drugs who are thinking about that.
And so you have to change some of your go-to-market to be a little more industry-focused and a little more line-of-business-focused, because the more you can be really in there with the customers, you can think about how this technology can change the actual industry, as opposed to just, you know, more efficiently running their back-office IT operations. Both of those are equally important, by the way, but when you think about generative AI, it's oftentimes that line-of-business customer and the industry-specific customer that you really have to get in with and understand.
Okay. How should we think about the levels of capital expenditure and investments needed for AWS to achieve its generative AI goals? To what extent does infrastructure need to be re-architected for a Gen AI world?
Yeah. Well, I think, overall, on the spectrum from software to hardware, AWS is a capital-intensive business, and that is the nature of the business that we operate, right? We invest in data centers, we invest in servers, we invest in networking, we invest in that global infrastructure so that our customers don't necessarily have to. And so, as the business continues to grow, there are necessarily capital expenditures to grow data centers, to add power, to add servers. That's just part of the business that we operate in.
You know, one of the things that I'm quite proud of is that over the last eighteen years, Amazon has built kind of a learned expertise, if you will, in supply chain from our retail world. So we apply that to technology, and we think very carefully about that longer-term supply chain: when are we gonna need power, when are we gonna need data centers, and when are we gonna need servers?
And so that part, I think, we've learned over the last couple of decades: how to manage that demand and how to have enough compute power for customers, so that when they wanna grow and they need capacity, it's available for them, but not so much that we unnecessarily spend ahead of demand. The ramp in generative AI adds to that pressure, and I think it adds to the opportunity for us, too. But we're pretty disciplined in how we do that, and we think we have a pretty good model for balancing those expenditures with revenue growth to capture that opportunity for the business.
One of the things we benefit from is that we have been investing for more than a decade in custom infrastructure, which means that we own more of that cost. I'll use one example. I think fifteen years ago, we started building our own network devices, so instead of having to rely on third-party supply chains and third-party vendors for load balancers or networking gear, we build them ourselves, out of normal compute boxes with software on top, and build our own systems that way. Then we went into building custom chips, and we built custom chips for our own virtualization technology, which we call Nitro, and that means we don't have to buy those from third parties.
That allows us to lower our cost. Long ago, many of the folks in the industry, and in AI in particular, really leaned into InfiniBand because they thought it was the best-performing network you could get, which is true if you're going to run a small cluster that you custom-configure yourself. We saw long ago that if you really wanna run at scale, at these really large scales, and operate in a really efficient way, Ethernet was gonna be a much better path over the long term.
And so we've invested in high-performance Ethernet for the last decade-plus for HPC systems, and now we have Ethernet networking for building large AI training clusters that will often outperform InfiniBand on an absolute performance basis, at much lower cost, with much more efficiency, much better operability, and much better uptime. Those are some of the investments we've made along the way that allow us to lower some of those capital expenditures and grow more efficiently than we otherwise would have.
Okay, interesting. Can you discuss AWS's strategy around silicon partnerships?
Yeah
... and building your own custom chips for AI-
Mm-hmm
alongside those partnerships?
Yeah. Look, I think a lot of times people enjoy a narrative where they say, like: "How are you possibly doing your own chips when you have other partners who have their own chips?" And it turns out customers like choice, and we've believed that from the earliest days of AWS. And so we firmly believe that AWS is the absolute best place to run Intel, to run AMD, to run NVIDIA processors, and we think that we can offer some differentiated capabilities by offering our own processors as well. And so we actually started out with our own internal chips, which are Nitro chips that ran our whole virtualization layer and moved all the virtualization off of the core compute into a dedicated side processor.
From there, we built up this expertise, and we launched our very first processor chip, called Graviton, and that has been a wild success. It's a general-purpose processor based on Arm, and we're at Graviton4 now, and Graviton4 absolutely outperforms the best other x86 processors at a 20% lower price. So many of our customers can get 40%-50% price-performance gains, while also using less power and improving their carbon footprint, using Graviton. And it's because we control the whole process. We know exactly where it's gonna run. We don't have to build these processors to run in a general-purpose environment. They're gonna run exactly in our server, exactly in our data center, exactly with our networking stack, and so we can optimize that just for our customers.
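As a back-of-the-envelope check on how a 20% price cut turns into a 40%-50% price-performance gain: performance per dollar is the performance ratio divided by the price ratio, so modest improvements on both axes compound. The numbers below are illustrative, not official Graviton benchmarks.

```python
def price_performance_gain(perf_ratio, price_ratio):
    """Relative gain in performance per dollar versus the baseline.

    perf_ratio:  new performance / old performance
    price_ratio: new price / old price
    """
    return perf_ratio / price_ratio - 1.0

# Illustrative: 20% more performance at a 20% lower price
# compounds to a 50% improvement in performance per dollar.
print(f"{price_performance_gain(1.20, 0.80):.0%}")  # → 50%
```

This is why a chip that is "only" somewhat faster can still be dramatically cheaper per unit of work once the price difference is factored in.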
Now, customers are of course gonna run a huge variety of workloads on it, but the actual hardware environment it runs in is exactly just AWS, and we can optimize like crazy around that. Plus, we have a very good team building the chips. Then, about five years ago, we saw the opportunity to innovate in AI processors as well. And by the way, obviously I'm not sharing any secrets here: NVIDIA makes a very, very, very good processor. It's quite popular, it has done quite well, and AWS is the best place to run NVIDIA-based GPU workloads.
In fact, I understand you're gonna have Jensen here in a couple of days; you can ask him about it. We're partnering super closely with NVIDIA to build a giant AI infrastructure for them to build their own models and run their own test cases inside of AWS, because they realize we have the best operating environment and the best performance for running their own servers. We have a great partnership, and we really lean in together. And we think there are some use cases where our own custom processors can help customers save money. The very first one we launched was called Inferentia, and it was very focused on inference. I can use our own company as an example: Alexa moved all of its inference to Inferentia and saved 70% versus doing it on a standard GPU part.
Not all workloads will work better on our own processors, but we feel very bullish about the opportunity there. Trainium is the newest chip we have out, which is very focused on large-scale training clusters for these AI models. We pre-announced Trainium2, which is gonna be coming out at the end of this year, and we feel incredibly excited about that platform. We think we have the opportunity to really aggressively lower cost for customers while increasing performance. And look, there's gonna be a breadth of processor options for customers for a long time, and we think more choice is better for customers.
Okay, clear. Maybe just coming back to the competitive question I asked before: how do you view AWS's competitive positioning specifically in generative AI at the application layer compared to the infrastructure and model layers?
... you mean like our own applications or-
Yeah.
You know, here's how we think about the application layer generally. If you think about the technology stack, at the very lowest layers we're building compute and storage and databases and data centers. And at that layer of the stack, there are gonna be very few players operating at, you know, hyperscale-cloud level who are able to go build something like that, and we think AWS is by far the best and the largest at doing that.
Then you move up a layer to the services we've built on top of that, some of the higher-level services, maybe something like an Aurora database or a Redshift analytics cluster. That's still kind of in the infrastructure space, but there are more competitors there. And our view is we want customers to run on the very best of the products that are available. So, you know, somebody like a Databricks or a Snowflake and a Redshift are all great options that customers use.
Many of them run on AWS infrastructure, but customers pick and choose depending on their use case, and there are gonna be more of those options out there. Many of them we offer, and many of them our partners offer. Then you get to the application layer. There are, I don't know, tens of thousands, maybe hundreds of thousands of startups; there's a new startup every day building a cool thing at the application layer. AWS will have a few of those, I think. I mentioned contact centers earlier: we have the most popular and fastest-growing cloud contact center in Connect, and that's arguably at the application layer.
It's an area that we thought we had expertise in, and we went into that, and we've done quite well, and customers really enjoy using it, and we have AI infused into it, and it's grown, like, very well. But we also have a huge number of partners, and whether you have Salesforce or ServiceNow or Workday or, you know, a startup that was just funded yesterday building on the application layer, there are gonna be tens of thousands of these applications. And so I think that we will have many successful ones, and I'm super excited about Amazon Q, which is our conversational assistant that helps both developers and enterprises get more value out of their data, and really be more efficient in how they go about working.
We're seeing tremendous upside and tremendous growth of enterprises starting to adopt that technology. But I think we'll just be one of many thousands that are gonna be as successful at that layer, and that's part of how we operate.
Okay. You talked earlier about security being a differentiator-
Mm-hmm
... in the space. Can you give us a little bit of color on to what degree you're building security solutions that are meant for your own network and are internal-facing, versus building security solutions for customers?
Yeah
... and what some of your priorities might be, both looking internally and externally around the security landscape?
Yeah. Look, to answer your question, we do both, but priority number one is the security of our infrastructure. That is something our customers can't do and our partners can't do; we have to own it. It's something we spend an enormous amount of time on and have from the very beginning. For example, and this is what I worked on in my old job, like, a decade ago: we built a custom hypervisor layer that means there is no operator access to a compute instance running in AWS.
If you're running an EC2 instance on a normal hypervisor, there's a layer where the operator can come in and manage the VMs and do things like that; it's how a lot of systems interact with your various virtual machines. We built a hypervisor that doesn't have that. The only way you create and interact with VMs is through APIs, so there is no way for a human to log into a machine and be on the box your VMs are running in. That is a very different security posture than anyone else runs, because we thought of this from the ground up.
When we built the infrastructure for AWS, we thought about security from the very beginning, and we continue to invest enormous amounts of effort and time and resources in securing that infrastructure and services layer, because it turns out it's one of those things you can't bolt on after the fact. I think some of our friends are learning that the hard way and trying to figure out how to go do that. But we're fortunate to have the team that built AWS and to have been thinking about that from the very beginning; it's always been a priority for us.
Now, that said, we also see security in the cloud as a shared responsibility model, and so at the application layer, the customer's responsible for securing their application, right? We're responsible for securing the infrastructure, and they have to secure their application, and so customers can lose keys, customers can leave database ports open, customers can have bad security practices, and so we also build services to help customers really understand how to go manage that, how to monitor for that, and we have teams that will help customers with best practices around that. There's also a rich partner ecosystem, and this is where partners can come in and help customers secure applications.
And so whether it's folks like CrowdStrike or Wiz or Palo Alto, there are a number of security companies out there that build great applications and capabilities to help customers secure their own applications. We also offer services on that front that I think are quite good, but customers will use a wide variety of things there. That underlying infrastructure security, though, is where we spend the vast majority of our time.
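One of the customer-side failure modes mentioned above, leaving database ports open, can be sketched as a simple check. The function below scans a dict shaped like an EC2 `DescribeSecurityGroups` result for rules reachable from `0.0.0.0/0`; the field names match the AWS API response shape, but the group ID and rule values are made up for illustration:

```python
# Sketch: flag security-group rules that leave ports open to the world.
def open_to_world(security_group):
    """Return (from_port, to_port, protocol) tuples reachable from 0.0.0.0/0."""
    findings = []
    for perm in security_group.get("IpPermissions", []):
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                findings.append(
                    (perm.get("FromPort"), perm.get("ToPort"), perm.get("IpProtocol"))
                )
    return findings

# Sample data mimicking one group from a DescribeSecurityGroups response
# (hypothetical GroupId and rules, for illustration only).
sg = {
    "GroupId": "sg-0abc123",
    "IpPermissions": [
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},      # internal only: fine
        {"IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},       # Postgres open to the world
    ],
}

print(open_to_world(sg))  # [(5432, 5432, 'tcp')]
```

In practice this kind of posture check is what managed tooling and the partner products named above automate at scale; the sketch just shows the underlying idea.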
Maybe just one quick follow-up. Are there any areas of focus that you think it's critical that AWS builds to for the longer term around the security infrastructure layer?
Yeah, look, there's a bunch, right?
One of the things about security is you have to keep running, because unfortunately the bad guys don't stop either. So whether you're thinking about quantum-safe encryption, or about generative AI, as promising as that technology is, it also opens up a pretty large security surface area that you have to think about, particularly around how you might leak data or other things from various models. So we continue to have to think about how we push the security boundary on those fronts.
You're seeing more and more sophisticated attacks, so we're always thinking multiple years out about both reactive and proactive security measures: trying to identify where we see patterns, where we see bad guys, where we see different things, and then building ahead of that.
Okay. We only have a few minutes left, so let me end on one sort of bigger picture one. You know, looking ahead over the next 12 to 18 months, how would you frame up the key priorities and milestones you'd like AWS to achieve? And are there any other emerging themes that you think we, as investors, should be paying attention to across the broader computing landscape?
Yeah, there's a lot; I don't know how much longer we have. A few things I'm excited about and that are priorities for me. Number one, we've had that baseline of rock-solid infrastructure, and I'm also excited about really accelerating the pace of innovation and simplification we can offer our customers. Today there is such a dizzying array of things they can pick from that we can be a little more prescriptive, put on that customer lens, and ask: how can I help customers simplify some of that decision-making and focus on what's most important for their business? So I think there's a lot of innovation that happens there.
There are a lot of things we can continue to take on for customers to simplify their lives, so they can really focus on their own business. Look, I firmly believe that as we move into this AI world, most customers are not going to become experts in AI. Most customers are not going to build their own models or spend a trillion dollars building some sort of foundational model; they're going to want to get the benefits out of those models. And those benefits are going to be very closely tied to the unique IP and differentiating data you have for your own enterprise, your own workflows, and your own customers.
Helping customers get the value out of that data, from the technology, in a relatively easy way is one of the things I'm most excited about: helping customers get their data out of silos and into the cloud, where it's available to be used by these models, so they can actually get value out of it while also protecting it, because that unique IP is what's going to matter most to the end customer.
I think there is a ton of opportunity there to really help customers build more value for their enterprises, and if we can make it easier for them, and innovate, and help on the analytics side, and the AI side, and make that accessible to everyone while they get their data in a cloud world, I think that's how you really see the acceleration of real enterprise value that comes from these technologies, and it's what I'm most excited about, probably over the next 12 to 24 months.
Okay. With that, I think we're gonna leave it there.
All right.
Matt, thank you so much for being part of the conference.
Thank you.
Please join me in thanking Matt.