Excellent. Thank you all for joining us. My name is Keith Weiss. I run the U.S. Software Equity Research here at Morgan Stanley. And very pleased to have with us from Snowflake, Sridhar Ramaswamy, CEO. Before we get started, for important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative. All right. So with that out of the way, Sridhar, thank you so much for joining us again.
Thank you, Keith.
I was just saying that you were kind enough to join us. It was like a week after you first got named CEO at Snowflake. It's been a really pretty amazing year in terms of product velocity from Snowflake. So maybe just to kick off, you could talk to us about this past year. Your top priorities coming into the role, how those went for you, and kind of what surprised you, perhaps, over the past year in terms of your time at Snowflake.
Thank you. Excited to be here. Yeah, going in, the board and I and Frank had talked a lot about how we just needed to be in a different gear when it came to product velocity. So that was something that was absolutely top of mind for me. And I'd say we did pretty well. But part of what we also realized was that, with the world of interoperable data formats, there were large changes taking place in the data ecosystem, so we could actually fine-tune our product strategy and aspire to a broader, bigger stake in the data lifecycle. And so I would say that was one thing we were very deliberate about. We wanted to be a player when it came to AI products. I'm very proud of how the team did that, playing to our strengths at Snowflake. We didn't aspire to be somebody else.
So I would say that part went well. The second one, which we had also agreed on, and which ended up taking a fair amount of time, was how do we drive effective go-to-market with new products? That's a little bit of a new muscle for Snowflake, and so we learned a lot. We experimented super, super quickly, whether it was AI or, soon, data engineering. I think those all led to substantial changes in how we thought about go-to-market. And the last one, perhaps somewhat surprising, which I hadn't quite expected: Snowflake had also just undergone a transition year from being a bookings-oriented company to being much more of a consumption-oriented company. In practice, this is a very different motion for a sales team of multiple thousands of people.
And so there's a lot of work to get that fine-tuned, to get a sense of excellence established overall. And we started doing that, driving excellence overall within the company, driving efficiency. We called the internal effort Get Fit. But that was more Q3, after number one and number two had settled down. And I'd say those are the three things that lead to how we are positioned this year: an ambitious roadmap for product with the belief that we can drive product velocity excellently, the ability to take new things to market effectively, along with an increased sense of how we can be efficient, especially in the AI era that's unfolding. And so you see that in a guide that's pretty strong in terms of revenue, but where we can also, almost magically, say we can expand margin at the same time.
Right. Got it. So I think one of the investor discussions from the outside in on Snowflake, and I think what investors are getting excited about on Snowflake heading into 2025, is this combination of product velocity increase. There's more product in the marketplace that positions for these new market opportunities, whether it's AI or data engineering. But at the same time, it feels like the core business is firming. We went through this period of optimization. We went through this period of kind of uncertainty on kind of what's going to happen with data management on a go-forward basis. But this year, NRR was fairly stable. I think it was 126% NRR in Q4. Can you talk to us about that side of the equation, maybe to set the foundation? Do you feel the core business has stabilized?
What's the demand environment out there within those kind of core customers for that core data warehousing solution, if you will?
That's right. I think a lot of what gives us confidence coming into this year is what you said, which is the core business around analytics being very, very strong. It's driven by a number of things. Look, obviously this isn't the greatest macro environment in the world right now. No one's going to say that. But on the other hand, there's our ability to understand customer needs. Some of our best sales folks know customer architecture and customer priorities sometimes better than the customers themselves because they're talking to everybody. And using that to drive value in all of the right places within the customer, powered by analytics, has been a huge source of strength for us. And in fact, we are doubling down.
We have a team that's focused on how do we make the lifecycle of migrations much faster driven by AI, which you can think very differently about how those should operate. But that's our base of strength. And then on top of that, the slew of new products, the expanded lens and opportunity offered by things like Iceberg, which we think can be meaningful contributors at scale to Snowflake, combined with driving broad adoption in areas like AI. That's like the one, two, three punch of why we feel good from being able to deliver on a product perspective. There's a lot of work to be done within the engineering team, within the sales teams in terms of how do we continue to drive excellence, to practice what we preach in terms of what AI can do to how we do our work day to day.
There's a lot more, but that one, two, three is very strong.
Got it. Got it, so maybe shifting gears and start to talk about that sort of broader AI opportunity. From my perspective, the part of the equation that I'm very firm on, if you will, is that I think people have understood that data warehousing is a very effective and efficient way for doing analytics against your core data. And that's not going to go away anytime soon. That's going to be a core foundation. The expansion part is, Snowflake is going to be doing more for us on a go-forward basis when it comes to different data types, sort of where we are in the data lifecycle, how we're utilizing the data on a go-forward basis, so can you talk to us about kind of your vision of how does that role Snowflake is playing for the end customer?
How's that going to evolve over the next three years, the next five years?
That's a great question. I mean, I think the rise of AI today has greatly increased the awareness that you need to get your data story straight. I think everybody understands, for example, that if you don't have data and governance right, there's going to be a big problem when it comes to how you're going to create AI applications. Simultaneously, I think unstructured data has risen in value tremendously in this age of AI because it's now a lot less expensive to extract value from unstructured data. Say you have a bunch of contracts and you want to extract all the revenue-share percentages you want: before, you'd need to run a whole project. Right now, you can do that in five, 10 minutes with SQL or some other tool, and so this is where we decided to lean in.
We said we want to make it easy for people to bring all kinds of data, whether it is structured or unstructured, from Drive, obviously from places like Salesforce, into Snowflake. We acquired a company. It's going to go into GA pretty soon. And then we said we can also use Snowflake as the data engineering, the cleaning pipeline. There's a lot of work that needs to happen before you can run analytics on data. You have to clean it up. But we now have tools that are going to help customers in that area. And this is where product offerings like Dynamic Tables, which is just a fancy way of saying data pipelines written in SQL, are enormously powerful. What used to be painful code writing and debugging, all of a sudden is three lines of SQL.
We're seeing real traction in all of these areas, but they're all in aid of something. Yes, absolutely. People can do analytics and machine learning on top of this data. But what we have done in the past year is create all the right primitives for people to be able to have chat interfaces to unstructured data, to structured data. And then this all comes together in our agentic platform, which we think of as revolutionizing all of the functions that we all take for granted. I don't think that Snowflake is the one that's going to be creating all of these applications, but we will own the data layer and the primitives that are then needed.
Say you want to rethink how somebody underwrites a loan, with all of the material they need for a complex process like that, or decides on an insurance claim: we are very much going to be at the center of how agentic solutions for those things are built. But that is the part of Snowflake which is conception-to-value, driven by applications built on top of Snowflake. Details like whether it is structured or unstructured data are irrelevant because we are able to do it all. I think being able to credibly offer all of this in a truly serverless fashion means Snowflake is not a hodgepodge of on-prem deployed stuff and software sales and something else. We are a managed solution.
When we say Anthropic is in Snowflake, we mean that Anthropic comes as part of the security perimeter of Snowflake. Our customers can use it without needing to sign another contract, without needing to worry about what happens to the data. That's the magic of Snowflake, end to end, but super self-contained, running at global scale, super easy to use. We feel good about how we are positioned.
Got it. Got it. So I want to dig into sort of the product velocity and sort of what's changed over the past year, maybe starting with the foundational level of kind of the data engineering side of the equation. Snowpark was in place when you came on board, but that portfolio had to be expanded out and had to be fleshed out a little bit. Can you talk to us about kind of the missing pieces that you saw when we're thinking about data engineering and what you've filled in in that stack for Snowflake thus far?
I would put that into two buckets. First and foremost is a collective realization, not just mine, that Iceberg represented a massive opportunity. Coming into this conversation last year, one of the biggest questions that I got from all of you was, is Iceberg going to be a major detractor from revenue? And our aha moment in a lot of customer conversations was customers telling us, "Listen, you should support Iceberg because I can do so much more with Snowflake. We have 100, sometimes 1,000 times as much data outside Snowflake as we can afford to bring into it." It is a big unlock. Just that lens, that things we do can have vastly more applicability, was the big unlock moment for us. That's how we leaned into taking Iceberg to market, and poured fuel on things like Dynamic Tables that had already been launched.
And when it comes to Snowpark, we wanted to be even more of a drop-in replacement. The number one problem that companies running a lot of Snowpark pipelines have is lack of debuggability: how painful, how intertwined all of those pipelines get. We had the chance to make those things considerably simpler. To me, that was the big aha moment when it came to data engineering. On the AI side, I think it's a little bit of a different story. There, our realization was that we need to play to our strengths. We are not OpenAI. We don't pretend to be OpenAI. We need to figure out how to work with people that are world-class when it comes to creating foundation models. We need to lean in where we can add value. So we created products for unstructured data, for structured data.
We figured out how to stitch those together. We said we're going to have partnerships with the best model makers in the world. So I think, again, that positions us to play to our strengths: this vast data estate, plus the right capabilities for people to take advantage of AI on top of this data. I'm also very open with people that we will offer APIs wherever they want them. This is why we did the partnership with Microsoft, where we said things like Cortex Agents can be surfaced inside Copilot, inside Teams, inside Power BI. We don't mind, because we want to be that consumption layer for all data and what people are going to get from it.
Got it. Got it. Digging in on the Iceberg Tables debate, if you will, a lot of investors at the beginning of 2024 and most of the way through 2024 were worried about Iceberg Tables. It was the potential for data to come out of the system. I would say even more so is the potential for what we thought of as one of the lock-in aspects of any kind of data management system or data warehouse was a proprietary kind of data format. If that goes away, does that sort of increase competition for any given workload on a go-forward basis? And you guys were a little reticent at the beginning of the year and talked about potentially Iceberg being a little bit of a drag, but then that went away.
If something changed in how you guys were thinking about or sort of communicating to us about Iceberg, you sound a lot more comfortable today that this is truly an expansive opportunity, that those workloads are still going to come to Snowflake on a go-forward basis.
To be honest, I think it came from a lack of product clarity. It came from, not a lack of thinking, but not enough attention paid to what our strengths are, what our reason to win is. Snowflake's biggest strength is that we are the best compute engine at scale, without any question. And as I said, we run a fully managed, world-scale service. There are not many people that can do that. We are also the only player that honestly makes cloud abstractions go away. Many of our customers have their primary deployment running on AWS and a secondary copy running on Azure, and they can do failovers in a matter of minutes. So we have a lot of strengths.
We need to lean on those strengths and not, for example, falsely think of data lock-in as the moat that is going to protect our revenue. At the end of the day, all of this data is sitting on the cloud. If somebody really wants to do a migration, it's not like we can stop them. And so I think it is that leaning in, it is that confidence, which has to be built bit by bit, that the company went through. And I think I've said this to some of you; I say this to the team repeatedly: Iceberg's history is one that Snowflake gets to write. We bet on the format. We know the opportunity. We now know that we can go seize that opportunity, and then worry about what fraction of, take your pick, the roughly 10% of our revenue that is storage is or isn't going to be in Iceberg.
Our objective is to make that part insignificant.
Got it. Got it. On the data engineering side of the equation, something that we hear a lot of positivity around when we talk to customers is Dynamic Tables. Data engineering, I think you talked about it last quarter, or might have been two quarters ago, getting to a $200 million run rate, with Dynamic Tables being a big part of it. That's a pretty quick expansion to a $200 million run rate for any product. What was it that resonated so well with the customer base that enabled that type of penetration?
It's the simplicity. Sadly, I'm probably one of only a few in this room who have written data pipelines. I've used Airflow to try and orchestrate them. It's a nightmare just to get work done. What we do with Dynamic Tables is you write a few lines of SQL, and we figure out all the dependencies. What used to be a web of difficult-to-debug interdependencies now just works out of the box. Again, it plays to the strength of Snowflake, which is that we make the complex easy. Don't get me wrong. We have more work to do in this area in terms of making it even easier to write these transformations and making it easier to visualize them. We have a team that's working on how to make data pipeline creation a whole lot easier than it was before.
But it's the trademark simplicity and the ability to work at massive scale that's at the heart of why products like Dynamic Tables have been a pretty big success for us.
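The "you write the SQL, we figure out all the dependencies" idea can be made concrete with a generic sketch. This is not Snowflake code; the table names and dependency graph below are hypothetical, and it only illustrates the general principle of deriving a safe refresh order from declared inputs rather than hand-wiring an orchestration DAG:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each derived table declares which tables it reads from,
# and the engine computes a refresh order that always runs inputs first.
dependencies = {
    "raw_orders": set(),
    "raw_customers": set(),
    "clean_orders": {"raw_orders"},
    "orders_by_region": {"clean_orders", "raw_customers"},
}

# static_order yields nodes so that every table appears after its inputs.
refresh_order = list(TopologicalSorter(dependencies).static_order())
print(refresh_order)  # sources first, "orders_by_region" last
```

In Airflow-style orchestration, the user maintains that graph by hand; the appeal of the declarative approach is that the graph falls out of the table definitions themselves.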
Got it. Got it. I want to shift gears and talk a little bit about the Cortex portfolio, maybe starting with Cortex Analyst in that this is something that's really started to hit an inflection point in our customer conversations. It sounds like the integration with Claude from Anthropic was something of a catalyst for that in terms of really sparking the imagination of a lot of customers. Can you talk to us about what customers are doing with the product? What are they using Cortex Analyst for, and how does that evolve over time?
Look, for as long as data has been around, and analysts and BI tools and executives have been around, there's always been this unhappy dynamic of, this table is great, but I want it sliced by this other metric because it just popped into my head. One way to visualize this is to say we live in a hugely multidimensional world. At the end of the day, a dashboard is a two-dimensional thing. It can only show this axis and this axis, and for everything else, you have to figure out how to ask follow-up questions. And what Cortex Analyst does is make business data directly queryable in natural language by business users. People that have written BI tools have been wanting to do this for 50 years. In somewhat contrarian fashion, as a product team, we decided to focus on reliability.
Instead, we could have said, ha, here is something that can change BI. Here is a cool demo that I can build for you. The problem with all of the cool demos for this kind of product is that they don't handle the edge cases. They don't handle the corner cases, stuff that you and I take for granted whenever we use an application. That the thing does what we ask it to do could not be relied upon in the first generation of applications built for querying structured data. In fact, I would talk to customers that would say, oh, we've built this. And I would ask them, so what's your reliability? Do you measure it? That's when they'd say, yeah, it doesn't work that well, even with a simple schema. So that's what Cortex Analyst does.
But more importantly, it lays the foundation for things like an agentic platform because all of a sudden, underneath it, this model has access to reliable tools. If you're thinking about using eight different tools in order to answer some question, and you have high error rates on each of these steps, they just compound, and the tool doesn't end up very useful. And we built the first version of Analyst using the best open-source models that we could find. The gap in quality between a world-class model like Claude 3.7 and what was possible, say, with the best ones from others like Mistral is night and day. And it's that combination of playing to our strengths: we know structured data incredibly well, and we created a product that could truly interrogate structured data reliably.
Then come the partnerships with OpenAI and Anthropic that amplify these core strengths. That's why having these out of the box, without our customers having to think about it, is, again, a big deal for us.
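The compounding-error point about eight-tool chains is simple arithmetic worth making concrete. The per-step reliability figures below are assumed for illustration, not Snowflake numbers:

```python
# If an agent chains 8 tool calls and each step succeeds 90% of the time,
# the chance the whole chain succeeds collapses quickly.
per_step = 0.90
steps = 8
chain_reliability = per_step ** steps
print(f"{chain_reliability:.2%}")  # roughly 43%

# At 99% per step, the same 8-step chain stays above 92%.
print(f"{0.99 ** steps:.2%}")
```

This is why per-tool reliability, rather than demo quality, is the binding constraint on multi-step agents.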
Got it. So when I think about Cortex Analyst and kind of abstract back to the sort of broader vision of Snowflake, from day one, one of the big opportunities of Snowflake was to democratize the access to data. And I'll be honest, as not a data analyst, I've never developed data pipelines. I've never been able to use Snowflake.
You didn't even know Snowflake existed. You had some other tool.
I did the IPO, so I knew it existed. Yeah. Does this now unlock that promise? Can I start using Cortex Analyst? I don't know if they're going to give me the budget for it, but can I start using Cortex Analyst and start interrogating data in natural language? And does that create the big unlock where the data is now in the hands of all the business users, not just the business analysts?
It's a great question. I've worked in consumer products most of my life, and the first rule for a consumer product, at least the way I learned it at Google, was thou shalt not be disintermediated. It was like we would fight tooth and nail to be the interface that any of our users touched. God help the poor startup that said, I can put a better layer on top of you, and so it's super funny for me to come to Snowflake and go like, so people use your data, how? Oh, with a tool written by a third party. It was just wild, but as you know, you can't aspire to become someone else just because you want to. There has to be a reason. I think AI is a massive disruptor of things like BI, of how data is going to be consumed, honestly, of even applications.
At the end of the day, applications are like fancy data pumping machines. They take your data. They take some other data they have. They produce some output state. So that's the opportunity that I see when it comes to what can AI do. And that's the opportunity for us with Snowflake Intelligence. It is early, but we showed an early version of the product to our sales team. It had data from Salesforce, our consumption, our own consumption data, all of the enablement material that they need day to day to go pitch new use cases and learn, all of it from one interface. They were like, oh my god, I need this for my day-to-day use every single day. That's the promise. We just need to execute on it really, really quickly.
Got it. Got it. Two other products I want to dig into. One, Cortex Search. Another product sort of opening up the unstructured data side of the equation for you guys. Can you talk about sort of the evolution of how you approach customers with a product that's about the unstructured side of the equation when historically, Snowflake has always been about the structured side of the equation? How do you get Cortex Search in there and get seen as a viable sort of vendor for that part of the equation?
Great question. We start out by telling them that this is technology that we used to build Google Search, built by a core set of engineers that had, in fact, built Google Search. As far as credibility about the strength and scale of the system goes, that goes a long way. It's true, and on top of that, we've made that product, again, incredibly easy to use. Anyone that's an analyst knows some SQL; I'm one of them. You can set up a chatbot in five minutes flat, but it can also scale literally to billions of documents. It is that combination that's serving us well. We are very deliberate. We are focused on a lot of marquee customers that are going to operate at very heavy scale and then can be testimonial providers for us as we go to other companies.
But from a practical perspective for Snowflake, this is a huge unlock because unstructured data is part of our everyday lives. If you want to create an agent, I mean, think about what you do. You read your email. You look at some table. You look at some report, and then you take some action based on it. So we think of this as a core capability. It's also similar to Google Search in that it's an algorithm that learns from the work that people do. So things like using popularity signals in how things are ranked are very much a part of search. Once we go through the heritage, what it is capable of, and the level of sophistication that we provide in terms of combining vector indexing, traditional information-retrieval techniques, and having a feedback loop, it's a pretty easy win for us.
The turbocharge for us is when we also have connectors that can bring in data from all kinds of unstructured data sources, feed it naturally into Cortex Search, but also do things like obey permissions. Anytime you talk to somebody about indexing SharePoint, the first worry that comes to their mind is like, oh my god, I'm pretty sure my permissions are wrong. The wrong person will look at the wrong doc. We can take care of that out of the box. Again, it feeds into our strengths of easy-to-use product, governance taken care of, and completely managed so you don't have to worry about details.
Got it. Got it. Last product that I want to dig into on the Cortex portfolio, Cortex Agents. So you announced that a few weeks ago. That is in preview now, allowing users to orchestrate some of the retrieval tasks that they're doing across the data. Can you talk to us a little bit? What does the customer use case for agents look like, and how does that expand the ecosystem for Snowflake?
I mean, as I said, every function that we as human beings do involves looking at a set of unstructured data sources, looking at structured information, using that to come up with something new. The same archetype can be tailored for how a salesperson approaches their job, where they come in. They ask, what has consumption been like over the past seven days? What's the status of all the open use cases that I have? And if I want to pitch Cortex Search to this customer, who are the other customers that are just like them that we have already sold Cortex Search for? But this flow repeats itself over and over again.
So at a core level, you should think of Cortex Agents as a way to orchestrate different data sources, but it is also now augmented with tool calling, so you can say, hey, I want to update the status of this use case based on something that happened externally. It does that out of the box. Or you can generate a pitch that you want to submit to somebody, because there's a world-class model underneath, super flexible in terms of what it's able to do. But it is this combination of structured and unstructured data in aid of a larger plan that you want to provide to a particular persona. And for what it's worth, we don't expect to be the ones writing all of these applications. We don't know anything about underwriting, for example.
I fully expect, for example, that we will work with Ernst & Young and say, in these kinds of insurance cases for these customers, this is what can be best of breed in terms of what a Cortex Agent is going to look like. That's the next phase of evolution where we are working with partners to bring things like this to our customers.
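The orchestration pattern described here, a plan that routes between structured lookups, unstructured search, and actions, can be sketched generically. Nothing below is the Cortex Agents API; the tool names, the fixed plan, and the stand-in functions are all hypothetical, and in a real agentic platform a model would choose the tools and steps:

```python
# Toy agent loop: route each step of a plan to the right "tool", collect results.
def query_structured(question: str) -> str:
    return f"[SQL result for: {question}]"        # stand-in for a warehouse query

def search_unstructured(question: str) -> str:
    return f"[documents matching: {question}]"    # stand-in for document search

def update_record(note: str) -> str:
    return f"[updated use case with: {note}]"     # stand-in for a tool call / action

TOOLS = {"analyst": query_structured, "search": search_unstructured, "action": update_record}

# A salesperson-style plan: check consumption, find similar customers, record status.
plan = [
    ("analyst", "consumption over the past 7 days"),
    ("search", "customers similar to this one"),
    ("action", "status changed after customer call"),
]

results = [TOOLS[tool](arg) for tool, arg in plan]
print(results)
```

The point of the sketch is the shape: structured and unstructured retrieval plus actions behind one dispatch layer, with reliability of each tool determining whether the chain is usable.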
Got it. Got it. I want to talk a little bit about the monetization side of the equation. Monetization with Snowflake used to be dead simple. It all drove back to driving more usage of the data warehouse. It was all about sort of that credit rate card with the end customer. These products are priced on a per-token basis. So it's an expansion of kind of how you're looking to monetize. Why is that the right approach? And ultimately, does it become something of like a pass-through tax in terms of what you're charging from the underlying model? How's that going to work out over time?
Monetization strategies are something I'm fascinated by; you know this, Keith. At the end of the day, your customers would rather pay you for outcomes. But as you know, outcomes are hard to judge. They're sparse. This is a little bit like pay-for-conversion products, which took a full 10 years to launch at Google in terms of how good they had to become. So Snowflake's model is a little bit of an intermediate one, where we tell our customers, you pay for used capacity. You don't have to sign up for a subscription, which, for many products we use, regardless of whether we use them for two days or 365 days, is the only model that's offered. Snowflake is pretty unique in that it said it is metered consumption. You pay for the metered consumption.
A way to think about things like our AI pricing is that it's just metered consumption done in a way that is familiar to the industry. The positive side of this, again, is that our customers don't have to make any commitments. I tell people, you can build a chatbot on Snowflake for $10 flat. It doesn't cost that much money. If you drive a million QPS through it, then absolutely, you will pay non-trivial amounts of money, but hopefully you're getting utility from being able to drive that much traffic. And the other thing: I get lots of questions about AI margins and inference margins. I'm personally focused a lot more on how we drive broad adoption of our products and begin to create lots and lots of value for our customers, because I'm fairly convinced that there's going to be hardware innovation.
As you folks know, there are tons of companies that are making inference chips. There's going to be model innovation. We have a team that focuses on making inference much, much better. And so there's like a slew of techniques. There's also model size improvements and things like distillation. So I feel very good about margin because I think all of these factors will drive basically inference costs down to the ground over some period of time. And having a pay-for-token sort of model is the simplest thing that we could set up out of the gate.
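As a back-of-the-envelope illustration of metered per-token pricing, the "cheap to start, scales with traffic" property is just linear arithmetic. The rates below are made up for the example, not Snowflake's or any vendor's rate card:

```python
# Hypothetical per-token rates, in dollars per million tokens.
INPUT_RATE_PER_M = 3.00
OUTPUT_RATE_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request under simple metered per-token pricing."""
    return ((input_tokens / 1_000_000) * INPUT_RATE_PER_M
            + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M)

# A small chatbot exchange costs a fraction of a cent...
one_request = request_cost(2_000, 500)
print(f"${one_request:.4f}")

# ...and cost scales linearly with traffic: a million such requests.
print(f"${one_request * 1_000_000:,.2f}")
```

Under these assumed rates, one exchange is about a cent and a half, while a million of them is five figures, which is the trade-off metered pricing makes explicit: no commitment up front, spend proportional to usage.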
Got it. Got it. I want to talk a little bit about partnerships. You guys recently announced the exciting new partnership with Microsoft and OpenAI. You're embedding OpenAI directly in Cortex on the Azure AI Foundry. Two kind of related questions. One, it seems like you've brought in more of an expansive view on these partnerships and more of a willingness to do these first-party partnerships. So question one is, why are these first-party partnerships important and sort of why are you expanding the purview there? And then two, the Microsoft partnership in particular with OpenAI, what's important about that one?
I mean, look, absolutely. I take much more of a partnership lens to how we need to work. I helped launch Google Android Pay, now Google Pay, with the Google Payments team. I tell people that this was the hardest exercise in partnerships and diplomacy that I will ever go through. Convincing Visa and Mastercard to do something is next to impossible. But there was value. At the end of the day, I feel proud every time I tap in the New York subway and actually get in. So I think partnerships are a way to create added value. We've been slowly working with the folks at Microsoft. We know many of them personally. But the simple truth is we are in the midst of this incredible secular migration from on-prem systems to cloud systems.
The massive tsunami that drives all of these partnerships is the realization that us working together creates a huge amount of value. We will still compete on many fronts, but Azure plus Snowflake is way better for a set of customers. What Microsoft has been very good at doing is aligning sales incentives so that that part becomes friction-free. It also helps that they aren't exactly happy with their partnership with the other player in this space, and having more of a sense of balance is beneficial to them as well. And then on the product side, Microsoft, again, believes in product partnerships because they have products that reach the end user. This is why we are happy to have Cortex be represented, whether it's in Copilot, in Teams, or in Power BI. So I think it is absolutely a win-win.
It shows in the numbers. I think their fraction of our revenue went up by like five full percentage points over the past couple of years. There's a lot more to go. Again, we have this wonderful partnership with AWS as well. These are all massively value-creating for all of the companies that are involved. I'm very much leaned into these because I see these as win-win.
Got it. One last question I'm going to try to sneak in here. Coming off of the Q4 conference call, I think one of the things that investors really picked up on was your confidence in acceleration into the back half of the year. Is it the Microsoft partnership? Is it Cortex Analyst? Is it any specific product, or is it just the accumulation of product velocity that gives you the confidence in that back half acceleration?
It's a lot of hard work, Keith, and it's across the board. I'll call out a few things, but at its core, it's about product strategy and velocity. Strategy, I feel very good about, but as I say repeatedly, strategy without execution counts for nothing. That's where product velocity in this rapidly moving world comes into play. I would say we are a lot more confident about our ability to create value with our sales teams, both the regular sales teams and the specialized ones for taking new products to market. And we are doing that for partners also, by the way. There's a huge list of ISV partners, whether it is a Blue Yonder or a Fiserv or RelationalAI; the list goes on and on, where we drive actual top-line revenue for them. So that go-to-market motion gives me a lot of confidence.
And then in terms of big-ticket items, absolutely, it's on the product side. It's data engineering and AI. On the new-market side, we have a dedicated effort in the U.S. federal space, for example. And especially in these days of DOGE, we actually think that we can create a lot of value there. And then there are a lot of meaningful partnerships to be had with the GSIs. I'm a guy that goes to these GSIs and says, I'd like to help you make $1 billion; how can you help me? And again, in the world of DOGE, that's an attractive offer, and I genuinely think that we can do that. It's confidence across all of these that makes us feel good about what's upcoming.
Sridhar, congratulations on all of the success over the past year.
Thank you.
Looking forward to hosting you again next year for year two at the TMT conference.
Thank you.