Snowflake Inc. (SNOW)

Status update

Apr 16, 2026

Nick El-Rayess
Senior Product Marketing Manager, Analytics, Snowflake

AI. How to enable real-time, high-concurrency analytics that accelerate decisions in sub-seconds. How semantic views and AI-powered autopilot are redefining governed data access for business users. And, of course, the power of SAP Snowflake's zero-copy integration for enterprise-wide analytics and AI. Finally, how to bring AI directly to your data across lakehouse and hybrid architectures, and more. We are excited to dive right in. To lead our keynote and show us how Snowflake is making AI-powered analytics a reality for every organization, please welcome Thuy Le and Carl Perry. Thuy, I will hand it over to you to kick us off.

Thuy Le
Senior Regional VP, Applied Field Engineering, Snowflake

Hello, everyone. We are thrilled to walk you through the next chapter of the AI Data Cloud.

Carl Perry
Senior Director, Product Management, Snowflake

That's right. Today, we're going to explore how you can break free from legacy constraints and bring AI directly to your data without any of the complexity you've come to expect from traditional migrations.

Thuy Le
Senior Regional VP, Applied Field Engineering, Snowflake

Imagine if every employee, not just analysts, could ask deeper questions without waiting in a queue. If they could see the full picture instantly, trust the data that they're using, collaborate without copying data across systems, and turn insights directly into action. And increasingly, imagine if AI were simply part of everyday analytics, not something experimental or siloed. This is the promise of a modern data platform, and it's what legacy environments make incredibly difficult to achieve.

Carl Perry
Senior Director, Product Management, Snowflake

Snowflake brings AI directly to analytics, so teams don't have to move data, manage separate environments, or compromise on governance to get value from AI. With Snowflake, customers can analyze all data types, whether structured or unstructured. With unstructured data, you can work with text, images, and audio, all using the familiar Snowflake SQL you know and love, on a single platform. We also make AI accessible to even more people. Natural language interfaces enable business users to ask questions and get trusted insights while automatically respecting the same security and governance policies as the underlying data. This means faster answers with no increase in risk. For data and AI teams, Snowflake accelerates the path from model development to production, including our autonomous data science capabilities that improve productivity when you're building models and then reduce operational overhead once they're in production.
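To make that concrete, here is a minimal sketch of running AI over unstructured text with plain Snowflake SQL. The table, column, and model names are hypothetical, and the exact Cortex function signatures may vary by release:

    -- Hypothetical table of raw support-call transcripts (unstructured text)
    SELECT
        call_id,
        SNOWFLAKE.CORTEX.SENTIMENT(transcript_text) AS sentiment_score,
        SNOWFLAKE.CORTEX.COMPLETE(
            'llama3.1-8b',
            'Summarize this call in one sentence: ' || transcript_text
        ) AS call_summary
    FROM support_call_transcripts
    WHERE call_date >= DATEADD(day, -7, CURRENT_DATE());

Because the query never leaves the platform, the same access policies that protect the underlying table apply to these results.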

Customers choose Snowflake to build an interconnected enterprise because we're easy, connected, and trusted. A fully managed platform that adapts to your teams, enables frictionless data sharing across the organization, and delivers interoperable, governed AI at scale across clouds.

Thuy Le
Senior Regional VP, Applied Field Engineering, Snowflake

Enterprises often think about modernization as a single migration, or even a single use case. In reality, modernization is a journey, one that spans every analytics workload across the business. Snowflake provides a single platform to support the entire journey across all your data, wherever it lives, across clouds, regions, and formats. Most organizations begin with data warehouse migration, moving off legacy systems to improve performance and reduce operational overhead, often with minimal refactoring. From there, they expand into AI-powered BI, using built-in AI to move faster from questions to insights. As data grows in volume and complexity, they evolve into lakehouse analytics, bringing together structured, semi-structured, and open data without copying or moving it. Ultimately, they enable applied and interactive analytics, embedding insights into applications and enabling thousands of users to query data with consistent performance.

All of this runs on a fully managed, AI-ready platform, so teams don't have to think about infrastructure, scaling, or tuning. They stay focused on delivering data products that move the business forward.

Carl Perry
Senior Director, Product Management, Snowflake

Here's the shift. Migration is no longer just about modernization. It's how organizations unlock AI and real transformation. Every business today is trying to bring AI, machine learning, and real-time analytics into their applications and workflows. None of that works without the right foundation, and that foundation is the Snowflake AI Data Cloud, a unified platform that brings your data, applications, and AI all together. For most organizations, getting there means breaking free from legacy environments, things like on-prem systems, or even cloud environments that require you to manage and operate them 24/7. This ends up increasing your costs and slowing and stifling your own innovation. This is where migration becomes a catalyst. It frees you from legacy constraints. It accelerates time to value through automation and makes you AI-ready from day one. Because let's be honest, migrations have never been the end goal.

It's just the starting point for everything that comes next. How do you actually make that transition? At the core of Snowflake's approach is SnowConvert AI, our AI-powered engine that automates the most complex parts of a migration. SnowConvert AI delivers an end-to-end experience, from assessment through full conversion of your data ecosystem, including schemas, ETL pipelines, and BI reports, across platforms like Teradata, Oracle, SQL Server, and many more. What's fundamentally changed is actually how this works. SnowConvert AI is no longer just a tool. It's an intelligent, AI-driven system, deeply integrated with Snowflake Cortex. That means migrations are no longer manual step-by-step efforts. They're orchestrated, automated, and increasingly, they're autonomous. With AI-powered code conversion and testing already in production, and agentic workflows emerging, you can move entire workloads faster with greater consistency and far less risk.

The outcome is simple: faster migrations, lower risk, and a direct path to becoming AI-ready on Snowflake. Finally, we're thrilled to announce Datometry's migration solution, which makes moving from legacy data warehouses like Teradata to Snowflake easy. This will be in public preview soon.

Thuy Le
Senior Regional VP, Applied Field Engineering, Snowflake

That brings us to the biggest challenge customers face, getting started. Migration to the cloud isn't just a technical problem, it's a monumental problem. It's untangling years of legacy systems, managing risk, and getting that first workload into production. That's why we built Snowflake LiftOff. LiftOff is a migration program designed to help you break through the gravity of legacy technical debt and to start your journey to the Snowflake AI Data Cloud. It's built for organizations at the starting line. Whether you want hands-on support from Snowflake experts and partners, or you want a flexible digital-first experience, you can begin where you are to build momentum quickly. This isn't just about planning, it's about making real progress immediately. With LiftOff, you can stand up a secure, scalable foundation. You can define a clear migration path.

You move forward with built-in governance and risk controls, all powered by Snowflake expertise and AI-driven tools like SnowConvert AI. Snowflake LiftOff is your launchpad for full migration with no upfront costs. You can stop planning and start launching.

Carl Perry
Senior Director, Product Management, Snowflake

To truly unlock AI-powered BI, we have to solve a fundamental problem: fragmentation. Right now, your AI teams, your application developers, and your BI analysts are often all working in silos. They all have the same goal with data, but they aren't speaking the same shared language. For example, when your AI team deploys an LLM into production, they quickly realize that general human knowledge just frankly isn't enough. An LLM cannot infer your specific enterprise schema, or frankly, the complexities of the business logic that make your company unique. Without a semantic layer, an AI is just guessing at your data and what you want to ask. At the same time, your BI teams are struggling with the self-service gap. You cannot build reliable dashboards or enable true self-service for business users without a layer that defines what your metrics actually are and what they mean.

Today, every single BI tool tries to solve this independently, creating this split-brain architecture that leads to conflicting numbers and frankly erodes trust. This is where Snowflake semantic views change the game. We're moving the semantic layer directly to the data. By building the business logic where the data actually lives, we ensure that security and semantics are tied together. When your logic lives with your data, your governance travels with it, whether it's being accessed by a dashboard, a custom app, or an AI agent. By unifying these layers, we ensure that your AI, BI, and custom applications are all reading from the same playbook. We aren't just organizing data, we're creating a unified front. When every tool speaks the same language, your company moves faster, your AI becomes more accurate, and your data remains governed from the data layer all the way up to the end user.
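For a sense of what this looks like in practice, here is a rough sketch of defining business logic as a semantic view directly in Snowflake. The object names, relationships, and metrics are hypothetical, and the exact DDL clauses may differ from the shipping syntax:

    CREATE SEMANTIC VIEW sales_semantic_view
      TABLES (
        orders    AS analytics.sales.orders    PRIMARY KEY (order_id),
        customers AS analytics.sales.customers PRIMARY KEY (customer_id)
      )
      RELATIONSHIPS (
        orders_to_customers AS orders (customer_id) REFERENCES customers
      )
      DIMENSIONS (
        customers.customer_region AS region,
        orders.order_month        AS DATE_TRUNC('month', order_date)
      )
      METRICS (
        orders.total_revenue AS SUM(order_amount)
      )
      COMMENT = 'Governed revenue definitions shared by BI tools and AI agents';

Because the definition lives next to the data, the row access and masking policies that govern the base tables travel with every consumer of the view, whether that consumer is a dashboard, a custom app, or an AI agent.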

Thuy Le
Senior Regional VP, Applied Field Engineering, Snowflake

Why does this shift matter now? It comes down to three things: speed, trust, and control. First, speed. Instead of rebuilding pipelines for every dashboard or AI agent, you define metrics once in Snowflake and reuse them. That frees up analysts to focus on outcomes, not on fixes. Second is trust. Executives expect consistent answers across all dashboards and AI. By grounding LLMs in validated logic, we ensure accuracy and reduce hallucinations. Third is control. Centralizing logic in Snowflake eliminates fragmented definitions, simplifies auditing, and reduces maintenance. The bottom line is very simple. You cannot have an AI strategy without a semantic strategy, and today, we're going to build it.

Carl Perry
Senior Director, Product Management, Snowflake

In the age of AI, applications and agents generate massive amounts of concurrent queries, making sub-second performance at scale essential. Snowflake Interactive Analytics, powered by interactive tables and warehouses, delivers that with a unified serving layer that enables real-time insights on fresh data at scale, without adding new systems, at great price-performance.

Thuy Le
Senior Regional VP, Applied Field Engineering, Snowflake

Snowflake has continuously expanded geospatial capabilities over the years. Now, with Cortex Code and Snowflake Intelligence, we're using AI to help teams turn location data into real-time insights and decisions, making it faster and a whole lot less complex. Later today, you'll hear directly from our customer, San Diego Airport, on how they're using geospatial data in Snowflake today to help solve key business challenges.

Carl Perry
Senior Director, Product Management, Snowflake

One important message I want you to take away from today is that Snowflake is interoperable. You have a ton of choices. You can bring your data directly into Snowflake, or if you already have a lakehouse where Iceberg data sits, you can connect Snowflake directly to that data. Here's what this means. It's centered on three core pillars. First, connect to the data in place with enterprise-grade security. There's no need to duplicate data, no ETL pipelines. Plus, you get built-in agentic intelligence, which makes it easy to query and explore all your lakehouse data. And it's all governed through Snowflake Horizon, delivering enterprise-grade security across all of your data assets. Second, it lets you focus on innovation. Snowflake's elastic compute scales on demand and suspends when idle. No provisioning, no manual management, no tuning.

Snowflake's multi-cluster architecture eliminates contention so that your BI, AI, and other engineering workloads can all run concurrently, and it just works, so your team can focus on delivering insights, not on firefighting. Third, it helps you supercharge decision-making. You can securely share governed data with your partners and customers. No need for ETL or copies. You have full Iceberg support for all of these scenarios. Your security policies stay intact across clouds and regions, turning your data into collaborative data products that connect your business ecosystem.
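As an illustration of the connect-in-place pattern, here is a hedged sketch of pointing Snowflake at Iceberg tables that already live in an external lakehouse catalog. The integration, volume, and table names are hypothetical, and the exact parameters depend on your catalog and cloud setup:

    -- Register the external catalog that tracks the Iceberg tables
    CREATE CATALOG INTEGRATION lakehouse_catalog
      CATALOG_SOURCE = GLUE
      CATALOG_NAMESPACE = 'lakehouse_db'
      TABLE_FORMAT = ICEBERG
      GLUE_AWS_ROLE_ARN = 'arn:aws:iam::111111111111:role/snowflake_lakehouse_access'
      GLUE_CATALOG_ID = '111111111111'
      ENABLED = TRUE;

    -- Expose an existing Iceberg table without copying or moving the data
    CREATE ICEBERG TABLE sales_events
      EXTERNAL_VOLUME = 'lakehouse_volume'
      CATALOG = 'lakehouse_catalog'
      CATALOG_TABLE_NAME = 'sales_events';

    -- Query it in place, with Snowflake governance applied at query time
    SELECT region, COUNT(*) AS events
    FROM sales_events
    GROUP BY region;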

Thuy Le
Senior Regional VP, Applied Field Engineering, Snowflake

Today, you can unify all your critical enterprise data sources, including transactional databases, ERPs, CRMs, and SaaS apps, into a single Snowflake AI Data Cloud platform. SAP Snowflake is an SAP Business Data Cloud extension that provides fully managed, comprehensive data and AI capabilities, giving customers greater choice and flexibility while simplifying their data landscape.

Carl Perry
Senior Director, Product Management, Snowflake

Customers can now harmonize SAP and all their other enterprise data at scale, while optimizing for total cost of ownership across all of their workloads. Leveraging zero-copy, bi-directional data access, data and AI teams can work with the semantically rich SAP data in real-time, without the added cost and complexity of ETL pipelines. This unified data foundation enables them to build AI and machine learning applications fueled by the trusted SAP data products and grounded in the context of all of their mission-critical data, ensuring accurate, reliable, and, most importantly, trustworthy AI outcomes. The companies are partnering to deliver two offerings to market. First, for new Snowflake customers, SAP Snowflake brings our full AI Data Cloud to life as an SAP extension solution. For existing Snowflake customers, you can take advantage of SAP BDC Connect for Snowflake.

This is a bidirectional integration, which brings all of your mission-critical SAP data into Snowflake with zero-copy.

Thuy Le
Senior Regional VP, Applied Field Engineering, Snowflake

We are excited to kick off this event with this keynote session, but your journey is just starting. We will dive deep into all the topics we covered in this keynote. Stay tuned, and modernize your data estate with AI.

Nick El-Rayess
Senior Product Marketing Manager, Analytics, Snowflake

That was amazing. Thank you, Thuy. Thank you, Carl. Now, it's time to go deeper into each topic Thuy and Carl shared. Let's start with migration with AI. Liam, over to you.

Liam Sosinsky
Senior Product Marketing Manager, Migrations, Snowflake

Welcome to the session. Today, we're going to talk about how organizations can accelerate data platform modernization by automating migrations with SnowConvert AI. Migration has traditionally been one of the biggest barriers to innovation, but with the right approach, it can actually become a catalyst for moving faster with data and AI. My name is Liam Sosinsky, and I lead product marketing for migrations and modernization here at Snowflake. I'm joined by my colleague, Federico Zoufaly, Director of Product Management for Migrations and Modernization, and together, we'll walk you through the challenges organizations face with legacy data platforms, and how Snowflake is helping automate and accelerate migrations to unlock faster innovation. Let's start by looking at the reality many organizations are dealing with today. For most organizations, the biggest barrier to innovation isn't lack of ideas, it's the complexity of their existing data environment.

Most companies today don't have a single unified data environment. Instead, they have many different operational systems supporting different parts of the business. Across multiple lines of business, departments, and teams, you'll find systems like CRMs, order and execution management platforms, ERPs, general ledgers, client portals, and market data applications, each generating its own data. Underneath all of that, the data typically lands in multiple warehouse environments. Some are still on premises, others are earlier-generation cloud data warehouses, and many organizations are managing several of these platforms at the same time. The result is a highly complex ecosystem. Traditional data warehouses rely heavily on manual and fragmented processes that integrate, transform, and manage data across these systems. That complexity creates significant operational overhead. It increases risk because manual processes introduce errors. It drives higher costs because teams spend an enormous amount of time maintaining pipelines and legacy code.

Most importantly, it becomes an innovation barrier, slowing down the ability to deliver insights, scale data initiatives, and support modern AI workloads. Instead of focusing on innovation, teams end up spending most of their time simply maintaining legacy environments and keeping the lights on. That operational complexity doesn't just make systems harder to manage, it comes at a very real cost. The reality is that many organizations are paying a hidden tax for legacy data environments. According to Gartner research, poor data quality alone costs organizations an average of nearly $13 million every year. As AI adoption accelerates, the risks are growing. Organizations are introducing synthetic and AI-generated data faster than governance frameworks can keep up, creating new challenges around data quality, trust, and compliance. This isn't just an inefficiency.

It shows up as slower decisions, higher operational costs, and increasing risk for the business year after year. If legacy environments are creating complexity, cost, and risk, the next question organizations start asking is: How do we know when it's time to modernize? One helpful way to think about it is through a few key questions that tend to reveal the clearest signals. First, can your data platform deliver consistent, high-quality analytics at scale as more teams rely on data? Second, how much time is your team spending managing workloads, tuning queries, handling queues, or firefighting bottlenecks instead of focusing on innovation? Can your platform scale seamlessly as data volumes and new use cases grow? Can teams securely share and collaborate on data without copying or moving it, which often adds more complexity and risk?

Increasingly, the most important question is: Is my environment ready for modern AI and governed data access without overpaying for unused capacity? Because AI requires flexible compute, strong governance, and the ability to experiment and scale efficiently, something many legacy platforms simply weren't built to support. Organizations need a modern foundation built for the way data and AI work today, and that's exactly what Snowflake's AI Data Cloud provides. Snowflake brings together all your team's tools and data onto a single platform so you can unify structured, semi-structured, and unstructured data in one connected environment. This unlocks three critical outcomes. First, you can scale growth. Snowflake helps move AI beyond isolated experiments and into everyday business workflows, delivering insights directly to the teams that need them. Second, it increases accuracy. Your business logic and governance are applied consistently across your data and tools.

AI and analytics reflect the real operational context of your organization. Third, it establishes trust. By bringing data together into a single governed foundation, every insight is built on trusted, secured data. The result is a platform that eliminates tool sprawl and enables organizations to move faster from experimentation to real business outcomes with AI. While the destination is clear, the biggest challenge for many organizations is getting there, because migrating legacy data platforms has traditionally been slow, manual, and risky. Despite the promise of the cloud, migrations often involve thousands of lines of legacy code, complex dependencies, and unpredictable timelines. Without the right approach, these projects can take far longer than expected, delaying the innovation organizations are trying to achieve.

With that, I'll hand it over to my colleague, Federico, who will walk you through Snowflake's approach to migrations and how we're leading the way in speed, accuracy, and automation.

Federico Zoufaly
Director of Product Management, Migrations and Modernization, Snowflake

Thank you, Liam. Before we dive into our tooling approach, it is important to review the taxonomy of a migration project. Project is a key word here, because every migration should be treated as one, starting with planning and design and making sure that all of the stakeholders are involved from the beginning. A lot of times, we see customers planning just with the IT department and forgetting the business users, or involving them only later in the migration. This is a mistake. We need a united front between the technical folks and the business users to make sure that the migration is actually complete. When we do a data warehouse migration, we have to migrate the database itself: tables, views, stored procedures, functions, et cetera.

We also need to look at the ingestion processes, how the data is getting into the data warehouse, and then how the data is being consumed by our business users. Remember that some consumption is standardized through reporting tools, for example. A lot of times, business users have their own ad hoc queries that also need to be migrated and made available so that they can run against Snowflake, in our case. Whenever there is a migration project, don't forget about data validation and testing in general, because that is typically the longest part of any migration project, and it should be planned for from the beginning.

As I always say, the migration is not the end of the journey, it's just the beginning of the journey, and it's going to allow you to really realize the value of Snowflake moving forward. Snowflake has a track record, and we've migrated over 1,400 distinct accounts over the years. For example, Globe migrated their complex on-prem data warehouse to Snowflake in just 53 days, achieving an 84% reduction in estimated annual cost. AT&T reduced their operational cost by 84% by adopting Snowflake. This success is proven by thousands of customers, including Comcast, Citi, and PepsiCo. Now let's talk about the actual tools. At Snowflake, we have SnowConvert AI as our umbrella branding for all of the functionality that we deliver to help customers migrate from their legacy platforms into Snowflake.

At its core, SnowConvert AI is a transpiler, and the initial focus was on migrating the data warehouse schema: migrating the table definitions, view definitions, and stored procedures so that their syntax and semantics match Snowflake's syntax and semantics. We've been expanding SnowConvert AI so that, for some source platforms, we now also include an end-to-end data migration experience: we can connect to the source databases, extract the data, migrate it directly to Snowflake, and then provide a data validation framework to verify that the migrated data actually matches. We've been adding artificial intelligence features, starting with assessments, but also verification and repair and some testing that I'm going to discuss in just a minute.
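To make the transpiler idea concrete, here is a simplified, hypothetical example of the kind of dialect translation such a tool performs; it is illustrative only, not SnowConvert AI's actual output:

    -- Source (SQL Server / T-SQL)
    SELECT TOP 10
        CustomerID,
        GETDATE()              AS loaded_at,
        ISNULL(Region, 'N/A')  AS region
    FROM dbo.Customers;

    -- Converted (Snowflake SQL)
    SELECT
        CustomerID,
        CURRENT_TIMESTAMP()    AS loaded_at,
        IFNULL(Region, 'N/A')  AS region
    FROM dbo.Customers
    LIMIT 10;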

The results of our continual investment in this tooling are dramatic. SnowConvert AI delivers an average conversion rate of over 95%, resulting in an average acceleration of 88% in code migration timelines. We've already converted over 2 billion lines of code and 46 million database migration objects, and more every day. Now let me talk to you about our vision, which is what I'm going to show you today in our demo. Our vision is to expose all of the functionality that SnowConvert AI has as an agentic process that can run within your preferred agentic coding tool, like Cortex Code in the case of Snowflake. The idea is to have a process, orchestrated by agents, that starts with the planning of the migration, then performs the code conversion, the unit testing, eventually the data migration, and then the actual data validation at the end.

All of this is tracked so that you always know the status of the migration project and of each object being migrated. You can also do what we call incremental or partial migrations, so that you can first migrate one use case, then add the next use case, and so on. Our assessment tool will help you plan for that by dividing the migration into waves. Let's also see how the AI can really help with the accuracy of the migration.

One of the very interesting features we've added, in addition to the overall orchestration of the migration by agents, is that after we convert the source code, we are able to automatically generate unit test cases that can run both on the source platform and on Snowflake, and then compare the results between the original source code and the converted source code to make sure they match. As an example, I can have a stored procedure that runs in SQL Server, and to run that stored procedure, I potentially need some views, some tables, maybe some other stored procedures deployed. All of that is automatically generated into a test case by the artificial intelligence, and then the same test case is generated for Snowflake.
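As a rough sketch of the validation idea, not the generated tests themselves, comparing the output of the original and converted procedures can be as simple as a symmetric-difference check once both result sets are staged as tables; the table names here are hypothetical:

    -- Rows produced by the original SQL Server procedure (staged for comparison)
    -- versus rows produced by the converted Snowflake procedure
    (SELECT * FROM test_results_source
     MINUS
     SELECT * FROM test_results_snowflake)
    UNION ALL
    (SELECT * FROM test_results_snowflake
     MINUS
     SELECT * FROM test_results_source);
    -- An empty result means the two procedures agree on this test case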

Some test data is also automatically generated, and then I can execute the two stored procedures and verify that the data matches. This has proven to significantly accelerate the overall migration process, since, as I said before, testing and validation are among the longest parts of any migration project. Okay, now let's go to the demo. What I have here is Cortex Code running in a terminal on my machine. I ran this session before this talk because it takes 20 minutes to run completely. I just want to show you how the agent works to perform a migration of a SQL Server database that I have in the cloud, available to Snowflake. This is what I told my coding agent.

"Using the migration skill, generate a comprehensive assessment for the SQL Server database with connection AdventureWorks SQL Server." This is just a pointer to my SQL Server database in the cloud. "Also include the SSIS files, so the ETL, that are actually present in my local directory." The database is in the cloud. SSIS files, or ETLs, are typically not database objects; they are typically stored in some directory, and the directory also needs to be provided. I'm asking it to generate an assessment, and the AI eagerly started to run our SnowConvert Assessment skill and tried to generate the reports. This is not what I asked.

I said, "Please use the migration skill, and so that you can connect to the SQL database, extract the objects, and then run the overall assessment skill." It actually tells me that it apologizes, which is always find it funny how artificial intelligence apologizes to you. Actually now it picked the right track. Then it started to connect to SQL Server, found a database called AdventureWorks in the SQL Server database, and then it's telling me, "Okay, I've tested the SQL Server connection. I'm going to extract all of the objects from the database. I will add the SSIS files that were in the local directory.

I will convert everything to Snowflake SQL, and then I will run the assessment skill." The assessment skill actually needs SnowConvert to have looked at all of the code and actually convert all of the code before it can provide an assessment. Since all of this is an automatic process, this is really not a lot of effort. Here it's showing me how it's testing the connection, it's extracting the objects, it's adding the SSIS, it's converting the source code, and now it's running the comprehensive assessment. In our case, our comprehensive assessment has a number of reports that are being generated. We have a waves generation, some object exclusion, just in case there are some duplicated objects or something like that in the SSIS files. It's classifying my SSIS project.

It's identifying any dynamic SQL patterns that may occur either in the SSIS packages or in the stored procedures, because those have to be treated differently. The waves generation is about how to sequence the migration of my objects to make sure that I can deploy and test them independently. It's running all of the reports. It's generated a comprehensive report for me, and the report is this one that we can see here, where it actually migrated 42 objects: 31 tables, six views, three functions, and two ETLs. It's giving me information about all of these reports. It's telling me if there are any missing objects, which we can see here and export. In this case, everything is there. For the SSIS, it tells me that there are two packages.

In this case, it's telling me that both packages are data transformations and that they are simple. Again, this is just to give you an idea of the level of reporting that is generated by our artificial intelligence when running against the database. Now let me go back here. I can now go to the next step, where I am telling the artificial intelligence to go in and continue. Here is the report that we just saw. Now I'm telling it, okay, continue the migration and finish it, because I liked the results of the report. It went, and now it's going to actually deploy the objects in Snowflake.

Then it's going to create a target schema, review the objects, and then do the table deployments, the view deployments, the data migration, and then orchestrate all of the handoffs and run the verification. This is what the rest of the skill does. We can see the progress. It says the database already existed and the schema already existed, but then it started to generate all of the deployments and all of the database migration. These are the tables that are being migrated. Then ultimately, it gives me a summary of everything that happened. Here I have that we deployed 31 tables, we migrated data for 31 tables, and there are five views that have been deployed. One of them is a SQL Server system view, so it doesn't make sense to deploy it. The AI is intelligent enough to understand that.

It deployed the functions and tested the functions. This is what I have as a demo. Again, SnowConvert AI is available with a user interface, so you can actually decide which phases to run, controlled by the user. We also have a command-line interface that can be invoked and integrated into other migration tools; for example, some of our partners are already integrating our CLI within their migration process. It is also exposed now as a skill that can be run from within Cortex Code, and it helps you drive the complete migration process in just one step. Okay, let's go back to the presentation.

I also wanted to tell you a little bit about a program sponsored by the Snowflake product team, the group that actually builds SnowConvert AI and, in general, all of our migration tools. We have the opportunity to have product engineers engage directly with some customers to help them execute the migration project. If you're interested in having some of our product engineers working side by side with you to help accelerate a migration, just contact me, here is my email, and we can discuss the details of how to make that engagement happen. Okay, Liam, back to you.

Liam Sosinsky
Senior Product Marketing Manager, Migrations, Snowflake

Thanks, Federico. When customers think about migrating to Snowflake, some of the biggest questions are usually, how long is this going to take? How much risk is involved, and when do we actually see value? That's why we created Snowflake LiftOff, which is another approach. LiftOff is a funded acceleration program designed to help you move to Snowflake faster, but importantly, to start recognizing business impact sooner. We combine proven migration methodology, AI-powered tools like SnowConvert AI, and hands-on guidance from Snowflake experts and trusted partners to simplify what's typically a complex process. By the end of a LiftOff engagement, you'll walk away with a clear, prescriptive modernization roadmap, a Snowflake landing zone built on best practices, and a migration prototype that validates the approach using a representative slice of your workloads. It's not just planning, it's proof.

One of our key partners, Spaulding Ridge, is delivering LiftOff workshops. Recently, they worked with a global manufacturing customer to ingest eight data sources into a new Snowflake landing zone as part of a lighthouse migration initiative. Let's bring in Mick Ramczyk to share more about the engagement and how LiftOff set this customer up for long-term success.

Mick Ramczyk
Snowflake Practice Leader, Spaulding Ridge

Thanks, Liam. As a Snowflake Elite Services partner, this is one of the few new offerings from Snowflake that Spaulding Ridge has taken advantage of, using it as a pilot with some of our customers. We're very excited to partner with Snowflake on it. LiftOff brings value to organizations faster, focusing on the foundation as well as the data migration. Being able to speed that piece up brings value to our customers sooner. We're really excited to do that, and it's just the tip of the iceberg as an example of where we've done this. For today's conversation, let's talk about that a little bit more. We want to highlight an engagement where we leveraged the LiftOff framework with a global manufacturing organization. That organization has roughly 4,000 employees and about $10 billion in revenue.

They build systems used by other manufacturers to build all different kinds of products, things from food and beverage to pharmaceuticals, and even electronics. A little bit of the history there. Spaulding Ridge had been working with the CTO of that organization already on a broader data and analytics roadmap with a key outcome that they're looking to achieve around pricing optimization. That would help them better understand and improve the margins across their various product lines. To be able to move forward with that roadmap, this organization wanted to be able to show a very quick win of real benefits on top of the platform that they could expand further in the future. We leveraged the LiftOff framework alongside some of our accelerators to move very quickly through the core phases of this project. LiftOff supported the various provisioning of the environment.

We leveraged SnowConvert to actually transition from a legacy Oracle environment into Snowflake, and then standardized that process. This approach allowed us to modernize the data transformation process efficiently, while reducing a lot of the effort that is typically part of these types of programs, and the risk associated with it. Ultimately, with the help of LiftOff and some of our own capabilities, we delivered a working pricing application built on top of Snowflake in less than two weeks. We're now partnering with Snowflake on the next phases of this program, with the aim of expanding their pricing capabilities even further.

Liam Sosinsky
Senior Product Marketing Manager, Migrations, Snowflake

That's great context, Mick. How did the LiftOff framework specifically help structure the engagement and guide the customer towards a full migration strategy? Now that the foundation is in place, what new use cases has this unlocked for them?

Mick Ramczyk
Snowflake Practice Leader, Spaulding Ridge

I think there are three things I want to talk about when it comes to what we took away from leveraging the LiftOff framework within this organization. First, the vision. Using the framework to get immediate alignment between the executives and team at this customer and the Spaulding Ridge team gave us clear marching orders, the activities we were going to deliver and the sequence of those, and let us be prescriptive about the methodology to be used. This made it very seamless, very natural, and eliminated the ambiguity that we sometimes see when walking through this with customers. With that elimination of ambiguity, we were able to have the landing zone framework in place. The customer was able to get a production-ready foundation aligned to best practices in Snowflake for security, governance, scalability, et cetera.

The third thing I want to hit on is just the confidence that's built between the customer and the Spaulding Ridge and Snowflake team. Right? We're able to have a very specific approach to how we're going to get to a successful outcome. In this specific use case, we were able to show them improved pricing analytics to drive improvements in the margins of their products. The really exciting part now is that the core architecture is in place. We can use it to deliver all sorts of benefits for this organization as they move forward. We're looking at native applications that we're using Cortex Code to build out. We are developing and implementing some of your traditional business intelligence, leveraging Power BI. Those solutions are aimed at being deployed across the whole organization.

Going forward, not only do they have that roadmap, that vision, but there are also additional use cases that they have been thinking about for a long time that they're going to be able to start producing results for, things like predictive supply chain analytics. There will be other things that we just can't predict today that they're going to want to do. The key is that they now have the infrastructure in place so that those future AI initiatives can leverage the platform that's already been established with LiftOff. To summarize: in two weeks, we completed a full migration from Oracle onto Snowflake, and we built out a scalable modernization blueprint that the organization can use moving forward on their initiatives.

Liam Sosinsky
Senior Product Marketing Manager, Migrations, Snowflake

Thanks for sharing that, Mick. Through SnowConvert AI, our partners, the LiftOff program, and our forward-deployed engineers, we're continuing to invest in making migrations faster, more predictable, and less expensive. Because modernization shouldn't slow innovation, it should accelerate it. What should you do next? First, you can download SnowConvert AI to assess your environment and understand your migration readiness. Second, the quick start guide will walk you through how to get up and running quickly with step-by-step guidance from our team. Finally, if you'd like to accelerate your migration journey, you can request a LiftOff workshop delivered by one of our expert partners, like Spaulding Ridge, to help you plan and kickstart your migration to Snowflake. Thanks for your time.

Jennifer Wu
Product Marketing Manager, Snowflake

Hi, everyone, and thanks for joining today's session on the next evolution of analytics. My name is Jennifer, and I'm joined today by Sunil Mathew, VP of Data Engineering and Business Intelligence at Franklin Templeton, and Will Xu, Product Manager here at Snowflake. In the age of AI and AI agents, a growing set of use cases is becoming increasingly important: real-time dashboards, data-backed APIs, and high-concurrency workloads. Across all of them, the expectations are the same, sub-second responsiveness at massive scale on near real-time data. Today, Snowflake is great for BI reporting and complex data engineering. To power those low-latency, high-concurrency use cases, Snowflake users have relied on exporting data to alternative systems for a faster caching layer. This introduced real challenges. You get data inconsistency between the systems, you see unpredictable query latencies, and you also increase points of failure with more moving parts.

Lastly, you end up with unpredictable costs at scale. We saw this as a problem, we set out to solve it, and we did. Introducing Snowflake Interactive Analytics, powered by interactive tables and warehouses. With this, you can now serve real-time analytics in sub-seconds at high concurrency and at a great price for performance, all within a single platform with built-in enterprise-grade governance and security. When you look at what this enables, it maps directly to the kinds of use cases we're seeing across customers. With that, let's hear directly from Sunil, who's using Snowflake Interactive Analytics at Franklin Templeton to empower humans and AI to collaborate effectively and drastically improve customer experiences. Hi, Sunil. Thanks again for joining us today. Could you please tell us a little bit about Franklin Templeton and your role?

Sunil Mathew
VP of Data Engineering and Business Intelligence, Franklin Templeton

Sure. I'm a VP of Data Engineering and Business Intelligence at Franklin Templeton.

Franklin Templeton is a global asset management firm that manages $1.6 trillion in assets, serving clients in over 150 different countries with over 75 years of history.

Jennifer Wu
Product Marketing Manager, Snowflake

Given the scale that you guys operate at and the sensitivity of financial data, I can imagine that managing all that data and ensuring governance is a big challenge.

Sunil Mathew
VP of Data Engineering and Business Intelligence, Franklin Templeton

That's actually one of the reasons we have been on Snowflake for the past seven years. It's very easy for our users to use, scalable, and comes with built-in security and governance that allows our team to spend less time managing the infrastructure and more time actually deriving insights and building applications on top of the data.

Jennifer Wu
Product Marketing Manager, Snowflake

We always say that Snowflake should be easy, connected, and trusted, and to hear that confirmation directly from our customers is definitely reassuring. Now let's talk about your call center analytics use case. Could you give us a quick high-level overview, please?

Sunil Mathew
VP of Data Engineering and Business Intelligence, Franklin Templeton

Sure. This use case supports our automated call center application, Genesys, which answers things like balance inquiries when customers call our toll-free numbers. For us, delivering a great customer service experience is a critical thing.

Jennifer Wu
Product Marketing Manager, Snowflake

For sure. I can imagine with the amount of customers you guys serve globally, as you mentioned earlier, 150+ countries, concurrency must also be an important SLA factor.

Sunil Mathew
VP of Data Engineering and Business Intelligence, Franklin Templeton

Exactly. Latency and concurrency and data freshness are the main challenges we were trying to solve here. We initially tried using standard Snowflake tables, but we couldn't consistently achieve the performance that we needed. We also explored exporting data into other systems to improve speed and scale, but that introduced a lot of additional complexity, like creating new pipelines, duplicating the data, and additional security considerations due to the sensitive nature of our client data. At a certain point, it just became too much operational overhead.

Jennifer Wu
Product Marketing Manager, Snowflake

Is that when you guys decided to look at Snowflake interactive tables and warehouses?

Sunil Mathew
VP of Data Engineering and Business Intelligence, Franklin Templeton

Yes. When we heard about Snowflake interactive tables and interactive warehouses, we wanted to try them out. Since our data was already in Snowflake, the transition was actually very straightforward. We moved the data into an interactive table and ran it on an interactive warehouse, and within a week, we finished the POC and development, and we were in production in around four weeks. No more operational nightmare, and data freshness is no longer an issue. Best of both worlds.

Jennifer Wu
Product Marketing Manager, Snowflake

That's so great to hear that Snowflake Interactive Analytics was able to solve the performance challenges that you and your team faced. I'm kind of curious, beyond performance, what else made Snowflake Interactive the right fit for you guys?

Sunil Mathew
VP of Data Engineering and Business Intelligence, Franklin Templeton

Absolutely. A big factor was simplicity. Going forward, for any application that has strict latency requirements, we would like to use an interactive warehouse. Along with that, all our data is already sitting on the Snowflake platform, and making it available right there for any application removes a lot of operational overhead and cost obligations. Because our call center operates globally, this application needs to be available 24/7, and we get the reliability of the Snowflake platform. When we saw that the interactive warehouse would deliver this performance while also being more cost efficient, around 40% cheaper than the other warehouses, we saw meaningful cost savings as well. Overall, we reduced architectural complexity, improved performance and data freshness, and lowered cost.

Jennifer Wu
Product Marketing Manager, Snowflake

Wow, that's actually really amazing to hear that you were able to get the best of both worlds, right? Previous solutions that you've tried, you kind of just got one or the other, right? Whether it's the latency or the concurrency or maybe cost. With Snowflake Interactive Analytics, you were able to get everything, so you didn't have to choose anymore. That's really amazing to hear. Again, thank you so much for taking time out of your day to share your story with us.

Sunil Mathew
VP of Data Engineering and Business Intelligence, Franklin Templeton

Absolutely. My pleasure. From the beginning, I had great support from the Snowflake team, and that's why I'm more than happy to share my thoughts on this Snowflake feature.

Jennifer Wu
Product Marketing Manager, Snowflake

Wonderful. All right. Now that we've heard from a customer and the benefits they saw, you might be wondering what's actually happening behind the scenes to make this all possible. To walk us through that, I'd like to hand it over to Will Xu, our Product Manager for interactive tables and warehouses.

Will Xu
Principal Product Manager, Snowflake

Thanks, Jen. Very nice to meet you all. The new solution we have built here is a pair of features that really accelerates interactive workloads. As you have seen earlier in today's session, when a lot of users or agents, either humans or AIs, are accessing a dashboard in a highly concurrent fashion, we're able to provide the kind of performance they need. In this case, we're introducing a new format, interactive tables. The tables are slightly bigger, but they have additional metadata indexes in them. The other new thing we're introducing is the interactive warehouse. It's a new compute layer. This compute layer understands how to leverage the additional metadata in the table to accelerate the queries you give it, and is able to achieve substantially higher performance and throughput compared to standard Snowflake for those kinds of highly concurrent, simple workloads.
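For context, the workloads described here are typically simple, highly selective queries issued over and over by an application or agent. A hypothetical example of such a serving query, with illustrative table and column names:

    -- Point lookup an application or voice agent might issue on every customer call,
    -- served from a table defined as an interactive table on an interactive warehouse
    SELECT account_id, current_balance, last_transaction_at
    FROM account_balances
    WHERE account_id = 1234567;  -- value supplied per request by the calling application

The query shape stays simple; what changes is how many of these arrive per second and how quickly each one must return.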

What's generally available to you today is the ability to load data from any kind of batch source, whether that's S3, other Snowflake tables, or Iceberg, and you can synchronize those data sources in batch into an interactive table and serve them. The other source, currently in private preview, is the ability to load data from a streaming source, such as Kafka or Kinesis. The cool thing about the streaming source is that you're able to ingest data and serve it with under one second of latency. With that, you can combine multiple data sources together in a single solution and serve them effectively on the same platform. In comparison to standard Snowflake warehouses, interactive warehouses are able to provide nine times higher concurrency.

That means to serve 200 users on interactive, you only need one warehouse, where with standard, you might end up with 10 or 20 warehouses. The other thing that's new with interactive is three times lower query latency. That means individual queries you run on interactive are substantially faster, and the dashboards or APIs are far more responsive compared to the previous generations. What's even more important is that we're able to serve this level of performance at 40% lower cost compared to standard Snowflake. This new warehouse and table format is not only faster but also cheaper. With this, I'd like to end today's session by saying that everything shown today is available. You can scan the QR codes or follow the links below to get started today.

All the documentation is compatible with the latest generation of AI coding agents and tools, and you're welcome to get started and try them out today.

Nick El-Rayess
Senior Product Marketing Manager, Analytics, Snowflake

Welcome everyone to Analytics Connect. I'm Nick El-Rayess, a senior PMM for AI-powered BI, and we are thrilled for you to be joining us from all over the globe today. Over the next 25 or so minutes, we're diving into the heart of the modern data architecture, the semantic layer. We'll explore Snowflake semantic views, our Semantic View Autopilot, and the Open Semantic Interchange, or OSI, the new standard that bridges your traditional BI dashboards and your next-generation AI agents. We have a packed agenda, including a live demo, a great conversation with SoFi, and an exciting fireside chat with Sofia and Josh Klahr. Let's get right to it. I'm excited to hand things over to Josh Klahr. Over to you, Josh.

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

Thank you, Nick. Good to see you. Hey, everybody, I'm Josh Klahr. I'm on the product management team here at Snowflake. I've worked in a bunch of different areas close to the Snowflake database. That includes our query language, data types, performance optimization features. For the past year or so, I have been spending almost all of my time on Snowflake semantic views. I'm super excited about this feature area. A lot of our customers are, but I thought maybe to share my excitement, it might make sense to talk about kind of the what and why of semantic views in Snowflake. If you kind of go back maybe 12, 18 months ago, one of the things that we were hearing from customers is that they really wanted to figure out how do they unlock kind of a text-to-SQL experience? How do they start talking to their data?

I think as the LLMs were kind of coming out, this was really kind of an exciting area. One of the things that became very clear, kind of we realized this in our early customer engagements, but it was clear in the market as well, is that if you set an LLM loose and you point it to a schema, and you say, "Answer this question," it's not going to do a great job. What the LLMs really needed was kind of a clear roadmap that said, "These are the right tables to query. This is the right calculation to run. If the user says these five synonyms, they all refer to this particular metric or dimension." Semantic views in Snowflake were born as a solution to that problem. That's when we started engaging actually with early customers like SoFi.

The team there was, I think, pretty progressive in thinking through kind of how do we go from our traditional BI experiences that are powered by Tableau, but at the same time, unlock a conversational experience with our data. Kind of that semantic model imperative was born.

The other thing that we heard from these customers, SoFi as an example, was, "I'm excited about this idea of unlocking talk-to-my-data experiences, AI-powered experiences on my data, but I'm also a little bit worried, because the thing that I don't want is for my CFO to ask a question of an agent, and then go look at a dashboard and get a different answer." With Snowflake semantic views, one of the early design decisions that we made, and I think it's paying off, is that Snowflake semantic views are metadata, a rich set of metadata that works for AI-powered analytics, agentic analytics, and text-to-SQL. The Snowflake semantic view interface also has a declarative layer.

You can say, "I want to select metrics and dimensions from semantic view." It is deterministic, it's always compiled, and it's guaranteed to give you the same result every time. Because we have this interface, this means that traditional tools that generate SQL, a BI tool like a Tableau, Streamlit app, as well as your AI-powered experience, can talk to exactly the same semantics. You're reducing this risk of kind of different people getting different answers. One of the kind of scenarios I talk about with customers is, hey, if you have two BI tools, then you have two chances of someone getting a different answer because the semantics are defined twice. When you have 1,000 people talking to agents, you have 1,000 different ways that they could get that answer back.

Having the semantics that is shared across AI and BI ends up being super critical. Fast-forward to today, I think the dialogue has changed from us saying, "Hey, customer, we think you may need a semantic layer for your AI" to everybody's coming to us and saying, "Hey, how do I get up and running quickly? How do I productionize my semantic assets?" We're making a huge investment in thinking through kind of what is the tooling we can provide to help customers go from zero to semantic view in as short a period as possible. We'll show some demos of this I think later today with Semantic View Autopilot.

One final thing that I'll touch on, which I think we're also going to talk about in the fireside chat, is that because semantics are becoming so strategic, one of the other things we've really been investing in with the ecosystem is an open definition for semantic models. This is what the Open Semantic Interchange, or OSI, set out to do. This is an open-source consortium of a bunch of industry players that have come together and said, "Hey, we know semantics are important, but we also know customers don't want to be locked in. They don't want to have a proprietary language for their semantic models." OSI is really an attempt to reduce that concern about lock-in or vendor specificity and provide a standard that allows you to create your semantics once and consume them anywhere. We're early in that journey, but our goal overall at Snowflake is to help customers get up and running, have the right semantics for AI and BI, and do so in a way that allows them to feel comfortable making this a strategic asset in their development platform. Okay, it's one thing to talk about this unified vision for BI and AI theoretically, but I want to share with folks what this looks like in practice and at massive scale. To help me with that, I'm thrilled to welcome Sreekanth Pendli and Krishna Pala.

Sreekanth Pendli
Senior Engineering Manager, SoFi Data Platform, SoFi

Thanks for having us.

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

You bet. It feels like we're back in one of our weekly check-ins, which is great. SoFi has really been an incredible partner. You've pushed the boundaries of what's possible with data, I think with Snowflake overall, and specifically kind of your early engagement with semantic views. Sreekanth, maybe you can take us back to, I think it was late 2024 when we started talking about kind of semantic views. You were in the market looking for a semantic layer. I'm curious, what were the challenges you were facing, and why do you think it was critical that you had this logic, this kind of semantic context, living natively within Snowflake versus a third-party silo?

Sreekanth Pendli
Senior Engineering Manager, SoFi Data Platform, SoFi

Yeah. Thanks again, Josh, for having us. For those who don't know SoFi, we are a fintech company, a member-centric, one-stop shop for digital financial services that helps members borrow, save, spend, invest, and protect their money. Being a one-stop shop means our member needs are constantly changing, so we move very fast. We hit a bottleneck that I think a lot of data teams in the industry also face: our AI agents, BI tools, and other downstream consumers were not speaking the same language. We desperately needed a single source of truth. When we say single source, we really mean a single source, because it's a nightmare when a sales leader asks an AI agent for a forecast and gets one number, and the analyst pulls a BI report and gives them another number. It's the same data, but different languages and different outcomes.

When we evaluated third-party tools, we realized that keeping the semantic logic outside the platform was going to introduce additional latency, scaling challenges, and governance risks. We didn't want another silo, as we already have enough. That's why we partnered with your team to co-develop semantic views directly within Snowflake, and it has been an incredible value add for SoFi and a great partnership. Thank you.

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

You bet. No, but thank you. It's awesome to see the progress, and I remember those early design discussions and partnerships really well; it felt like co-innovation to make sure we had the right features to make you successful. I remember adding some of the things that are core to the semantic view feature today, whether that's multi-path relationships, our deep support for Tableau, type two slowly changing dimensions, or data lineage. All of these things were, I think, influenced by the direction that you helped us set for semantic views. We know that building this kind of foundation takes a lot of work, and I've seen little pieces of it through our work together. I'm curious, Krishna, how has deploying semantic views in particular helped you to transform the SoFi data platform?

Krishna Pala
Engineering Manager, SoFi Data Platform, SoFi

Yeah. First of all, thank you so much for your partnership. We appreciate it very much. Coming to semantic views, they have completely transformed our architecture. By embedding semantic views into the SoFi data platform, we essentially created a unified, robust, and governed backbone that supports all the data consumption at SoFi. In my mind, the biggest win is that it serves two completely different workloads. On one side, it has turbocharged our self-service BI analytics, enabling our data scientists and analysts to pull data with total confidence that the metrics are standardized and accurate. On the other side, it gave us a strong, trusted foundation that is helping us safely deploy and roll out the gen AI features and platform capabilities that we are building at SoFi.

For example, whether an AI agent is answering a complex query or an executive or business leader is looking at a KPI on a dashboard, we know for sure that it is coming from the exact same governed semantic model, which further reinforces that single source of truth.

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

That's awesome. This is something that I hear from a lot of customers I talk to, this idea of how do we make sure we have consistency before intelligence? The last thing that you want is to start rolling out AI capabilities and have a user get an answer that feels wrong or doesn't match what they already understand to be true. Having that kind of governed foundation to make AI work in the real world is super important. Sreekanth, Krishna, I really appreciate the overall partnership. I think it's been a great collaboration, and thanks for joining me here today.

If folks have appreciated this discussion and want to learn a little bit more, I know we only got to talk for a few minutes, you can see our session at Summit, where SoFi is going to be talking in more depth about their experience with semantic views. Make sure you join us there. Once again, thanks gentlemen for joining me.

Sreekanth Pendli
Senior Engineering Manager, SoFi Data Platform, SoFi

Yeah. See you all at Summit. Thank you.

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

Awesome. I think that was a great summary of our partnership with SoFi and the success they're having with semantic views and AI and BI in Snowflake. One of the things we're learning as we're working with customers like SoFi is that building these models manually can be really time-consuming, and then maintaining them is also a huge amount of work as your business changes. You heard SoFi talk about how fast their business moves, and they need to be responsive. To address that, we've invested and launched a new feature in Snowflake called Semantic View Autopilot. Semantic View Autopilot helps turbocharge the creation of semantics in Snowflake, and then also uses agentic AI to support the ongoing evolution of these semantics within Snowflake. We think it's a great feature. We're getting great traction, great feedback from customers.

Next, we're going to dive in and actually do a live demo of Semantic View Autopilot. Okay, we've talked a lot, Nick, about semantic views, and hopefully we're doing a good job of sharing how important we think they are to unlocking AI-powered BI. One of the things that our customers are asking is, "Yeah, I like the semantic view idea. I want to get started. How do I do it as quickly as possible?" Today we're going to walk through one of our recently launched features called Semantic View Autopilot. What Semantic View Autopilot does is essentially allow you to go from zero to talking to your data, in this demo, let's hope, in about three minutes. I'll just give you a quick overview of what Semantic View Autopilot does.

Its intent is: how do I take the context that I already have in my business? If you think about your best data scientists, maybe you have Streamlit apps that you've built that are generating SQL queries. Those SQL queries contain a bunch of context that essentially says, "This is the way I want to analyze my business. These are the tables I want to join, the aggregates I want to run, the group bys I want to do." Those same semantics sit in your Tableau dashboards. If you open up a Tableau dashboard, you're going to see a description of the dashboard, as well as a bunch of information that says, "These are the tables, metrics, and dimensions." The same thing exists in Power BI.

What Semantic View Autopilot does is it really provides a way to harvest the existing semantic context in your organization and turn it into a semantic view as quickly as possible. To see this in action, let's start by choosing Power BI. I happen to have on my desktop a Power BI workbook, a .pbit file. This essentially packages up all of the logic that typically sits within a Power BI model: tables, relationships, metrics, dimensions. I'm going to add some sample values. I'm going to add some AI-powered descriptions out of the box, and click Next. Let's give this one a name and call it Analytics Connect. If you're a Power BI person, you can look at the schema here. This is actually going against everybody's favorite Microsoft schema, AdventureWorks.

When I click that button, what we're doing is we're ingesting that PBIT file. We're parsing out all of the table descriptions. We're parsing out field names. We're finding relationships. We're figuring out what are the things that look like metrics, what are the things that look like dimensions. We're parsing, again, if you're familiar with Power BI, we're parsing DAX queries, and trying to turn those into expressions that are generated in SQL. When this is done running, what I'm going to have is a semantic view that is ready for me to query in Snowflake Intelligence. You can see here I am in Snowflake Intelligence. It's generating my semantic view, so it's adding things like descriptions. It's adding sample values.

If I scroll through here, you'll see that it has parsed out a bunch of those tables that exist in the AdventureWorks schema, my association table, my sales table. This one has a bunch of metrics that are attached to it. I've got, again, these came from DAX, but they're now in Snowflake semantic view SQL. I have a full model here that was ingested based on that Power BI context. All of my relationships exist, and now I can start asking questions like, "Show me the top 10 product categories by sales volume." If you think about what's pretty magical here, I had no semantic view three minutes ago. Now I have those semantics persisted in Snowflake.

I can write a declarative query against this using our SQL language, but I can also use AI and either Snowflake Intelligence or Cortex Analyst to ask natural language questions about my semantic view. You can also see here that some descriptions have been added here. In the background, our Cortex AI has been generating customer descriptions. Let's open up and see if we have a metric here that we can look at. This contains the records of our calendar date. I've got a bunch of metrics that have been added. I think some descriptions have been added to these metrics. If you think about all of this rich context, all of this has happened in a matter of minutes, with Snowflake Semantic View Autopilot. We think this is super compelling for our customers. I think we've talked about this a lot.
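To make that concrete, a declarative query against a semantic view roughly follows the pattern sketched below. This is a minimal sketch that assumes Snowflake's documented SEMANTIC_VIEW query syntax; the view, dimension, and metric names are illustrative stand-ins, not the actual objects generated in this demo.

-- Minimal sketch; object names are illustrative, not the demo's actual objects.
SELECT *
FROM SEMANTIC_VIEW(
    analytics_connect_sv
    DIMENSIONS product.category
    METRICS sales.total_sales_amount
)
ORDER BY total_sales_amount DESC
LIMIT 10;

The same view can then be targeted by Snowflake Intelligence or Cortex Analyst for the natural language questions shown in the demo.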

How do we get up and running leveraging semantics as fast as possible? Semantic View Autopilot is one of the critical capabilities in making that happen. I'll also mention that everything you see here in the UI is also available as SQL in Cortex Code. Whether you're working at the command line with the CLI or you prefer the UI, those same capabilities are built natively into the product. Okay, Nick, hopefully that gave you a good sense of what we're up to with Semantic View Autopilot.

Nick El-Rayess
Senior Product Marketing Manager, Analytics, Snowflake

Josh, that was a great demo. Thanks for walking us through that. Next, we're shifting gears into a fireside chat on the future of semantics. I'm very excited to welcome Sofia, a Snowflake Data Superhero, alongside Josh, to dive deeper into where this is all heading.

Sofia Pierini
Senior Data Engineer, EY

Hi, everyone. Thank you, Josh. Thank you, Nick. Nice to meet you. I'm Sofia Pierini. I'm a senior data engineer at EY and a Snowflake Data Superhero, and, proudly, the founder and chapter leader of the Snowflake Italy User Group. Last November, I was at the Snowflake Silicon Valley Hub for the first Snowflake Data Superhero Council, and one of the most interesting sessions I attended there was actually with you, Josh, and you, Nick. We started a conversation around the evolution of the semantic layer, and I'd love to continue that here because it feels like things are really moving quickly in this space. Josh, one of the key themes we touched on was this gap between defining data and actually making it usable by all users.

Today, it seems to me that consumption is evolving; it doesn't just mean dashboards, for example, but also AI agents, copilot applications, and more interactive use cases. From your perspective, how does the semantic layer support this shift, and where does OSI fit into Snowflake's roadmap when it comes to enabling a fully connected ecosystem across BI and AI?

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

Yeah. Great question, Sofia. Good to see you again. I wish we had a fire for our fireside chat, but I guess we'll just get the blue screens. It's been a super exciting time in the world of the traditional semantic layer. I'm an old school business intelligence person, so I've always believed in this idea of having a layer that describes core business concepts, metrics, and dimensions. What we're seeing with the explosion of AI and text-to-SQL and agentic analytics is that semantics are more important than ever. A lot of the customers I talk to are BI teams or data engineering teams that are now being tasked with unlocking an AI-powered experience.

What they're looking for is a way to define semantics once, so that they can be used by their BI tools, but also their AI tools as well. The first concept we're hearing is this idea of define once, use in multiple channels, whether that's an app I'm building, a traditional business intelligence tool like Tableau or ThoughtSpot, or an AI-powered experience. Accessibility and openness end up being super important. Customers I talk to are really worried about, "Hey, I have this strategic asset. How do I create it once and use it many times?" They're also thinking about portability and future-proofing their investment.

Our goal with OSI, or Open Semantic Interchange, is to provide a vendor-neutral approach that allows customers to achieve that define-once, use-many concept, as well as interoperability across the ecosystem.

Sofia Pierini
Senior Data Engineer, EY

That's super cool. Another shift that I'm seeing is that the idea of having a single centralized semantic model is really starting to break down; organizations are probably too complex for that. Now, enterprise teams are looking for more modular building blocks that they can reuse, that can evolve independently, but that still stay governed and interoperable. My question is: how is Snowflake approaching this need for flexibility while still maintaining consistency and governance? I'm really excited to hear that.

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

Yeah. I bet you are. It's a hard problem, and I think you're right. In the early days of business intelligence, there was initially this move toward governed semantic models, where a single team would create monolithic models of the business. I think the self-service BI revolution changed that, and it was clear that you needed the domain expertise and flexibility to be in the hands of the people who know that part of the business. We're seeing the same thing with semantic views. There are a few design patterns we see showing up. I actually think this is still emerging; the jury is maybe still out on what the design patterns are, but one thing that is clear is that customers want this idea of composability.

This might mean having domain teams that model specific parts of the business, because they're the experts on finance or supply chain or customer support. You want your agent to be able to look across those domains, so you need to be able to compose a bunch of smaller models into a larger enterprise view. We also see very traditional use cases like conformed dimensions: I want to model my account dimension once, but reference it in all of those different analytic domains. This is really a big investment area for us: how do we deliver not just the openness and the design-once, use-many concept with semantic views, but also composability and reusability, so that each team isn't reinventing the wheel?

I would say it's evolving, but in my customer conversations, the number one request I'm getting is how do we get this idea of composability within our semantic modeling estate?

Sofia Pierini
Senior Data Engineer, EY

Yeah. Totally agree with you. Since November, I guess that's the direction, and we may also be moving beyond semantic intent in the traditional sense. It seems to me that now we could actually start talking about a real business ontology. Very big words, but for me this just means that we are not only defining metrics; we are really capturing relationships and meaning across the whole organization and every data asset in its ecosystem. For me, it's also a consequence of what AI now really needs. AI doesn't just need data; it seems to me that it needs understanding. How do you see Snowflake evolving in that direction, and what does that enable when it comes to AI use cases?

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

Yeah, I think maybe I misspoke if I said the number one customer conversation I have is around composability. Maybe the number one customer conversation I have is around ontology. Because we're talking about the semantics of the business, and people often think about ontologies as a representation of a business model, we see this convergence of questions around: is a semantic layer an ontology, or is it not? I tend to steer those conversations away from "do you need an ontology or not" and toward "what are the things that you need to solve for?"

In these discussions, what customers need, and what the emerging agentic analytics landscape needs, is a really clear context layer that describes not just the semantics of a domain, the metrics and dimensions, but also the relationships between those domains. How do I navigate from sales to customer consumption to product usage to customer support? What are the rules of that navigation? How do you define constraints, such as that a return can only happen if a customer made a purchase? How do you capture the unwritten rules of the business that say, in business unit A, a high-value customer is $5,000 and above, and in business unit B, it's $100,000 and above? All of these things end up being really, really important context for agentic analytics.

What we're seeing is that you can use the concepts of an ontology, which is kind of thing, verb, thing, relationship. You can use those concepts to start describing the roadmap that an agent might need to navigate through solving a complex analytical problem. What I'm very excited about is figuring out what is the new set of structures and contexts that are required, not just to ask a text-to-SQL question and give me a metric back, but to ask a business question and represent the rules and the ethos of that business in the way that the agent thinks about the data. This is really, I think, again, another really quickly evolving space, but it's an area where we're investing a lot.

We're working with a lot of early customers on exactly what the optimal data structure looks like for this kind of agent-driven analytics.

Sofia Pierini
Senior Data Engineer, EY

Yeah. That's really exciting to hear. Maybe zooming out, and I hope you agree with me: for years, companies have been investing in data platforms, but now it seems to me that the question is no longer "what can I do with the data," but "how do I extract intelligence from the platforms that people and companies are using?" My question for you, because you are a very experienced data person: what's the biggest shift that you're seeing, and what are Snowflake customers seeing, when it comes to the semantic layer over the next 12 or 18 months? Because it seems to me that this question is really urgent to answer.

It seems to me that at the end of the day, people really want to move from building data infrastructure to, finally, building data understanding. I really want to hear your point of view and your answer.

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

Yeah. It's funny. I'd say the velocity and the pace of change has been kind of breathtaking. Two years ago, people thought about semantic layers, and they thought about BI. Twelve months ago, the conversation started as, "Hey, I'm trying to do this AI thing. It's pretty clear that I need a semantic layer for text-to-SQL. Should I start investing now, and how do I get started?" Now it feels like a race. Every customer conversation I'm having is: I know it's strategic for me to build this semantic data foundation, how quickly can I get there? I think you're right. You really touched on something there. The conversations have moved away from what technical platform feature you have for data engineering task A or B, or what data catalog formats you support.

It's really: I know it's strategically imperative for me to have data understanding, and this context layer, this semantic layer, is the huge unlock. It really is driving a lot of conversations. The investment that we're making across the platform is helping customers figure out how quickly they can get high-quality context and semantics. Look at Semantic View Autopilot, which we demoed, or at what we're doing with skills in Cortex Code. Cortex Code now comes bundled with a whole host of skills: how do I mine my semantic model from my SQL queries? How do I improve it by adding verified queries? How do I maintain it over time? How do I do high-quality evals?

All of these things are becoming platform-native components that are helping our customers go from "interesting, maybe I need one" to "how quickly can I get up and running on an enterprise-grade context layer?" I think it's a super exciting time, and we're seeing great traction with customers. We're hoping this really helps them unlock their target outcome, which is, and I'm going to steal this term from you, that enterprise understanding of their data. I think that's really critical.

Sofia Pierini
Senior Data Engineer, EY

Yeah, that's really an exciting roadmap. For me, it's really exciting to hear, because it seems to me that before, the semantic layer sat only on top of the data, but now it's undeniably becoming the foundation of how an organization thinks, decides, and builds AI for tomorrow. Thank you. Thank you, Josh, for the conversation.

Josh Klahr
Head of Product Management for AI-Powered BI, Snowflake

Thanks, Sofia. It was great having you.

Nick El-Rayess
Senior Product Marketing Manager, Analytics, Snowflake

Thank you, Sofia and Josh. That was a really exciting fireside chat. A big thank you to everyone watching for joining us at Analytics Connect. Don't forget to check out our Analytics BI school for those deep dive tutorials. Don't leave just yet. We still have a lot to share with you in our next sessions. Stay tuned.

Jennifer Wu
Product Marketing Manager, Snowflake

Hi, everyone, and welcome. In this session, you'll see the latest geospatial capabilities now available in Snowflake and, more importantly, how they translate into real business impact. My name is Jennifer, and I'm a product marketing manager at Snowflake, focused on geospatial analytics. Joining me today, we have Andre from San Diego Airport, who will share how his team is using geospatial analytics in Snowflake. Becky, one of our solution architects, will walk us through a delivery routing solution powered by Cortex Code. Most organizations already have spatial data. Essentially, it's any data tied to location, like customer store addresses, IoT sensors, or delivery routes. In fact, over 85% of Snowflake customers are already storing this type of data today. What we typically hear from our customers, though, is that extracting insights from this data can be challenging.

Traditionally, geospatial analytics required separate tools and workflows, making it difficult to combine with your core business data and generate insights. With Snowflake, that barrier goes away. Since 2021, Snowflake has been building native geospatial capabilities, so spatial data becomes more than just coordinates. It becomes actionable insight that directly influences business decisions. Now, with our latest investments in AI, we're taking this even further. With Cortex Code, teams can accelerate the implementation of geospatial solutions. With Snowflake Intelligence and semantic views, business users can ask questions in plain language and get insights without needing to rely on data teams. Across industries, geospatial analytics shows up in very practical business decisions, from seeing where cellular coverage is weak to understanding delivery times and routes. These are all business questions that inherently rely on location data. All right.

Now let's jump into everyone's favorite part, hearing directly from our customers. Andre, thanks again so much for joining us today. Could you please give our viewers a quick introduction of your role and what makes geospatial data so crucial for airports?

Andre Bruckner
Business Intelligence Director, San Diego County Regional Airport Authority

Sure, yeah. I have the immense pleasure and privilege of serving as the BI Director, Business Intelligence Director, at San Diego International Airport, where my team, we try to help the organization make better decisions by turning a lot of the data that we have, and so much of that is geospatial data, into actionable insights. Now, one of the things that makes our work at an airport really special and super interesting is the fact that an airport is not just a single business, but really it's an ecosystem of many interdependent businesses and public sector functions. Think federal agencies, local agencies, state agencies that all operate together in what everybody who's watching the news these days can see is a very, very dynamic environment.

We're simultaneously managing parking operations, ground transport operations, rental cars, concessions, airport construction, airline operations, passenger movement, and then that coordination with our partners. Each one of those areas will have its own priorities and its own data. One of the things that was really important for us is to have a platform where we can connect all of those things and then integrate that geospatial understanding of that data as well, because all of those things, like passenger volume processing through a gate, has a very distinct geospatial component that then relates with similar geospatial data, like the movement of aircraft, arrivals, delays, all of those other things.

The other thing that makes it really interesting here at SAN is that, in comparison to some of our larger cousins up north, we have more than 200,000 aircraft operations annually. Last year, we served about 25.3 million passengers. Just being really efficient and effective, obviously, is super important for us.

Jennifer Wu
Product Marketing Manager, Snowflake

Yeah, for sure. I can imagine how complex things are, right? If weather, fog or something happens, that kind of has a ripple effect, right? It's also really interesting for someone like myself who travels quite frequently. I never realized how much actually goes on behind the scenes, and so I can definitely see why geospatial data is so important. Breaking down the silos between all those data sets you have into a consolidated view is so invaluable.

Andre Bruckner
Business Intelligence Director, San Diego County Regional Airport Authority

Mm-hmm.

Jennifer Wu
Product Marketing Manager, Snowflake

With that, do you mind diving a little bit deeper into how San Diego Airport is using Snowflake's geospatial capabilities today?

Andre Bruckner
Business Intelligence Director, San Diego County Regional Airport Authority

Yeah, sure. We started using Snowflake in August of 2024, after we did an evaluation process. One of the things that was really essential for us is that we had a good solution that allowed us to do geospatial analytics. One of the things I can show a little bit later actually came from a question about runway crossings that our operations team raised quite frequently. Anybody who's ever had a loved one arrive at an airport and wanted to know where they're at is probably familiar with solutions like Flightradar24 or ADSBExchange, or one of those aggregators. From those, we knew that we could get a lot of this positional data on aircraft operations.

We wanted a solution that allowed us to integrate that data internally and answer a lot of the questions that our operational systems maybe weren't able to answer out of the box, aggregated with some of the passenger-related information. We have a common use system that gives us boarding gate scans for individual flight operations. We have a system called Xovis that allows us to manage our wait times at the TSA checkpoints. We have lots of data on the concession sales that we're recording here. Connecting all of those data sets in a meaningful way just unlocks much more spatial and operational context, and it gives a much better understanding of the whole airport ecosystem and what we can do to make it operate more efficiently.

Jennifer Wu
Product Marketing Manager, Snowflake

I was also told that San Diego Airport is one of our first customers using our newly built solution for runway congestion operations. Could you tell us a little bit more about that one?

Andre Bruckner
Business Intelligence Director, San Diego County Regional Airport Authority

Yeah. I may have mentioned this before. One of the things that we were initially very interested in is the number of runway crossings. As I mentioned, congestion times, obviously not the greatest thing. What's even worse during a congestion or during one of those blocks of time where we are congested is if you end up with an aircraft that, say, has to cross the runway, and you can kind of see it on the diagram that you have on the screen there. We have a taxiway to the south of the runway and a taxiway to the north of the runway. If you end up with a lot of runway crossings during a very busy period of time throughout the day, obviously, you don't want to have an aircraft take off or try to land while another aircraft is crossing the runway. That's not safe.

Jennifer Wu
Product Marketing Manager, Snowflake

Mm-hmm.

Andre Bruckner
Business Intelligence Director, San Diego County Regional Airport Authority

Our operations team was very interested in having information on that, and we were very quickly able to, in working together with Alexey from Snowflake, build a solution that just shows us for a particular date and for various times throughout the day, how many runway crossings actually took place, which was a great win because, as I said, the current air operations database system that they have in place didn't really provide that for them. One of the interesting things about SAN, in addition to the fact, as you can see, we only have one runway here. With this solution already, when I mouse over this runway for yesterday, I can immediately see we had this many arrivals, this many departures, and how that relates to how we had scheduled things.

Then also getting some additional information in terms of our throughput capacity, which is really, really cool. The thing I want to show here is actually that this is a flight tracker solution. Being located right in the heart of San Diego, as you can see here's our beautiful airport. Here's the gorgeous, the best city of America, San Diego. We also have to be very cognizant of how we operate with respect to the people that live here. You have the community of Point Loma here. We try to, as much as possible, to be a good neighbor. The polygon that you see defined here is actually, I think, something that's called a noise abatement corridor. The idea behind that is that we don't want aircraft to turn before they exit that corridor.

I'm not an expert, but from what I understand, the wings actually act as a little bit of a noise shield. If the aircraft turns while it is on max thrust, it just creates all kinds of extra noise, which the people that live in these communities, like in Ocean Beach and Point Loma here, would probably not appreciate very much. One of the things that we can do with the solution, I've got a flight that I've selected here, and we can just very quickly see, okay, no problems with this aircraft operation here. They did not turn before that. Now, that's obviously on the visual side. Similarly, I love being able to draw pictures of things because it helps me understand things better.

The really cool thing about this, though, is that we can take these millions, hundreds of millions, billions of data points that we accumulate in a year on all of these airport operations, and then run very quick reports in order to see whether or not you had one of those vector changes that took place inside the confines of this corridor. I love this because, again, it makes us a better partner. One of the reasons why I switched into the airport space was because I wanted to do something. I don't know. I'm 6 foot 3, so air travel isn't the most fun thing for me. Most aircraft aren't designed for people of my height.

I thought, "Hey, maybe I can do some things that make the whole air travel a little bit nicer." You see here you've got the flight profile of this flight operation over time. Obviously, a departure. They took off very nicely, but you can track this information. You can also see that this aircraft was very nice. It stuck very close to our curfew. Between 6:30 A.M. and 6:45 A.M., they took off. No problems there. Other information. We have information on potential operations that took place outside of the 11:30 P.M. to 6:30 A.M. curfew timeframe. Just in terms of being a better neighbor, this is super cool.

If I wanted to get a little bit more information on how things are taking place here at the airport, maybe I want to zoom in, and we've just built a beautiful new terminal one. I highly recommend anybody who watches this to come here and enjoy the facilities there. I can mouse over on this here, and I can see that for gate 104 on 3/25, we have 8 scheduled flights with 1,066 passengers. I even get the hourly and plane passenger profile here in this neat little tooltip, and I can do more analytics on that if I wanted to. If I want to zoom out and take a look at our parking plaza here, I can see all of the relevant revenue information for that.

It's just such a nice thing to be able to have this one visual of our airport and be able to tie things together. We're building things out. One of our next opportunities is to put the concession sales in, or the concessionaire locations that are located on this end of the terminal in there as well. From there, it becomes a very simple exercise to just create these geo fences that then capture the actual number of passengers that boarded all of these gates in order to create an actual, real concession sales per relevant passenger. Because beforehand, what we were left with is that we would just end up taking the TSA throughput for the entire terminal. Which, if you're a passenger that leaves out of a gate somewhere over here, they're not currently active, but let's pretend like they are.

Well, we shouldn't really count that passenger volume against some concessionaire's performance over here, because that person is probably not going to make their way over there. That is super cool already, as far as I'm concerned, at least. The other thing that's relevant and important for us to understand is how our gates are being utilized. I mentioned earlier that our runway, in my view of the world at least, is kind of our assembly line. The gates, obviously, you can't have passengers getting on and off an aircraft very nicely, at least, unless you have the gates there. We really want to understand how those are being utilized as well.

Using that same geospatial data that I was talking about a minute ago, we can take all of those latitude and longitude coordinates, and since we know the geofences that make up the gate locations, we can figure out that for gate 103, for the date range selected, and I'll just pick one date here to make it a little easier to look at, this airline used that gate for about 535 minutes. They were able to process 9 flights out of there at an average time per flight of 59.4 minutes.
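To illustrate the pattern Andre is describing, a gate-utilization query might look roughly like the sketch below: a point-in-polygon join between aircraft position pings and gate geofence polygons using Snowflake's geospatial functions, followed by a dwell-time aggregation. All table and column names here are hypothetical, not San Diego's actual schema.

-- Illustrative sketch only; tables and columns are hypothetical.
-- Join aircraft GPS pings to gate geofence polygons, then measure how long
-- each flight occupied each gate.
SELECT
    g.gate_id,
    p.airline,
    p.flight_id,
    DATEDIFF('minute', MIN(p.ping_ts), MAX(p.ping_ts)) AS minutes_at_gate
FROM gate_geofences AS g
JOIN aircraft_positions AS p
    ON ST_CONTAINS(g.gate_polygon, ST_MAKEPOINT(p.longitude, p.latitude))
WHERE p.ping_ts::DATE = '2026-03-25'
GROUP BY g.gate_id, p.airline, p.flight_id;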

If we're talking about being able to better understand the efficient use of our resources, I cannot imagine any data that would be more interesting and relevant in order to make that sort of thing happen. That's another really nice thing. We also, because we are, I guess you could call us a little bit of a landlord in a way. We charge our airline partners for the use of our facilities because we have to recover costs. Well, in order to make that happen, the billing agreements are on the basis of the number of operations or the number of times that an airline turns an operation here at one of our gates or has a remain overnight, which just means that the aircraft stays here and hangs out until the next morning.

Again, if you have all of this geospatial data on those aircraft, you know when they entered your runway in order to depart or when they entered your runway in order to arrive, which allows you to very easily determine how many days and how many turns they were here. We can again say, oh, this airline had this many turns here on this particular date, which is super powerful, because anybody who's ever had to do any kind of reconciliation work for these kinds of billing statements knows how it goes: maybe we say that you had 203.

The airline says it was 209, and then everybody just stands around with their hands out like this, saying, "Well, which one was it really?" With this, I have literal GPS coordinates by registration number for those airlines that tell me exactly what ended up happening. I used to be a financial analyst. If I had had that in the old days when I did that sort of work, I would have been so giddy with excitement. This is just super cool. Let's go to the next page here. This is the page that really was the genesis, if you will, of all of this stuff happening. Here you can see this is our beautiful runway. No additional context information here, but you'll see these little hexagons that we're showing here.

Let me just figure out how to make this a little bit bigger. Okay, that worked. If I mouse over this hexagon, you can see that for whatever date range I have selected right now, we had three instances where our runway was being crossed, and the main direction of flow was from south to north. The top airline that crossed here was American Airlines, and it even gives me the information on the flights as well as, in the hourly crossings profile, the time frame during which those crossings took place. Anybody who actually manages this operation on the airside might be able to just look at this information and figure, okay, this happened in the middle of the night. It didn't happen during our early morning push.

Not a big deal that we had those 3 runway crossings. Here we had a couple more, and it looks like some of them might have taken place during the busy time. Again, this is the kind of information that we're getting from this geospatial data, which is super helpful, and it runs smoothly. It's just being refreshed every time new data comes in, which is really, really neat for us to have. Oh, this is the other one. I love this one because it's so colorful and so busy. Our Chief Development Officer, Angela, who's awesome, came up with this great statement. I don't know if she came up with it, but I heard it from her, that an airport is a construction site with an attached runway, and you can see that very nicely here.

What this shows you is the dwell time as well as the number of aircraft that are being captured. If you think of our operations, we've got our beautiful Terminal Two over here, and then we have Terminal One over here. In order to take off from SAN, what aircraft have to do is they have to drive along this taxiway until they get to the end of the runway, and then they fly off to whatever destination that they're going to. What you can see here is that rather than following the straightaway of Taxiway Bravo, they're taking this additional turn and deviation. Through the use of the system, we can actually see that there's a considerable amount of additional slowdown that happens as they're taking this turn here.

Previously, if you were to ask anybody, "Well, what's the impact going to be when we finish this construction? They can start driving straight again here." I'm sure that people would have been able to come up with very good, very intelligent numbers. Our operations people are fantastic. They really understand this stuff. But it's just so nice to be able to look at the data and actually be able to substantiate that information, again, with actual measurements of the before and the after state. Here we have this, it's a nice little chart that shows us the number of departures and arrivals.

Again, this is an aggregate that comes straight out of the geospatial data that we're analyzing here, where you just configure that a departure is an operation where an aircraft accelerated on our runway and then gained altitude, and an arrival is the exact opposite: an aircraft descended in altitude and decelerated, also on our runway. We're capturing all this data just using those GPS coordinates. What we're able to do, because Snowflake is so nice and easy to use when it comes to integrating information, and I'll give a shout-out here to Cortex Code and Coco, is quickly say, "Hey, take this information and correlate it with the number of departure seats that we have," knowing the airlines' aircraft configurations, as well as the TSA throughput that we have.

You can see here that we have all of this data integrated together. What you can do with that, and let's just zoom in here on this date, is plot out how many people we estimate, based on what we know about our load factors, to actually be in our airport right now. TSA opens here around, I think, 5:00ish, or something along those times. You can see that the TSA influx here in the morning starts, and so we had 3,400 people cross through TSA in the 5:00 to 6:00 hour. In the 6:00 to 7:00 hour, we actually started taking departing passengers out of here; we estimate that was about 3,100, and again, that's based on the operational data that we're getting from the system.

From that, we can then estimate how many people were actually in our facility. If we wanted to, we could break this down for each one of our terminals, and if we wanted to get really granular, even by terminal area, so Terminal Two east or Terminal Two west, which is super neat.

Jennifer Wu
Product Marketing Manager, Snowflake

That is really amazing. I feel like I can sense the excitement that you have for this new product through the screen, and it just makes me really happy to see our customers happy and able to do all these cool analytics that were previously very difficult to do. I learned so much today. Not only can geospatial data help with financial reporting and how to bill someone, it can also put an end to disputes between partners and the airport. I think that's already another use case that we can talk about. I just really want to thank you so much for your time and for going through this demo, and I look forward to checking out that new terminal at San Diego Airport next time I head over there.

Andre Bruckner
Business Intelligence Director, San Diego County Regional Airport Authority

You definitely should do that. Thank you. We are super excited to have this opportunity to collaborate with you guys on this and to be able to turn around a lot of these things quickly. We have reporting on runway friction, because aircraft, obviously, when they're decelerating or landing, have to be able to brake. Things that previously would have taken us maybe days or weeks, we can now stand up as a solution showing this data in an hour or two, which is mind-blowing.

Jennifer Wu
Product Marketing Manager, Snowflake

Yeah. That's insane. Time savings is always great. This way you have more free time to do other things.

Andre Bruckner
Business Intelligence Director, San Diego County Regional Airport Authority

Yes, exactly.

Jennifer Wu
Product Marketing Manager, Snowflake

All right. Well, thank you again so much. I hope you have a great rest of your day.

Andre Bruckner
Business Intelligence Director, San Diego County Regional Airport Authority

Same to you. Take care.

Jennifer Wu
Product Marketing Manager, Snowflake

All right. In the last few minutes, I'd like to turn it over to Becky for her to showcase another quick demo. Take it away.

Becky O'Connor
Senior Architect, Solution Innovation, Snowflake

Hello, my name is Becky O'Connor from the Solution Innovation team, and I'm going to present to you the fleet intelligence solution. Imagine your organization operates vehicles that make trips every day. As those vehicles move, they generate pings, for example, every 30 seconds, which is a very typical pattern for most moving assets. In many cases, the business needs to know when a vehicle will arrive at its destination, and the estimate changes continuously as the vehicle moves. The traditional approach is that every time a new location arrives, you call an external routing service to get an updated travel time and use that response in your application. You might show it to an end customer, provide it to a driver, or even trigger an action when the vehicle is getting close.

For one vehicle, that usually does not look expensive, but at scale, those calls add up quickly. Even on the low end, if one vehicle sends 2 GPS pings per minute over an 8-hour workday, that is roughly 28,000 pings per month, or about $30 per month in routing requests for just one vehicle. At 100 vehicles, that becomes roughly $3,000 per month. At 500 vehicles, it becomes roughly $15,000 per month. The first use case is straightforward. You want some kind of routing analysis, but at a much lower cost. One option is to run a routing engine inside of Snowflake. Instead of paying for an external API on every request, you are paying for compute, which can be a much more cost-efficient approach at fleet scale. There is a second use case. Some teams do not just want lower-cost routing.

They need very low latency. They need to react immediately as each new GPS location arrives. In those cases, even calling a routing engine inside a workflow can still add too much overhead. That is where the second use case becomes valuable. Instead of calculating the route from scratch every time, you pre-calculate travel times across the operating area by splitting it into hexagons and calculating the travel time and distance between them. Now, instead of asking a routing engine a new question on every ping, you already have a lookup table. If the vehicle is in one hexagon and the destination is in another, you already have an estimated travel time and distance. That gives you a much faster response for production use cases where latency really matters. There is a trade-off.

Smaller hexagons give you much more precise estimates, but they also create more pairs to calculate. Larger hexagons reduce precision, but they make the computation lighter. There are two valid use cases. The first is routing analysis in Snowflake at a much lower cost than external providers. The second is very fast, low-latency travel time estimation for time-sensitive delivery or dispatch operations, where pre-calculated hex-grid travel matrices are a better fit. When this solution is installed, which takes no more than 30 minutes with a simple Cortex Code skill, it does not just create a sample application; it installs a baseline routing capability into a Snowflake account, including the underlying tables, pre-computed data where relevant, and functions for distance and travel time calculations.
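As a rough illustration of the lookup pattern Becky describes, the query below maps a vehicle ping and its destination to hexagon cells and joins against a pre-computed travel matrix instead of calling a routing engine on every ping. It is a sketch under assumptions: the table and column names are hypothetical, the H3 resolution is arbitrary, and it uses Snowflake's H3 functions (such as H3_LATLNG_TO_CELL) rather than whatever the installed solution actually provides.

-- Illustrative sketch; hypothetical tables, columns, and H3 resolution.
-- Map the latest ping and the destination to hex cells, then look up the
-- pre-computed travel time and distance between those two cells.
SELECT
    p.vehicle_id,
    m.est_travel_minutes,
    m.est_distance_km
FROM latest_vehicle_pings AS p
JOIN hex_travel_matrix AS m
    ON  m.origin_cell = H3_LATLNG_TO_CELL(p.lat, p.lon, 8)
    AND m.dest_cell   = H3_LATLNG_TO_CELL(p.dest_lat, p.dest_lon, 8)
WHERE p.vehicle_id = 'TRUCK-042';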

You can start with the baseline solution immediately and then extend it with your own data to support analytics, operational workflows, and near real-time decisions made inside of Snowflake. Thank you.

Jennifer Wu
Product Marketing Manager, Snowflake

Thanks, Becky, for that amazing demo showcasing geospatial solutions via Cortex Code. Before we let you all go, if you're interested in testing out the solutions we showed today, scan these QR codes and you can get started. Thanks.

Ryan Rush
Senior Partner Marketing Manager, Snowflake

Hello, everyone, and welcome. I'm Ryan Rush, and I lead partner marketing at Snowflake for our SAP partnership. Today, we're going to unpack how this new partnership with SAP Business Data Cloud is helping customers get more value faster from their SAP data by combining it with full enterprise context in Snowflake and then activating it for advanced analytics and enterprise AI, all without replicating data. To do that, I'm joined today by Sanjay from our product team.

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

Thanks, Ryan. Hey, everyone. My name is Sanjay Nagamangalam, and I'm a product manager at Snowflake. I've been directly involved in building our new zero copy, bidirectional integration with SAP BDC.

Ryan Rush
Senior Partner Marketing Manager, Snowflake

Sanjay, this new partnership is getting a lot of attention. You've built this integration, you've worked closely with SAP product and engineering to do so, and we've already had leading customers like AstraZeneca and Siemens Energy directly involved and beginning to adopt. Now we're getting close to officially releasing this as two new product offerings, so every customer can take advantage. Just to recap those: first, for SAP customers that are new to Snowflake, there's SAP Snowflake, a solution extension for SAP Business Data Cloud; and for existing customers, there's SAP BDC Connect for Snowflake to unlock the integration. With that said, my first question to you, Sanjay, is: why should every enterprise using SAP be excited about this partnership?

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

Excellent question. When I think about why this really matters, Ryan, especially right now, I think about a few different things all happening at once. Number one, customers want to move faster from data to value, and they want to spend less time building engineering pipelines and infrastructure to manage their data movement. They want far more confidence in the data that they are using, and they absolutely need to use SAP data for higher value analytics and AI, not just reporting. This partnership between SAP and Snowflake delivers all of this. SAP Business Data Cloud is empowering SAP teams to leverage data products created from data across S/4HANA, SuccessFactors, and more, which are business ready. We are operating with a zero-copy architecture, which means there's no infrastructure or additional tooling to buy, configure, and maintain.

Governed SAP data products come with full business semantics from the get-go, from the start. In Snowflake, customers can combine SAP data with the rest of their enterprise data estate to create a fuller picture of the business, which is what is needed to deliver high value analytics and AI. Now, historically, SAP data has been incredibly valuable, but difficult to activate broadly across the enterprise. Together with SAP Business Data Cloud, we've solved this problem, and we are setting up customers to accelerate their AI transformation. The net result, of course, is for customers, it's better business outcomes without the pain points of plumbing, so to speak.

Ryan Rush
Senior Partner Marketing Manager, Snowflake

Yeah, that's an amazing point. Would you say the real value here is not just accessing the data, it's getting value faster with less effort?

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

Exactly. We've taken away the data pipeline infrastructure plumbing and replaced it with zero-copy data access in both directions. We've made this data available with rich business semantics and context in Snowflake. This means customers can enjoy data products shared from SAP Business Data Cloud that are ready to consume in their AI workloads right away. Our primary goal with this integration is to meet customers where they are and help them get their jobs done.

Ryan Rush
Senior Partner Marketing Manager, Snowflake

That's great. Thanks, Sanjay. I think that perfectly sums up why every SAP customer should be excited about this, and that leads to my next question. A lot of customers, they already have established ways of bringing SAP data into Snowflake, and they've been doing it for years. It actually was a big inspiration for us to build this partnership and allow more customers to take advantage of this proven pattern. Just want to ask, what's fundamentally different here with this new partnership? And can you share how this is different from the way customers have traditionally worked with SAP data in Snowflake?

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

Good question. The traditional model focused on moving raw data first and then manually reconstructing meaning from it later. Needless to say, this created a lot of downstream work, engineering, transformation, reconciliation, redefinition, data governance, data sanity, and all of this actually delayed customers from getting value from their data, which is more critical than ever with the drive to demonstrate AI value quickly. With this partnership, customers can get governed semantically rich SAP data products. This is especially important for AI, because AI depends on trusted business context, not just raw records. This reduces ambiguity, improves consistency, and makes it much easier to support trusted analytics and AI outcomes. The real difference is not simply less data movement. The real difference is the data has more meaning and is more trustworthy, and it's available to your AI workloads when they need it.

Ryan Rush
Senior Partner Marketing Manager, Snowflake

They don't have to move data, it's available instantly, and there's even more value from it. I mean, with that said, is it fair to say the semantics are the major enabler that's allowing customers to deliver not just higher value outcomes, but more trusted outcomes like in, say, analytics and enterprise AI?

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

Absolutely. In fact, that is a major advantage of SAP BDC and why this integration is so powerful. Now, it's not a matter of just getting data out of each SAP system like S/4HANA and SuccessFactors or Ariba, or even legacy ERP or BW systems. SAP BDC is giving customers a streamlined path to make both data and semantics available as data products that can then be shared with Snowflake via zero-copy. Customers get a unified semantic layer and a common understanding of what the data product actually means and the relationships between that data and other data products. In Snowflake, customers can blend this rich SAP data with its semantics with the rest of their enterprise data, which then gives AI, analytics, and all their business users not only a complete picture of the business, but the same picture.

Ryan Rush
Senior Partner Marketing Manager, Snowflake

Yeah, that same picture, that distinction, I think is really important, and we've heard that from many customers. They want their people, their AI operating on the same single source of truth. I view this as not just another new integration pattern, it's a better business context pattern. Now leading to my next question. I think at this point, it'd be really helpful for customers to actually see the experience, because once they see the semantics, the governed data products showing up in Snowflake, I think that's where the value becomes a lot more concrete. Sanjay, can you please show us what this looks like in practice?

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

I would love to. A few points here. First, there is no replication of data from SAP BDC to Snowflake. Next, the data products shared from SAP BDC to Snowflake are available in Snowflake as catalog-linked databases that contain externally managed Iceberg tables that essentially point to the data sitting in the SAP BDC object store. Snowflake only fetches the metadata about these shared data products. The actual data is fetched only when you query those tables in Snowflake. Lastly, when these catalog-linked databases are created in Snowflake, we also interpret the semantics of the shared data products via what they call the Core Schema Notation, the CSN metadata, to create semantic views in Snowflake.

This is what captures the business meaning and the semantics of these data products: facts, dimensions, KPIs, synonyms, relationships. This preserves the value of the SAP data products and is much more than just raw extracted data. The shared data and semantic views can be immediately used in analytical workflows and AI agents such as Cortex Analyst, and customers can begin from a trusted starting point and use existing capabilities such as RBAC in Snowflake to govern access to the shared data. Essentially, customers start from business-ready data, not from reconstruction work. I'll show you a few different tabs here. All right, I'll first go to the SAP BDC cockpit, and I'll share a data product to Snowflake.

Now as you can see, I'm in the SAP BDC cockpit, and there are a number of data products available for me to share with my Snowflake account. I've added a couple here to my favorites: the two here are Customer and Journal Entry. Now, sharing a data product with Snowflake is very, very simple. I click on the share button. I can click on available systems to look at my target system there. I select a system, and I click share. That's it. The moment I click share, this data product is immediately available for me to consume in my Snowflake account. Let me head over to my Snowflake account, and I can actually look at what this database looks like.

In order to consume the shared data product, I create a catalog-linked database in Snowflake, like I told you before. I can check the status of the shared data product here: it's been shared, and the data product is available as a catalog-linked database. I can look at schemas, and I can even look at the tables in the database. I can see that I have a number of tables here; these are the tables made available in Snowflake because they came from the data product. I can also open Database Explorer right here, and the customer database is available in the Horizon Catalog, ready for me to use right away. I can look at tables, pick a table, and look at the data in it.

All of this data, Snowflake is actually querying on demand. The data is still sitting in SAP BDC, and Snowflake is querying on demand. This data is available from the get-go for me to consume right away. Now, recall I also said that we create a semantic view. In this case, what I've done is I looked at the data from the database, and I created a semantic view in Snowflake that captures relationships and dimensions and facts, and KPIs, et cetera. Let me look at what that looks like. This is an example of a semantic view that I created from the shared data product called Customer. If you notice, it's a pretty standard semantic view. It also has things like comments, it has things like synonyms, and these are the key facts that are very, very useful for AI agents, right?
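To make that concrete, here is a rough, hand-written sketch of what a semantic view over these shared data products might look like. All object names, columns, synonyms, and metrics below are invented for illustration, and the exact clause syntax should be checked against Snowflake's semantic view documentation rather than copied as-is:

```sql
-- Illustrative only: a semantic view over two hypothetical shared SAP data product tables
CREATE OR REPLACE SEMANTIC VIEW customer_revenue_analysis
  TABLES (
    customers AS sap_customer_dp.data.customer
      PRIMARY KEY (customer_id)
      WITH SYNONYMS ('clients', 'accounts')
      COMMENT = 'Customer master data shared from SAP BDC',
    journal AS sap_journal_dp.data.journal_entry
      PRIMARY KEY (entry_id)
      COMMENT = 'Journal entries shared from SAP BDC'
  )
  RELATIONSHIPS (
    journal_to_customer AS journal (customer_id) REFERENCES customers
  )
  FACTS (
    journal.amount AS amount COMMENT = 'Posted amount per journal entry'
  )
  DIMENSIONS (
    customers.country AS country WITH SYNONYMS ('nation'),
    journal.posting_date AS posting_date
  )
  METRICS (
    journal.total_revenue AS SUM(amount) COMMENT = 'Total posted revenue'
  )
  COMMENT = 'Customer revenue analysis over shared SAP data products';
```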

I mean, given that I have this database here, I can actually do things like select from this table right there. As I run this query, this is what makes Snowflake query the data on demand and fetch the results.
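For example, a query along these lines (the database, schema, and table names here are hypothetical) is what triggers Snowflake to fetch the data from SAP BDC on demand:

```sql
-- Hypothetical names: the Customer data product exposed through a catalog-linked database
SELECT customer_id, customer_name, country
FROM sap_customer_dp.data.customer   -- externally managed Iceberg table; data stays in SAP BDC
LIMIT 100;
```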

Ryan Rush
Senior Partner Marketing Manager, Snowflake

Yeah, that's fantastic. Having all of that set up, or maybe you did it behind the scenes, but you didn't build infrastructure, you didn't move data, you didn't reconstruct the business meaning, semantics, or KPIs, plus you get the benefit of governance built in from the start. All of that's incredible. We've talked about this before, but what's also really valuable is that customers can focus immediately on getting value from that data in Snowflake. My next question, for you to demo a little bit more: with Snowflake and SAP Business Data Cloud coming together, you have this unified foundation. How does that change things for the business, so more people can access data, derive insights, and make better decisions faster? Maybe you can show us that.

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

Excellent question, Ryan. Now that the plumbing is essentially done for you, let's see what you can do with that data. I'll do a very quick demo of a use case. Imagine, if you will, that I am a business analyst, right? I want to analyze some KPIs from my customer and journal entries data that was sitting in SAP S/4HANA. Now, recall I said that SAP BDC makes data available from downstream systems such as SAP S/4HANA as data products in BDC. Those are the two data products that I shared earlier, right? Imagine, if you will, I want to understand, let's say, who are the top-performing five customers with the most transactions and highest revenue, right?

Now, like I showed you in the earlier demo, the data from customer and journal entries in S/4HANA is available as curated data products, thanks to BDC. I use the zero-copy integration to share those two data products with Snowflake as catalog-linked databases, and I also have semantic views from those two data products. Here's what I can do next: I can create a Cortex Analyst tool, which can then be wired up to the semantic view that I just created. Let me give you an example of that. I'll go to my Agents tab, and I have a Cortex agent here. It has one tool, the Cortex Analyst tool, and it's wired up to that semantic view, the customer revenue analysis semantic view that we created earlier, right here.

The Cortex Analyst tool is wired up to the semantic view, and the Cortex Analyst tool is then used by the agent. Now my agent is ready to be consumed in tools such as Snowflake Intelligence. Let me fire up Snowflake Intelligence here and ask it my question: who are my top-performing five customers with the most transactions? Snowflake Intelligence is, of course, analyzing my request, but it's using the agent that I created, which is using the Cortex Analyst tool I created, which in turn is using the semantic view wired up to it and figuring out the business relationships between those two data products that I shared. In fact, it's showing me the SQL that it's going to run. Let it think for a minute. Great.

It's inspecting the values that it received, and recall, it's actually joining data on the fly from these two tables across these two data products in Snowflake, all via Zero Copy. No data is being moved at all. All of this is being queried on demand. Great. It gave me a nice table there. It's actually telling me which countries and which states and things like that. It also created a visualization for me, right? Because one of my hints was, could you create a bar chart for me? I mean, imagine this.

All the way from zero-copy data sharing from SAP BDC to Snowflake, creating catalog-linked databases, creating the semantic views on top of those catalog-linked databases, wiring up that semantic view to a Cortex Analyst tool, and then to an agent, and consuming it in Snowflake Intelligence in about 3 minutes. That's what the demo was all about.
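For reference, the SQL the agent generates for a question like this would be roughly of the following shape. All table and column names are invented; the real SAP data product schemas will differ:

```sql
-- Illustrative shape of the generated query: joining the two shared data products on the fly
SELECT
    c.customer_name,
    COUNT(j.entry_id) AS transaction_count,
    SUM(j.amount)     AS total_revenue
FROM sap_customer_dp.data.customer     AS c
JOIN sap_journal_dp.data.journal_entry AS j
  ON j.customer_id = c.customer_id
GROUP BY c.customer_name
ORDER BY total_revenue DESC
LIMIT 5;
```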

Ryan Rush
Senior Partner Marketing Manager, Snowflake

That's amazing. Thank you, Sanjay, for walking through setting up the integration and putting the data to work in Snowflake. I have one more question for you just to wrap up this session, because there are always questions about how this works under the hood. For many customers, that's the next level of understanding for this integration. Maybe you can do a quick dive into how this integration is built from a technical perspective.

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

Fair point. Let me talk about both directions, SAP BDC to Snowflake and Snowflake to SAP BDC. Both directions are zero-copy. First direction. Essentially, the story begins with establishing a trust relationship between SAP BDC and Snowflake by creating the catalog integration that I was telling you about in Snowflake, and then enrolling it with SAP BDC. Once that trust relationship is established, Snowflake talks to SAP BDC for vended credentials to access the file store in their data store. Now, back in the BDC cockpit, like I showed you, the customer shares the desired data products from BDC to Snowflake. In Snowflake, for each shared data product that you wish to query or consume, you simply create a catalog-linked database. This essentially does two things under the covers.

It creates the catalog-linked database, of course, but it also creates what we call externally managed Iceberg tables within the catalog-linked database that point to the data sitting in SAP BDC. It refreshes metadata; no data is actually replicated. It also creates a semantic view from the Core Schema Notation metadata. At this point, the data products from SAP BDC shared with Snowflake are ready to query in Snowflake, like I showed you in the demo. The cardinality is one shared data product from SAP BDC equals one catalog-linked database in Snowflake. That was the first direction. The second direction, from Snowflake to SAP BDC, essentially involves three high-level steps. In Snowflake, customers start with a database that they wish to publish as a data product in SAP BDC.

The requirement, of course, is that the database should have Iceberg tables for the data you want the data product to be composed of, for example, enriched data from Snowflake that you wish to activate or query in the SAP BDC ecosystem, say through Joule. That's step number two. Step number three is that you call the SQL constructs that we provide to publish this database as a data product to SAP BDC, which then does two things. It first calls the SAP BDC APIs to publish the database from Snowflake and make it available as a data product in the SAP BDC cockpit. Imagine one more tile in the set of tiles that I showed you.

The second thing it does is create the Core Schema Notation, the CSN metadata, on the Snowflake side for this particular data product, and then it calls the SAP BDC APIs to publish the CSN into SAP BDC alongside the data product. At the end of this exercise, the database from Snowflake is published as a data product on the BDC side, along with a Core Schema Notation description of the metadata, which includes things like relationships, dimensions, and facts, the same as before. Essentially, zero-copy on both sides, and semantically rich on both sides. At the end of these steps, the data product published from Snowflake is ready to be consumed in the SAP BDC ecosystem like any other custom data product. The cardinality is the same: one database in Snowflake equals one published data product in SAP BDC.

The TL;DR here is that customers should think of this as a governed consumption model, not just data movement. Performance, of course, depends on usage patterns, architecture, region, and data profiles. Large-scale scenarios should be evaluated with the customer's actual workloads and use cases in mind. The key is aligning the architecture to the use cases and the value target. This model is about reducing unnecessary movement and improving how customers consume high-value data more seamlessly. Back to you, Ryan.

Ryan Rush
Senior Partner Marketing Manager, Snowflake

That's wonderful, Sanjay. I think you answered 100 different questions in that explanation and walkthrough of how this all works, and of course, zero-copy in both directions. Well, Sanjay, it's been a pleasure talking with you. What I hope customers take away is that we've designed this partnership with SAP Business Data Cloud and the zero-copy integration so customers can shift their focus from managing data movement, building infrastructure, and maintaining it to activating their mission-critical SAP data in Snowflake within the context of the rest of their enterprise data. They can immediately focus on getting value from that data for analytics, for AI, and more, just like the example you showed: the ability to talk to the data, have these models completely understand what it means, and get incredibly helpful and fast answers to make better decisions.

I also wanted to quickly plug the e-book about this partnership that we launched today. If you want to go deeper, go ahead and grab that and check it out. Thank you again, Sanjay, for being here.

Sanjay Nagamangalam
Senior Manager, Product Management, Snowflake

Thanks, Ryan. Thanks, folks.

Vino Duraisamy
Developer Advocate, Snowflake

Hey, everybody. Thanks for joining this webinar today. I'm going to talk about how you can power your analytics and AI workflows on your lakehouse with the simplicity of Snowflake. I am Vino Duraisamy, a Developer Advocate at Snowflake. As you heard earlier today, whether your data is already in Snowflake, you want to move your data into Snowflake, or you want to work off of data that lives outside of Snowflake, regardless of where your data is, you can leverage Snowflake's powerful vectorized compute engine to run your analytics and AI workflows.

Now, the focus of today's session is on data living in a lakehouse external to Snowflake, and how you bring the power of Snowflake over to that data. Before we dive into that, let's take a quick look at the reality for data teams today. We all want the ideal scenario: data in a unified, governed layer that different teams can access for different workloads and use cases. Life is good. However, that's not the reality for most data teams today. Data is stored in many different sources, like cloud storage, siloed across the data estate all over the place. Data is literally everywhere, right?

We, as data teams, end up scrambling to find the right data for different use cases. Nobody is working off of one unified definition or copy of the data, so there's a lot of redundancy, missed SLAs, and a lot of effort to bring it all together before you can really benefit from the data and use it to derive business insights. Solving the data silo problem and creating that unified layer of data is where Iceberg came in. What does that mean? Even if your data is spread across different cloud storage locations in your data estate, a lakehouse, specifically one built on Apache Iceberg tables, lets you work with all of that data as if it were unified in one place.

Iceberg is that unifying layer on top of all of the data spread across your data estate, so different data teams can work off of one unified layer of data. Now, what exactly is an Iceberg table, and how does it help achieve that unifying fabric for your data lakehouse? If you look at the structure of an Iceberg table, first it has data files in Parquet format. Most analytics workloads are columnar, so Parquet, being a columnar format, helps make your queries faster. The Iceberg table also has metadata that essentially answers, where does a specific piece of data live? Say I want to query the revenue in 2025 for the North America region.

You need to know exactly which data files to read to answer that query, and that is possible with the metadata. Now, the catalog itself is not part of the Iceberg table, but catalogs work with Iceberg tables: they point compute or query engines to where the current table metadata, and in turn the data files, are. You will get a better understanding of this when we see an example. Essentially, the raw data files are in Parquet format, the metadata sits on top of them, and the catalog helps the query engine find the right data to run a specific query.
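To make that structure concrete, a single Iceberg table on object storage typically looks something like this (bucket and file names are illustrative):

```
s3://donut-demo-data/donut_orders/
  data/
    00000-0-a1b2c3.parquet           -- columnar data files
    00001-0-d4e5f6.parquet
  metadata/
    v2.metadata.json                 -- current table metadata: schema, partitioning, snapshots
    snap-8712345678901234-1.avro     -- manifest list for a snapshot
    a1b2c3-m0.avro                   -- manifest: which data files belong to the snapshot, plus stats
```

The catalog's job is simply to record which metadata file is current for each table, so any engine that asks the catalog can find the right data.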

Because of this, firstly, it is engine-agnostic, meaning any query engine, like Snowflake, Spark, or Trino, can query the data in Iceberg tables through these catalogs and metadata. It's highly performant, because you can bring the power of these query engines, like Snowflake's vectorized query engine, to the data and get high concurrency even at petabyte scale. And it is community-driven: many vendors have been investing in the Apache Iceberg table format, so there is no vendor lock-in at all. Now, the lakehouse came to solve the problem of data being scattered across the estate and to build a unifying layer across that data, but it is not without its own challenges, right?

Implementing a lakehouse is sophisticated and can get complicated, and it comes with its own pitfalls. Say, for example, I want to implement a security control, like RBAC or a data masking policy. Where do I implement that policy? Should it be at the file level? At the metadata level? At the catalog level, or in the query engine? Because there are now these different layers within the Iceberg stack, and different engines and different catalogs implement these controls in different ways, it becomes cumbersome to figure out a unified way to implement security and data sharing on top of it.

On top of that, there are missed SLAs: if different teams are working from different query engines with different business definitions, you get missed SLAs and, I want to say, a loss of trust and communication too, because there's no unified definition of what we are looking for in the data. And, like I said, if you have different catalogs and different query engines, they all operate differently: how is this query engine implementing RBAC versus how are the Iceberg tables implementing RBAC with a different catalog? The sheer number of tools, catalogs, and query engines makes the lakehouse implementation complex to manage. Now, how does Snowflake solve this implementation problem, this complication problem?

Snowflake has historically been known for its simplicity and for bringing simplicity to operations, whatever they may be. So how is Snowflake bringing the same principles of simplicity to lakehouse analytics? It's built on three main pillars: access, performance, and empower. Access is about securely unifying all of your data in one place without having to move it. That's the core premise of this whole conversation: how do you unify and secure all of the data in your lakehouse without moving anything anywhere? Snowflake helps you with that. The second pillar is performance. As the number of users grows and the terabytes of data keep going up, how do you make sure performance doesn't degrade?

Snowflake's vectorized compute engine has been battle-tested over years of running real-time and production systems. How do we bring the performance of Snowflake's compute engine over to your data in the lakehouse to give you that performance and reliable concurrency, even at the petabyte scale we're talking about? And in this process, how does Snowflake empower data teams to build not just demo products, but robust, production-grade, enterprise-ready data products? Now, you have those three pillars, but what do they really mean in your data stack?

If you have Iceberg tables managed by Snowflake, wherever they may be, or Iceberg tables that are externally managed by any Iceberg REST-compatible catalog, or data in another format like Delta, regardless of where your data is, who manages it, and where it's stored, you can work off of that data from Snowflake. What does Snowflake give you? Like I said before, it gives you high concurrency, even at the petabyte scale we're talking about. The elasticity of the compute engine is well known: you can scale compute up and down as you need for a specific query, without human intervention.
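A minimal sketch of that elasticity, assuming a multi-cluster warehouse (the name, size, and limits below are arbitrary):

```sql
-- Scales out to extra clusters under concurrency and suspends itself when idle
CREATE WAREHOUSE IF NOT EXISTS lakehouse_wh
  WAREHOUSE_SIZE    = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY    = 'STANDARD'
  AUTO_SUSPEND      = 60      -- seconds of inactivity before suspending
  AUTO_RESUME       = TRUE;
```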

How do you also provide isolation between all these different datasets? Sometimes it's important for one team to not be able to see another team's data. I work in marketing, for example; I shouldn't see the finance data. How does Snowflake give you that isolation between teams through RBAC, tagging, or masking features? Again, how does Snowflake implement these access, performance, and empower pillars in the product? By providing features such as role-based access control, tagging to mark highly sensitive columns and data, and masking policies on specific columns, such as PII data.

Through these granular masking, tagging, and role-based access control capabilities, Snowflake helps you with access and performance and empowers data teams at scale. This part isn't new; we've seen it before. The one pillar I want us to look at is Semantic View Autopilot. What we saw before was primarily focused on the analytics side of the world. For the AI side, it's not enough to build a unified, secure layer of data on your lakehouse. On top of that unified layer, you also need to build the shared business context for all of the query engines and all of these catalogs, so they know what these business definitions mean.
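As a sketch of the kind of governance being described here, standard Snowflake policies and grants can be layered on top of these tables. All names below are hypothetical, and whether a particular policy can attach to an externally managed Iceberg table depends on your setup, so treat this as illustrative rather than definitive:

```sql
-- Hypothetical role and masking policy for a PII-style column
CREATE ROLE IF NOT EXISTS marketing_analyst;

CREATE MASKING POLICY IF NOT EXISTS mask_email AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('FINANCE_ADMIN') THEN val ELSE '***MASKED***' END;

-- Attach the policy to a sensitive column, then grant read access to the analyst role
ALTER ICEBERG TABLE lakehouse_db.sales.orders
  MODIFY COLUMN customer_email SET MASKING POLICY mask_email;

GRANT USAGE  ON DATABASE lakehouse_db                     TO ROLE marketing_analyst;
GRANT USAGE  ON SCHEMA   lakehouse_db.sales               TO ROLE marketing_analyst;
GRANT SELECT ON ICEBERG TABLE lakehouse_db.sales.orders   TO ROLE marketing_analyst;
```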

Without an agreed-upon shared business context, the different teams can't really make good use of that unified, secure layer of data, because revenue could mean different things to different teams, right? This shared business context is important, and semantic views give you that shared business context. With Semantic View Autopilot, Snowflake gives you a head start by generating these semantic views for you, turning your data schema into shared business logic that you can use across the entire data organization. Now, with that said, this is a very interesting benchmark.

As you can see, over a period of less than a year, Snowflake has been improving query performance on Iceberg tables, whether externally managed or Snowflake-managed, closing the gap with Snowflake's own native tables. Performance nearly doubled, if not more, in both cases. Now, I know we talked a lot, but the key takeaway from this entire conversation is that it does not matter where your data is. Even if your data is external to Snowflake, sitting in cloud storage, you can use the power of Snowflake's vectorized compute engine to query and work with that data without having to move it at all, while the security and governance boundary is still in place.

You can keep those security and governance boundaries on data that is stored in external cloud storage. Now, let's go into the demo. I'm going to show you exactly what we just talked about. Imagine a donut company with donut orders data sitting in Iceberg tables in S3, and an Iceberg REST catalog, AWS Glue in this case, that manages those Iceberg tables in S3. On the Snowflake end, we're not going to move the data. I'm going to create a catalog-linked database that lets me point to the data in S3 and work off of it. We're going to run a bunch of analytics queries on top of the data and see what life looks like on the other end.

That's the demo flow. Before we get into the demo, I want to talk about the one specific feature of Snowflake that allows us to do this: the catalog-linked database. It's a bidirectional connection to Iceberg tables that live outside Snowflake, specifically tables managed by a remote Iceberg REST catalog. What does that mean? By creating a catalog-linked database, you create a database in Snowflake and point it at a specific catalog that manages all of your Iceberg tables. How does that help? Without a catalog-linked database, you have one Iceberg table in S3, you register it with Snowflake, and now Snowflake knows how to talk to it and query the data for you.

Another Iceberg table, another registration, and Snowflake knows how to query that one too. But in our data teams, we have tens if not hundreds of Iceberg tables; you can't keep doing this for hundreds of tables. To simplify that, the catalog-linked database points to your Iceberg REST catalog, and every Iceberg table managed by that catalog becomes available to work with from Snowflake directly. It's one quick setup that simplifies the experience. Now, with that said, let's dive right into the demo. I want to show you where my data is: it's in an S3 bucket. This is what my bucket looks like: donut demo data, Iceberg tables. I have donut orders and donut products, and you can see the metadata, the JSON and Avro files.

We also see the data in Parquet format, since it's columnar. Great, so I have two tables. Let's go to the Glue side of things. Within Glue I have four databases; under Donut DB, I can see the same two tables registered as Apache Iceberg tables on the Glue side as well. Life is good. I already have this data in AWS, so I'll log into Snowflake and connect to those two Iceberg tables that we saw. How do we do that? It's a three-step process. First, connect to those Iceberg tables in S3: this is the S3 path, and I create an external volume to point to the Iceberg data in S3. Just the data itself, not the catalog.

Create an external volume to point to the data. The second step is to create a catalog integration to point to the Iceberg REST catalog, which in our case is AWS Glue. Once you've connected to the data and to the catalog, the third step is to create a catalog-linked database, passing in the catalog integration and the external volume you just created. That's it. In a three-step process, we've essentially allowed Snowflake to talk to your data in cloud storage. How cool is that? Now, let's go to my... oh, okay, I already have it here. In the Database Explorer, I want to see if my data is already here, which it is. As you can see, this one is a catalog-linked database.
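Roughly, those three steps look like the following in SQL. The exact parameter names vary by catalog and authentication type, so treat this as a sketch to compare against Snowflake's documentation rather than something to copy verbatim; all bucket names, ARNs, and identifiers are placeholders:

```sql
-- Step 1: external volume pointing at the S3 location that holds the Iceberg data
CREATE OR REPLACE EXTERNAL VOLUME donut_vol
  STORAGE_LOCATIONS = ((
    NAME                 = 'donut-s3'
    STORAGE_PROVIDER     = 'S3'
    STORAGE_BASE_URL     = 's3://donut-demo-data/'
    STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::111111111111:role/snowflake-access'
  ));

-- Step 2: catalog integration pointing at the Iceberg REST catalog (AWS Glue here)
CREATE OR REPLACE CATALOG INTEGRATION donut_glue_catalog
  CATALOG_SOURCE = ICEBERG_REST
  TABLE_FORMAT   = ICEBERG
  REST_CONFIG = (
    CATALOG_URI      = 'https://glue.us-west-2.amazonaws.com/iceberg'
    CATALOG_API_TYPE = AWS_GLUE
    CATALOG_NAME     = '111111111111'
  )
  REST_AUTHENTICATION = (
    TYPE                 = SIGV4
    SIGV4_IAM_ROLE       = 'arn:aws:iam::111111111111:role/snowflake-access'
    SIGV4_SIGNING_REGION = 'us-west-2'
  )
  ENABLED = TRUE;

-- Step 3: catalog-linked database covering every Iceberg table that catalog manages
CREATE DATABASE donut_db
  LINKED_CATALOG  = ( CATALOG = 'donut_glue_catalog' )
  EXTERNAL_VOLUME = 'donut_vol';
```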

That's why it has the hyperlink-like symbol, as opposed to the other databases that don't. Within the donut database, I see the two tables. Donut Orders: let's see the columns. Looks great. Similarly, what do the Donut Products look like? Awesome. Let's explore these tables and maybe run some analysis and see what life looks like. Now I'm able to see all of the data from AWS S3 as Iceberg tables here in Snowflake. As you can see, these are just linked tables; they are not copies of the data in Snowflake. We did not move the data at all. The data still lives and breathes in your AWS S3 bucket in the Iceberg table format.
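A couple of the kinds of queries you might run at this point (database, schema, and column names are hypothetical):

```sql
-- Peek at a linked table; the data is read from S3 on demand
SELECT * FROM donut_db.donut_db.donut_products LIMIT 10;

-- Quick sanity check on the orders table
SELECT COUNT(*)        AS order_count,
       MIN(order_date) AS first_order,
       MAX(order_date) AS last_order
FROM donut_db.donut_db.donut_orders;
```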

I also ran a bunch of queries just to quickly see what the data looks like in each of these tables. Let's run through them. These are my donut products: it seems like I have 10 products with different attributes, price, calories. Okay, great. And this is the order data: quantity, customer, location. Looks like it's all good. We saw how we can run exploratory queries on top of the data in that catalog-linked database. Now, what if you could talk to your Iceberg tables in natural language? I mean, SQL queries are fun, but SQL is SQL, you know?

Well, you can do that now, thanks to Snowflake Intelligence. In that AI agent, you can just go and ask, what is the most popular donut in terms of sales? As you can see, Snowflake Intelligence is working through the request, and you can even see what it is doing: it's analyzing the request, and it listed the steps on what it needs to do. Guess who wrote the SQL for you this time, so you don't have to write it yourself? Cortex Analyst, of course, is doing the text-to-SQL. It ran the SQL, generated the results, and it looks like the strawberry frosted donut is the most popular one based on sales. Isn't this cool?
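Behind the scenes, the SQL that Cortex Analyst generates for a question like that would be roughly this shape (again, table and column names are hypothetical):

```sql
-- Illustrative text-to-SQL output for "what is the most popular donut in terms of sales?"
SELECT p.product_name,
       SUM(o.quantity) AS units_sold
FROM donut_db.donut_db.donut_orders   AS o
JOIN donut_db.donut_db.donut_products AS p
  ON o.product_id = p.product_id
GROUP BY p.product_name
ORDER BY units_sold DESC
LIMIT 1;
```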

Now tell me, what are the top 10 popular ones, or how does that change over time? You can ask whatever follow-up questions you want, and Snowflake Intelligence will use either Cortex Analyst, if it's a structured question, or Cortex Search, if it involves semi-structured or unstructured data, to work through your request and answer your questions. Now, I could go on about how you create these agents. For example, I created a donut revenue agent; you can see it here: the name of the agent and how it works. You can even test it out while you're creating it, and you can add instructions on how the agent should respond to your queries, like be concise and present data clearly.

You can even give instructions like present the results with a graph or chart whenever possible, really making it easy for your business users to talk to the data in the lakehouse. You also get monitoring: who's querying these agents, how are they querying them, and what are the popular questions? Evals are available too. We haven't run evals on this one yet, but you can run them to see where the agent is getting things wrong and why, trace the errors, and improve your agent over time. There's so much more you can do, but that's the key takeaway. And, wow, okay, we do have the top 10 most popular donuts. Strawberry frosted, obviously, we knew. Boston cream, maple bar. I am not surprised.

Now, let's take a quick step back and review what we've seen so far in this demo. We had a couple of Iceberg tables, donut orders and donut products, stored in AWS S3 and managed by the AWS Glue Catalog. Because Glue is one of the supported Iceberg REST catalogs, we were able to create a CLD, a catalog-linked database, from Snowflake to connect to that catalog and, in turn, those Iceberg tables. On top of those Iceberg tables, we first ran analytics and a bunch of aggregations. We also used AI on top of it, Snowflake Intelligence on top of your lakehouse data, so your business users can talk to it in natural language. And we did not have to move the data anywhere at all.

You were able to bring the power of Snowflake's compute engine and Snowflake's AI features, like agents, Cortex Analyst text-to-SQL, and Snowflake Intelligence, all on top of your lakehouse data, thanks to Snowflake. You can also implement all of your governance and security practices: RBAC on top of your Iceberg tables, masking, tagging your personal and sensitive data, and every other security and governance aspect. That was the demo today. I hope this was helpful for you, and thank you so much for joining.
