Good day, everyone, and welcome to the Oracle Executive Access for Investors Conference call. Today's call is being recorded. At this time, I'd like to turn the call over to Paul Ziots, Director of Investor Relations. Please go ahead.
Thank you, operator. Hello, everybody, and thank you for joining us today for Oracle's Executive Access for Investors, an educational webcast series hosted by Oracle. Today is Friday, April 27, 2012. Joining us today are Thomas Kurian, Executive Vice President; Balaji Yalamanchili, Senior Vice President; and equity research analyst Karl Kirstead from BMO Capital Markets. Today, Thomas and Balaji will discuss Oracle's Exalytics in-memory machine. Please note that Thomas and Balaji will not be discussing any information today that is not already publicly available. At the conclusion of the presentation, we'll turn the webcast over to Karl, who will kick off the Q&A. You may submit a question at any time during the presentation by clicking on the Ask a Question tab above the webcast slides. Please keep in mind that we will not comment on business in the current quarter.
As a reminder, the matters we'll be discussing today may include forward-looking statements, and as such, are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC. Specifically, the most recent reports on forms 10-K and 10-Q, which identify important risk factors that could cause actual results to differ from those contained in forward-looking statements. You're cautioned not to place undue reliance on these forward-looking statements, which reflect our opinions only as of the date of this presentation. Please keep in mind that we're not obligating ourselves to revise, update, or publicly release the results of any revisions of these forward-looking statements in light of new information or future events. Lastly, unauthorized recording of this webcast is not permitted. I'll turn it over to Thomas.
OK, thank you, Paul. Today, I want to talk briefly about the Oracle Exalytics in-memory machine and Oracle's strategy for in-memory analytics. There are five important takeaways I'd like to make sure that customers and investors understand. The first is that in-memory analytics has made rapid advances because memory is faster, cheaper, and has more capacity. Second, Oracle has the most mature and best in-memory technology. We'll talk about what we're doing with in-memory technology today for business analytics and unstructured information processing. We've also got world-class technology for transaction processing. Third, Oracle Exalytics is a complete in-memory analytics system. It covers six scenarios in the way customers use our software: operational reporting, relational query and analysis for data marts and for data warehouses, multi-dimensional OLAP, planning and budgeting, and unstructured information discovery. I'll explain why each of those six scenarios is an important opportunity for us to grow both our software and hardware business.
Oracle Exalytics is being adopted very quickly by customers because it offers a complete solution covering all the different types of analytic problems customers have. It gives amazing performance and, as a result, much better business intelligence. It's significantly lower cost than alternative solutions in the market. Oracle Exalytics also solves these problems significantly better than competitors; in fact, it is the best solution in the market, and I will show you technically why we're better than competitor products at solving these problems. Finally, I will also make sure I address one specific question that comes up very frequently: the reality that in-memory databases do not and will not replace most relational databases. A specific competitor, SAP in particular, has made a number of statements that it will use its product, SAP HANA, to replace Oracle.
That product in particular has many architectural and functional limitations, both in analytics and in transaction processing. While I normally don't speak about competitor products, in this case, I will speak about the competitive positioning and relative functionality between our products at the end of this presentation. Let's look at what in-memory computing is really about. I'm going to start with why in-memory. If you look at the period between 2002 and 2012, the last 10 years, memory capacity has increased 64 times: you can pack a lot more data in memory. The cost of memory has decreased sharply: DRAM cost has gone down by 25 times in that period. And in-memory response time, when you access data from memory versus going to high-density disk, is much faster, about 50,000 times faster than going to disk.
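To put rough numbers on those trends, here is a back-of-the-envelope sketch; the specific latency figures (roughly 100 ns for DRAM, roughly 5 ms for a random disk seek) are illustrative assumptions, not figures quoted on the call:

```python
import math

# Back-of-the-envelope arithmetic behind the in-memory trend figures.
# The latency numbers below are illustrative assumptions.
DRAM_LATENCY_S = 100e-9   # ~100 nanoseconds for a DRAM access
DISK_SEEK_S = 5e-3        # ~5 milliseconds for a random disk seek

speedup = DISK_SEEK_S / DRAM_LATENCY_S
print(f"Memory vs. disk random access: ~{speedup:,.0f}x faster")  # ~50,000x

# Capacity growth: 64x over ten years is one doubling every 20 months.
doublings = math.log2(64)  # 6 doublings
print(f"64x capacity = {doublings:.0f} doublings, "
      f"one every {10 * 12 / doublings:.0f} months")
```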
The combination of these three trends obviously means that if you can put more data into memory, you can get better analysis, reporting, and planning, as well as better business intelligence. Now, in order to run an in-memory database that stores data in memory, it's critical that you understand the important requirements that customers really need. There are two cases: one is analytics, the other is transaction processing. The obvious requirements are the need to store or cache data in memory, and to store columnar data in memory in a specific columnar vector structure. There are other requirements. You need to be able to compress both row-shaped data and column data so you can fit more data within memory. You need to have indexes and a query optimizer.
The reason these two are important is that it's not just a question of returning a query result fast. It is important that you get predictably fast query results, no matter how many people are using the system and what the access pattern on the data is. Even if the pattern changes, you need to give predictable performance; indexes and a query optimizer are important for that. The next requirement is NUMA support. NUMA support really means the following. Today, on a four-socket computer, you can store 1 TB of data in memory. If you want to put more than 1 TB in memory, for example 2 TB, you have to go to an eight-socket computer. In an eight-socket system, some of the memory is two hops away from each processor.
In order to get fast access to that data, you need to support something called NUMA as an architecture in your database. If you want to address larger amounts of data, what we call scale-up in terms of data set, you need to support NUMA. Scale-out is the next thing. Scale-out says that if you have 1 TB of data in memory, it's not just how much data you can pull into memory, but how fast you can actually access it. Scale-out is really about being able to use all the cores on a processor to pull data out of memory and process it. The critical requirement there is a capability called parallel query: being able to run multiple queries in parallel, very efficiently. Finally, for analytics, you need to support in-memory aggregations and result sets so you can store aggregates in memory.
You need to have a broad set of analytic functions in memory. You need to be able to support unstructured data. And you have to support all of these very efficiently, in a high-performance way. For transaction processing, there are four additional things. You need to support writes and updates, and since people are doing updates, they need to be done very quickly and efficiently. If power goes out on the system, for reliability and availability, you need to persist data on disk. When you persist data on disk, you need to ensure transactional integrity, meaning you haven't got incorrect data. And you need to support something called multi-version concurrency, which means that when multiple people are doing updates at the same time, you handle the updates correctly and in order. These are the requirements for in-memory systems.
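A minimal sketch of the scale-out idea described above: partition the data in memory and use every core, with each worker scanning its own partition and a final merge combining the partial results. This is a generic illustration of parallel query, not Oracle's implementation:

```python
# Minimal sketch of parallel query over in-memory partitions: one worker
# per core scans its partition, and the partial aggregates are merged.
# Generic illustration only; not TimesTen or Exalytics code.
from concurrent.futures import ProcessPoolExecutor
import os, random

random.seed(0)
N_PARTITIONS = os.cpu_count() or 4
# Fake fact table: (region_id, revenue) rows, partitioned across "cores".
partitions = [[(random.randrange(10), random.random() * 100)
               for _ in range(10_000)] for _ in range(N_PARTITIONS)]

def scan_partition(rows):
    """Local aggregation: SUM(revenue) GROUP BY region for one partition."""
    acc = {}
    for region, revenue in rows:
        acc[region] = acc.get(region, 0.0) + revenue
    return acc

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=N_PARTITIONS) as pool:
        partials = list(pool.map(scan_partition, partitions))
    # Merge step: combine the per-core partial aggregates.
    totals = {}
    for part in partials:
        for region, revenue in part.items():
            totals[region] = totals.get(region, 0.0) + revenue
    print({r: round(v, 2) for r, v in sorted(totals.items())})
```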
Now, today I'm going to focus more on analytics. There are five real problems: operational reporting, which is reporting on information in real time; analyzing information, what we call relational query and analysis; online analytic processing, which is slicing data across multiple dimensions in real time; planning, which is comparing your information to your financial plan and budget; and information discovery, which is identifying patterns in the data you may not be aware of. Let's use the example of a company that's in manufacturing. Which of my customer orders will be affected by delays in my New York warehouse today? That's an operational reporting question. It's looking at real-time data. It's not aggregating that data; it's looking at the individual customer orders at what we call the transaction-level grain. How many of these orders had late deliveries last year? What's the current pipeline associated with them?
This is typically what we call a roll-up or relational query and analysis problem, served out of a warehouse, because it's aggregating the revenue and looking across multiple time periods. Summarize for me the number of orders, the key customers, and the revenue impact for every sales vice president and each geography. This is aggregating information and then slicing and dicing it by multiple dimensions: slicing it by key customer versus non-key customer, slicing it by sales territory so you can understand the business impact. How does the revenue shortfall affect my budget? What happens if I expedite the top five? This is comparing it to a budget and then doing a scenario model, asking, if I expedite the top five, will I get better results? Finally, have the customers who are affected by the delays complained on social media?
This is bringing in unstructured information from social media and comparing it with the structured information that you have. Now, why did I point this out? If you look at these five types of analytic problems, today people solve them in different ways, and the performance requirement is different in each of them. For operational reporting, people are taking data from an OLTP system and moving it to an operational data store, which is a database against which they run reports. For query and analysis, they take it from multiple operational OLTP systems and put it in a data warehouse or data mart. For multi-dimensional roll-up, they take it from an OLTP system, bring it to an OLAP cube, and then use the OLAP infrastructure to do analysis. Planning and budgeting typically runs on top of an OLAP infrastructure.
Finally, for unstructured information discovery, they're typically using a different tool that handles unstructured data for the analysis. Now, what's important to understand is that the performance requirement in each of these cases is very different. It's not just about raw query processing speed. In operational reporting, the central performance questions are: how fast can you refresh the data in the operational data store, and how fast do you respond to queries? If you look at query and analysis, it is: how fresh is the data in my warehouse, so how fast do you get it there? How fast is the response to queries? And most importantly, how does it scale as I add more users and more data? That third point is typically not relevant in operational reporting, because the data you're looking at is real-time data.
It's not enriched and aggregated, so it's smaller data volumes, and typically a smaller volume of users querying the system. Let's look at a third example, in the area of planning and budgeting. In planning and budgeting: how quickly can I recalculate the budgets? I said we're going to have a revenue shortfall; can I recalculate budgets? The important thing there is that recalculating budgets requires writes or updates, not just reads from the system. How quickly can I model what-if scenarios? That requires you to run calculations, not just queries, and it also requires updates. How fast can I generate management reports? These are typically management reports up and down an organization's hierarchy, which requires hierarchical aggregation.
The important point I want to make sure you understand is in-memory analytics solutions need to address each of these distinct performance requirements, not just raw query performance. On the next slide, I list some of them. I won't go through each of these. It's just to give you a sense that there are very different requirements in each of these domains. A high-performance in-memory analytics solution needs to address every one of these requirements in order to be efficient. For example, in multi-dimensional OLAP, it has to support fast online cube builds. It has to support fast aggregation and aggregate calculation. It needs to support fast scenario modeling. It needs to support large volumes of users and large volumes of data. We call that user and data scalability. As an example, all of those requirements need to be solved by a multi-dimensional system, not just raw query response speed.
Let's now look at Oracle Exalytics. Oracle Exalytics is a product we introduced to address this entire range of problems; in other words, to make the performance, scalability, and interactivity that users get across all five of these types of analytic problems exceptionally great. If you look at the system, it consists of a hardware box, which is a standard Linux x86 system: four sockets, 40 Intel Xeon cores, and 1 TB of DRAM. That's the hardware. On top of that, we have six pieces of software. I'll highlight some of the important ones.
First, for data management on Oracle Exalytics, we provide three engines: TimesTen, our in-memory database, which can be used for relational OLAP and operational reporting scenarios; Essbase, which is a high-performance multi-dimensional OLAP system targeted for the multi-dimensional OLAP and planning and budgeting use case; and then the Endeca in-memory MDEX server for unstructured information discovery being done in memory. That's the data management piece. Then there's the analytic tools layer. There are three of them: Oracle Business Intelligence, Hyperion Planning and Budgeting, and Endeca Information Discovery. The reason I pointed this out was if you compare this with SAP HANA, SAP HANA really competes only with TimesTen in-memory database and Essbase in-memory OLAP. It does not include the analytic tools, which are in red, nor does it provide a solution for unstructured in-memory analysis.
Oracle Exalytics is an engineered system combining the hardware plus the software components to provide unbelievably great performance. Now, let's look quickly at the hardware. The hardware is very simple. As I said, it's four Intel Xeon processors, 40 cores, 1 TB of DRAM. For networking, we have two types. Externally, we talk Ethernet and Fibre Channel, and we also have InfiniBand connectivity. This is particularly useful when we connect Exalytics to Exadata: if you've got a data warehouse running on Exadata, connecting Exalytics to it gives you unbelievable performance. For the operating system, it runs standard Linux. Now, to give you a flavor of some of the optimizations we've done, I'm going to touch on three pieces of the software stack: Oracle BI, TimesTen, and Essbase. Oracle BI is our analytic tool.
To compare it, think of the alternative products out there: MicroStrategy, Cognos, or Business Objects. Oracle BI is a very powerful query and analysis tool, and we've made a number of optimizations and architectural changes for in-memory. Three of them are important. The first is something called Summary Advisor. Summary Advisor is a technology in Oracle BI that says: as data is passing through the system, I'm going to scan it, I will tell you what's hot, and therefore I can give you guidance on what to pull into memory. Number two is very fast parallel query execution. Oracle Business Intelligence has the ability to use all the cores in the system and can parallelize query processing: sending data to the database, bringing it back, and then running multi-pass calculations across all the cores in your system.
That gives you great scale-out capability. Third, it has its own in-memory cache, where it can store data, metadata, and frequently accessed information. The combination tells you what data you need to pull into memory, automatically pulls it in for you, and scales out very efficiently to use all the cores in the system. The second piece of software is the Oracle TimesTen In-Memory Database. TimesTen is the market-leading in-memory database. We've got over 4,000 customers using it, companies like Vodafone, Ericsson, Cisco Systems, et cetera, in very, very high-performance environments. What we did is take this very mature, proven in-memory database technology and add analytic capabilities to it. There are two important collections of these. The first is what we call the in-memory enhancements. It can cache data sets in memory. It can cache and store aggregates, that is, calculated, aggregated data.
It can also aggregate and store result sets in memory. It can store columnar data and compress it, so you can fit a lot of data into TimesTen. There are two kinds: you can store raw data as columns, compress it, and get great compression ratios; and for things that are calculated all the time, you can cache those calculated results in memory in TimesTen. The second thing we did is add analytic functions. This lets TimesTen be used not just as an OLTP database but as an analytic database. There are a number of them: we allow you to group things, we added capability to handle analytic functions, there's cube modeling, et cetera, all done in TimesTen. That's part two of the software stack. Part three of the software stack is Oracle Essbase, which also has in-memory optimizations.
Essbase has two important collections of these. The first is the ability to load data into Essbase in real time; we call that an online cube feed, or trickle feed, of data. You can merge cubes online, so that as you're doing calculations, if you want to aggregate data, you can merge cubes online and summarize within the larger cube. We also added a number of parallel capabilities to Essbase: parallel cube rebuild, parallel calculations, parallel aggregation. What that allows you to do is use as many as 128 cores, if you've got a larger system, and run all the cores to do calculations in parallel, thereby obviously speeding up aggregation. There are similar enhancements in every other piece of our stack.
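To make the columnar-compression point above concrete, here is a generic sketch of dictionary encoding, along with the row-format delta store that columnar systems typically need for updates (a point that comes up again later in the competitive discussion). This is an illustration of the general technique, not TimesTen's actual storage format:

```python
# Sketch of two ideas: (1) dictionary-encode a column so far more data
# fits in memory, and (2) stage updates in a row-format delta, because
# rewriting a compressed column in place is expensive.
# Generic illustration; not TimesTen's actual on-disk or in-memory format.

def dictionary_encode(values):
    """Replace repeated values with small integer codes."""
    codes, dictionary, reverse = [], [], {}
    for v in values:
        if v not in reverse:
            reverse[v] = len(dictionary)
            dictionary.append(v)
        codes.append(reverse[v])
    return dictionary, codes

statuses = ["SHIPPED", "DELAYED", "SHIPPED", "SHIPPED", "DELAYED"] * 200_000
dictionary, codes = dictionary_encode(statuses)
# 1M strings collapse to a 2-entry dictionary plus 1M small integers.
print(len(dictionary), "distinct values for", len(codes), "rows")

# Updates land in an uncompressed row-format delta; reads merge it in.
delta = {}                      # row_id -> new value
delta[3] = "CANCELLED"          # an update does NOT touch the column codes

def read_row(row_id):
    return delta.get(row_id, dictionary[codes[row_id]])

print(read_row(3), read_row(4))  # CANCELLED DELAYED
```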
In the interest of time, I'm not going to go through, for example, what we've done with in-memory in Oracle Endeca Information Discovery. Now, let's look at how customers are deploying the solution. For operational reporting, you've got an OLTP database. People can use GoldenGate, a replication technology from Oracle, to do very high-speed transaction replication into the Oracle TimesTen In-Memory Database, put the hot data in TimesTen, and then report against it with Oracle Business Intelligence. We address all the important requirements: very fast data refresh, very fast query performance, the ability to build aggregates in memory, and excellent user scalability. As you add users to the system, it scales very well. For query and analysis, there are two deployment models: one for small data marts, and the other for data warehouses and larger data marts.
Example number one: there are customers with very small data marts, and when I say small data marts, I mean 500 GB to 1 TB. They can take an OLTP database, load the raw data into TimesTen, build their aggregates and analytic functions directly in TimesTen, and use TimesTen itself as an in-memory data mart. Here, we've got a world-class, very mature, proven in-memory database with analytic functions added to support a full in-memory data mart. For larger data warehouses and data marts, we typically see customers deploying Oracle Exadata or a non-Oracle system; Exalytics can work with a non-Oracle system as well. Imagine a customer deploys Oracle Exadata. Between Exadata, in which the data warehouse sits, and TimesTen, we've got a very fast InfiniBand connection.
In TimesTen, in this deployment picture, you would store aggregates and frequently processed query results, but you wouldn't pull all the data into TimesTen. The reason is obvious: you want great query performance, but all of the data in these systems will not fit within 1 TB of memory. This second deployment picture is what we see customers doing for data marts over a certain size and for enterprise data warehouses. The next scenario is multi-dimensional OLAP. Here, we allow you to load data from a variety of sources (data warehouses, OLTP systems, planning systems) into Oracle Essbase, where the multi-dimensional processing is done in memory. Oracle Business Intelligence can be used as the front end to build dashboards against the Essbase cube. If you look at this deployment picture, we solve all the important requirements that I said you need to address.
We build cubes very fast in memory, and you can rebuild the cubes online without taking down the system. We allow you to do aggregation in memory. You can do scenario modeling because we have very high-performance updates, or writes, within Essbase. And you can scale out, meaning add users, because of the way we do parallel query in Essbase. In every one of these scenarios, we address all the requirements that are important to customers from a performance point of view. Planning and budgeting looks similar. You can load data from ERP applications through an ETL data load mechanism, using either Oracle's ETL tools or any third-party ETL, into Essbase, which sits as the engine underneath our Hyperion Planning and Budgeting products.
Hyperion Planning and Budgeting runs on the same Oracle Exalytics box and gives you great performance for planning and scenario modeling. For example, we do very fast plan updates because we can do very fast block writes in memory. We allow you to do very broad scenario modeling and forecasting because we do not layer aggregates within the system; I'll explain what that means in just a minute. We give a person doing planning a very interactive experience because of the way we do aggregation. Again, we solve all the critical requirements a customer would have for planning and budgeting performance. Finally, for unstructured information discovery, our Oracle Endeca Information Discovery product gives you the ability to run the Endeca MDEX server and the Endeca Information Discovery product on Oracle Exalytics.
How do you then use this from a high-performance point of view? There are four key requirements. You can rapidly ingest unstructured data and add structure to it; we have optimizations to do that in memory very efficiently. You can then query the system using a search or guided navigation pattern; we have parallel infrastructure within Oracle Exalytics and Oracle Endeca Information Discovery to allow you to do what you might call parallel search on the system. If information changes, for example in the social media example I was giving, we can also rapidly rebuild the indexes, keeping it high performance. Finally, we offer a broad suite of packaged applications: packaged analytic applications that give you prepackaged key performance indicators and a data warehouse, and packaged Hyperion Planning and Budgeting applications for performance management. All of them run and are certified on Oracle Exalytics.
The reason we can do that is that the Oracle Exalytics product does not introduce any non-standard APIs. It's standard SQL, standard MDX, standard JDBC, standard ODBC. Not only do all of Oracle's packaged applications run on it; the hundreds of thousands of packaged applications people have built, for example, on Oracle Essbase will run transparently on Oracle Exalytics without any modification. There are also customers who use their own reporting tools. They use the Oracle Exalytics stack as a foundation for analytic processing, even if the dashboards they build may be in a different tool. It gives you unmatched flexibility, as well as packaged functionality, in adopting this toolset. Lastly, let me talk briefly about customers and then competitive dynamics.
On customers, we're seeing obviously very quick adoption of Oracle Exalytics, both because of the demand in the customer base for analytics and because of the amazing speed the solution delivers and the cost equation. Example number one is Nykredit, a large mortgage provider in Denmark. They have 1,700 power analytics users and 50 TB of data; clearly, all that data wouldn't fit within the in-memory system. They wanted super fast ad hoc query performance. In the scenarios I was describing, this looks like the enterprise data warehouse scenario. They put the enterprise data warehouse on Oracle Exadata, ran their analytic layer within Oracle Exalytics, and their queries ran between 35 and 70 times faster. Example number two is a packaged analytics applications example: Key Energy. This is a large oil company with 860 rigs around the world.
They have 1,500 power users, and they're using packaged analytic applications across the organization. What they found with Oracle Exalytics was that it's five times faster to develop new reports. The reason is you don't have to write a perfectly tuned query: even if you're sloppy in your SQL and you don't have an expert writing the report, Oracle Exalytics is so fast that your queries will run quickly and not swamp the system. That makes development faster for them, they get better performance, and they also get much easier customization. Finally, for planning and budgeting: one of the largest consumer packaged goods companies in the world wanted to build a daily plan and budget, meaning that as conditions in the economy changed, they wanted to iterate their corporate plan and budget daily. It used to take them more than 24 hours to run that daily budget.
Clearly, since there are only 24 hours in a day, they were not happy with that. They moved their budgeting system over to Oracle Exalytics, and today they run a global consolidated budget across all products, all geographies, and all business units in less than four hours. You're seeing real, concrete, bottom-line value from these technologies. Now, let's talk about the competition. The first thing I want to make sure people understand is that some competitors are positioning in-memory as a panacea for databases. In-memory, for those of us who have done database development for many years, is not a panacea for all databases. Let me explain with a few examples some of the issues people need to address that are not easily solvable with in-memory today. First, transaction processing. Obviously, in transaction processing, people do updates.
If power goes down, you need that data to be available, so you have to persist it on disk for high availability and durability. The second you put data on disk, it introduces two important things. First, if the computer through which you were doing updates burned down, you need that data available to another computer; that's essentially called clustering. Not only do you need to do updates on disk, the updates need to be fast, and they need to be technically correct so you don't get incorrect data in your system. And you have to support clustering in order to get two different computers to talk to the same data set. An in-memory-only database obviously cannot solve these transaction processing problems, period. Number two, data warehousing. I pointed out that today the competition talks about their solution, SAP HANA.
It stores a maximum of 500 GB of data: from their own documentation, they can only use half of the total memory in the system, which is half a terabyte, or 500 GB. Even if you assume they get an eight-times compression ratio, it means the total size of the database is a maximum of 4 TB. And if you want to go beyond 1 TB of memory, you have to support NUMA. The point I'm trying to get across is that people talk about the total size of the data set you can fit in memory without understanding that it's not just the total size of the data set, but also how fast you can access the data in it. Suppose you had a 64 TB system, but you could only access the data very, very slowly. Clearly, that's not going to work.
The central thing is that today, if you want to get to anything over 1 TB of memory, it requires an eight-socket system, and for an eight-socket system, you have to support NUMA. The reason I point that out is that outside of Oracle and one or two other companies, no one has implemented NUMA effectively in a database; it is a very complex technical problem. Additionally, remember that there's a ton of data in any data warehouse that's there purely for archival and is very infrequently accessed. In any data warehouse, there's an 80/20 rule: 20% of the data is typically all the data that's queried. Why would you pull all that data into memory and buy an expensive system to do that?
For data marts, even if you say, I'm going to take a data set that fits in memory, you can't scale out unless you've got parallel query. You can give a single user a response time that's amazing, but if you add a large number of users, it doesn't scale well, which will be a problem. Secondly, you can get a great single-user response time in a benchmark, but if you have multiple users and you don't have a query optimizer, you won't get predictable performance, meaning one day it's really fast and the next day it may be very, very slow. Finally, for planning, budgeting, and multi-dimensional OLAP: even though they look like analytic solutions, they actually do updates, and you have to handle updates very efficiently in the system.
If, for example, you do in-memory compression of columnar data and then you want to do updates on it, the competitor's solution is very slow at updates. The reason it is slow is that their columnar compression requires you to first stage data in a row format, then load it into a column format. When you do updates, they have to go back from the column format into the row format to do the update. Clearly, there are a lot of limitations in what in-memory technology can do, and most importantly, in what the competitors' in-memory technology, which they're talking about a lot competitively with Oracle, can do today. For example, if you look at the list of requirements I said you need to solve for transaction processing and data warehousing, Oracle TimesTen solves all of them. The competitor does not.
The competitor is a long way from solving all of them. It took us years, with a world-class database development organization, to build these things; you can't build and mature them overnight. Let's look at the six scenarios we talked about, comparing Oracle Exalytics to the competition, just to illustrate this, and then wrap up. For operational reporting, as a simple example, I said you typically don't aggregate data, because you're looking at the grain of the individual customer. In my example, I said: find me every customer that's having an issue. You have to look at every customer record, which means the data is not aggregated; it's stored in something called third normal form. Business Objects doesn't support third normal form efficiently. For data marts, they do not support parallel query. They do not support NUMA.
They do not have an in-memory query optimizer, nor do they have in-memory indexes. So how do you get predictable, high-speed performance and scale out or scale up? The same issues apply to data warehouses. In multi-dimensional OLAP, they do not support in-memory aggregation, and they do not support efficient writes, because of the example I gave you of how they load data into columns. The same issue applies to planning and budgeting. When they talk about putting SAP HANA underneath the business warehouse, there are two places they're going to do aggregation: one in the warehouse and one underneath it in SAP HANA. With multi-level aggregates, they're definitely not going to perform efficiently. And they have no solution for unstructured information processing the way we do with Oracle Endeca Information Discovery.
Finally, for packaged analytic applications: because all packaged analytic applications work against standard SQL and standard MDX as query languages, our solution works with all existing packaged applications. Frankly, if you decide you want to use Cognos as your dashboarding tool, you can run that against Oracle Exalytics. SAP HANA does not allow you to do that. Finally, cost. I wanted to give you two cost comparisons, since SAP has made a statement that Oracle Exalytics requires you to buy extra hardware. Let me be frank: if you want to run an in-memory system, you're not going to run it on an off-the-shelf x86 server with 96 GB of memory. You're going to buy a system with 1 TB of memory, and most customers don't have one of those sitting around.
The central question is: what would it cost to buy the corresponding hardware plus software to run a 512 GB data mart or an enterprise data warehouse? Based on the list prices that Oracle, SAP, and IBM publish (in SAP's case, running on IBM hardware), HANA is five times more expensive. This uses their own published, non-discounted list prices in both cases. If you go to an enterprise data warehouse, for Oracle, as we said, our solution would be an Exalytics plus an Exadata, which costs $2.5 million. In HANA's case, to get 20 TB compressed, or 40 TB of total memory, you would have to buy 40 servers from IBM to pull all that data into memory, and it's about 50 times more expensive. This was my point.
If you have an enterprise warehouse, or even a data mart, where only a small amount of data is actually hot and needs to be accessed in memory, and the rest is archived in that system, why would you buy an in-memory system to store all that archived data? You would probably not, because it just doesn't make sense financially. I hope this clarifies some of the key points we wanted to make. In-memory analytics has certainly made rapid advances. We have very broad and mature in-memory technology for transaction processing, business analytics, and unstructured information processing. Oracle Exalytics solves all six of the scenarios I mentioned: operational reporting, data marts, data warehouses, multi-dimensional OLAP, planning and budgeting, and unstructured information discovery. We've got real customer references with real performance and cost benefits from using Exalytics.
We believe firmly that Oracle Exalytics solves these problems better than any competitor solution in the market. Finally, we're not positioning Exalytics or in-memory technology as a panacea for all databases; we do not believe it is one. We believe that in-memory is an additional technology that helps people get more value from their databases. Databases continue to grow in size, and in-memory technology will provide a fast access mechanism to the data in the database, but it is not meant as a replacement for the database. With that, I will take questions. Thank you.
Thank you, Thomas. Before we turn it over to Karl to start off the Q&A, I'll remind everybody that you can submit questions by clicking on Ask a Question. Karl, please go ahead.
Thank you so much, Paul and Thomas and Balaji. Thank you so much for doing this with us. I'd like to start with a question about the Exalytics uptake since its launch. Oracle mentioned on its last earnings call, as you know, that Exalytics is experiencing a faster initial ramp than either Exadata or Exalogic. Thomas, I'd like to ask you why this is. Is it because Oracle's ExaSuite sales model has matured? Is it that analytics demand is simply running very hot right now? Thank you.
I think it's because of three reasons. First, our ExaSuite sales model has matured, and customers understand the value of these Exa products and our engineered systems. Number two, analytics demand is running strong. Number three, because Oracle Exalytics does not require you to modify your applications or your existing analytics deployment, but can be adopted without any changes to those, it allows us to play in the broad installed base of customers we have with Hyperion Planning and Budgeting, Oracle Business Intelligence, and certainly our packaged applications customers: E-Business Suite, PeopleSoft, et cetera. It's those three factors.
OK, that makes sense. If I could then turn to a few questions about the in-memory technology and SAP HANA in particular. As you discussed, the current DRAM capacity of Oracle Exalytics is 1 TB. It seems to me that the use cases for in-memory appliances, and ultimately demand for in-memory OLTP databases, would expand materially if this physical main-memory limitation were lifted to the point that a customer could economically put, let's say, a 5+ TB data set in memory. I know, Thomas, you mentioned that it's not just about main memory capacity. I'd love to hear your view on how this four-socket memory capacity of one terabyte might change over the next few years.
Yeah, that's a good question. You know, just to give you a data point, the system that handles the largest amount of data in memory today in the market is actually Oracle Exadata. The Exadata X2-8 actually has 4 TB of total memory capacity, and when you run compressed, you can handle significantly more than that. It is a NUMA-based system. Our general view is that DRAM capacity will increase. If you look at Intel's own roadmap, they talk about going from 1 TB to 2 TB in their next generation of processors, which is expected next year. The important thing is that it's not just a question of how much data you can put in memory, but how fast the query speed is on that data. Let me give you an example.
The more data you put in memory without solving the NUMA problem, the slower your queries get. If you put 1 TB of data in memory on a four-socket system, you can access it at full local-memory speed. If you go to 2 TB in memory on an eight-socket system and you haven't solved the NUMA problem, then because some of the memory is two hops away from the processor, rather than one hop away as it is on the four-socket system, what you find is that even though you've got more memory there, the performance is actually much slower. It's the combination of these that needs to be solved.
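As a minimal sketch of what NUMA awareness means in practice, the snippet below discovers which CPUs and how much memory belong to each socket by reading standard Linux sysfs paths; a NUMA-aware database would use this kind of topology information to keep each worker's data on memory local to its socket. Linux-only, and purely illustrative:

```python
# Discover NUMA topology from Linux sysfs: which CPUs and how much
# memory belong to each node (socket). Prints nothing on non-Linux.
import glob, os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    name = os.path.basename(node)
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    total_kb = "unknown"
    meminfo = os.path.join(node, "meminfo")
    if os.path.exists(meminfo):
        with open(meminfo) as f:
            for line in f:
                if "MemTotal" in line:
                    total_kb = line.split()[-2] + " kB"
    print(f"{name}: cpus {cpus}, local memory {total_kb}")
```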
OK, that makes sense. Good. If I could then ask a question about, call it, the 100% in-memory database market. You obviously have in the past described this as potentially a niche market, and I know Andy recently on a call described it as more of a Ferrari. I'm curious: does Oracle plan to further leverage your TimesTen technology into a full in-memory database that could ultimately manage a large ERP application? Is that in the works?
We do have the Oracle TimesTen In-Memory Database being used inside our packaged applications, both for analytics, using Oracle Exalytics as I just described, and secondly for OLTP. The specific case we're solving is the scenario where you need very high response time and low latency in querying an application. A number of our applications, our communications applications and PeopleSoft, for example, embed the Oracle TimesTen In-Memory Database to cache data for OLTP purposes and then run certain calculations against it. However, for the vast majority of packaged applications, unless it's a strictly query-only application, you need persistence. As I've said many times, for transaction processing, because you need durable persistence, we don't believe any in-memory-only database will be able to solve that problem by itself. I hope that's clear.
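A minimal sketch of why "in-memory" OLTP still touches disk: every update is appended and fsync'd to a log before it is acknowledged, so the in-memory table can be rebuilt after a power failure. This is a generic write-ahead-logging illustration, not TimesTen's actual persistence design:

```python
# Generic write-ahead-logging sketch: durable on disk before visible
# in memory, with recovery by log replay after a crash.
import json, os

LOG_PATH = "txn.log"
table = {}  # the in-memory "database"

def apply_update(key, value):
    record = json.dumps({"k": key, "v": value})
    with open(LOG_PATH, "a") as log:
        log.write(record + "\n")
        log.flush()
        os.fsync(log.fileno())   # durable before we acknowledge
    table[key] = value           # only now is the update visible

def recover():
    """Rebuild the in-memory table by replaying the log after a crash."""
    table.clear()
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as log:
            for line in log:
                rec = json.loads(line)
                table[rec["k"]] = rec["v"]

apply_update("order-1001", "DELAYED")
recover()
print(table)  # {'order-1001': 'DELAYED'}
```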
Thank you. Perhaps I could touch a little bit more on the data warehouse market. As you know, SAP recently launched BW on HANA, and they're suggesting that 1,000 or so customers could migrate off of their Oracle or DB2 databases by the end of the year. I'm wondering if you might address that risk for the investors listening in. Let me know whether I'm correct in interpreting that the combination of Exadata and Exalytics is, in fact, an alternative for customers that could be evaluating BW on HANA.
Yes. Specifically, our alternative for customers is to put the business warehouse on Exadata and then use Oracle Exalytics as a query processing accelerator in front of it. We have customers who have done that, and we definitely do not believe that migration will happen. There are three reasons we believe ours is a superior solution. Number one, many customers already run the business warehouse on Oracle, and moving it to Exadata is a much smaller project than recoding all your applications that run on the business warehouse to run on SAP HANA. Number two, cost. I gave you the example of a 25 TB warehouse: to store that in Exadata plus Oracle Exalytics is much cheaper than trying to put that entire data warehouse into SAP HANA in memory.
It literally doesn't make any sense to try to pull all of the data that's in the data warehouse purely for archival and infrequent query into an extremely expensive 40-server solution. Third, even if you were crazy enough to spend the money to buy 40 servers to load that 25 TB in memory, your queries would be very inefficient, because your queries would have to be partitioned explicitly; the data wouldn't all fit in one machine's memory. Anytime you wanted to look at data on another computer, you'd have to rewrite the query to say, this data is not on this particular computer, it's actually on the 40th computer in which I loaded data. That's completely re-architecting all of your applications. In our view, while it's nice marketing, it's a complete practical unreality.
OK. If I could just push a little bit more on the price comparison, or the economics, that you just highlighted: I was intrigued by the 5x price gap between Exalytics and HANA. Maybe, Thomas, could you be clear on what the key components are that drive that price gap? Is it on the software side and the ancillary apps you need to buy, or is it hardware-driven? Maybe you could help us understand the source of that gap.
OK. On the data mart pricing, Oracle's list price for the Oracle Exalytics hardware is $135,000, and for the database software, the TimesTen piece, it is $690,000. Combine the two and the list price is $825,000. If you look at the comparable numbers for SAP HANA plus IBM: the hardware is $362,000, which is about three times more expensive, and the software is $3.7 million, which is about five times more expensive. It is that combination. Our hardware is cheaper and our software is cheaper.
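For reference, the arithmetic behind that comparison, using only the list prices quoted above:

```python
# Data-mart price comparison, using the list prices quoted in the call.
exalytics_hw, exalytics_sw = 135_000, 690_000
hana_hw, hana_sw = 362_000, 3_700_000

oracle_total = exalytics_hw + exalytics_sw   # $825,000
hana_total = hana_hw + hana_sw               # $4,062,000

print(f"Hardware gap: {hana_hw / exalytics_hw:.1f}x")    # ~2.7x, "about three times"
print(f"Software gap: {hana_sw / exalytics_sw:.1f}x")    # ~5.4x, "about five times"
print(f"Total gap:    {hana_total / oracle_total:.1f}x") # ~4.9x, "five times"
```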
OK, terrific. Maybe moving off of the specific Exalytics-HANA comparison, I had a couple of questions that perhaps touch on your broader BI and analytics strategy. The first is that my impression is that customer interest in the presentation layer, visualization products like Qlik and TIBCO Spotfire and Tableau seems to be very high these days. In your presentation, you covered briefly Oracle Endeca Information Discovery. I just wanted to ask for your overall view on this corner of the BI analytics market, how aggressively Oracle is going after it, and how information discovery competes with those vendors.
It's a great question. This particular market segment is really about three important things. One, people who are using these tools are working against a smaller set of data, and they don't want a complex analytic model on top of the data set; it's essentially what's called a model-less development environment. Number two, they want very visual, interactive dashboarding as they work and drill through data, partly because many times they're looking for the question itself. It's not like a traditional BI deployment, where they know the question and they're just looking for the answer. In this case, many times they don't even know the question; they're just sifting through information to find the pattern.
Number three, they also want to be able to combine structured and unstructured data. As a very specific example, take customer service. I've got data coming back from customers saying, I'm happy about the following products, and in the text feedback form they write about the things they're unhappy with in a particular product. That unstructured information, which is captured in text, needs to be combined with the structured data. We've got customers like Toyota and a number of others who use Oracle Endeca Information Discovery for that purpose. Our goal is to go after that segment with Oracle Endeca Information Discovery, which addresses these three important requirements: it's a model-less environment, it's got very interactive visualization, and it supports both structured and unstructured data in a single solution.
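As a rough sketch of the structured-plus-unstructured idea described here, the snippet below joins structured order records with complaint terms extracted from free-text feedback. The data, term list, and matching logic are illustrative assumptions; this is not how Endeca's MDEX engine works internally:

```python
# Combine structured and unstructured data: roll complaint signals from
# free-text feedback up to the structured product dimension.
import re
from collections import Counter

orders = [  # structured side: (order_id, product, revenue)
    (1, "widget-pro", 1200.0),
    (2, "widget-pro", 800.0),
    (3, "gadget-max", 450.0),
]
feedback = {  # unstructured side: free-text comments keyed by order
    1: "Shipping was late and the packaging was damaged.",
    2: "Great product, arrived on time.",
    3: "Late delivery again, very frustrating.",
}
COMPLAINT_TERMS = {"late", "damaged", "broken", "frustrating"}

def complaint_terms(text):
    """Extract known complaint words from a free-text comment."""
    return set(re.findall(r"[a-z]+", text.lower())) & COMPLAINT_TERMS

by_product = Counter()
for order_id, product, _revenue in orders:
    if complaint_terms(feedback.get(order_id, "")):
        by_product[product] += 1
print(by_product)  # Counter({'widget-pro': 1, 'gadget-max': 1})
```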
OK, that's very helpful. Actually, your last comment there about unstructured data prompts my last question. It sounds like the combination of Oracle Endeca Information Discovery with Oracle Exalytics can be used for unstructured data analysis. Just to be clear, given that Oracle did a call a couple of weeks back on its Oracle Big Data Appliance, how does Oracle Exalytics work with the Oracle Big Data Appliance? Maybe that would be my last question.
Yeah, great question. You know, Oracle Exalytics, and particularly the Oracle Endeca layer within Exalytics, what we call the Oracle Endeca Information Discovery portion, can actually be used to load data and run analysis where the data is being preprocessed within Hadoop on the Oracle Big Data Appliance. For example, say I've got social media feeds coming in. I take the data from the social media feeds and persist it in the NoSQL database that's part of the Big Data Appliance. I then run what we call semantic processing to take that data, shred it, and convert it into structured data in the Hadoop solution on the Big Data Appliance. The results are loaded into Oracle Endeca, and Oracle Endeca can federate queries back to Hadoop to get that data. There are actually customers who are very interested in doing that.
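A minimal sketch of the federation pattern just described: serve queries from results already loaded locally, and fall back to the big-data store for anything not loaded. The "hadoop_store" here is a stand-in dict; a real deployment would issue a remote query instead:

```python
# Federation sketch: fast path from a local in-memory cache, slow path
# out to the (simulated) Hadoop-side result store.
hadoop_store = {  # stand-in for results computed in Hadoop
    ("twitter", "2012-04"): {"mentions": 48_210, "negative": 0.31},
    ("twitter", "2012-03"): {"mentions": 39_775, "negative": 0.27},
}
local_cache = {}  # results already loaded into the discovery layer

def query(source, month):
    key = (source, month)
    if key in local_cache:            # fast path: already in memory
        return local_cache[key]
    result = hadoop_store.get(key)    # slow path: federate out
    if result is not None:
        local_cache[key] = result     # keep it local for next time
    return result

print(query("twitter", "2012-04"))
print(query("twitter", "2012-04"))  # second hit served from the local cache
```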
Terrific. OK, with that, thank you, Thomas, so much. I'll turn it back to you, Paul.
Thanks, Karl. I will take just a few from online, Thomas.
Sure.
One would be a competitive one again regarding SAP, in this case, ERP on SAP HANA. Would they have any advantage, in your view, from building SAP application integrations with SAP HANA versus Oracle?
That's a question you should ask SAP. The general issue they have is the following: SAP HANA does not support standard SQL. If they are going to modify their applications to run particularly well on SAP HANA, they're going to have to change their applications in such a way that they won't work well on Oracle, Sybase, DB2, Microsoft SQL Server, et cetera. Clearly, one issue they've got is that Sybase looks very much like Oracle, Microsoft SQL Server, and DB2, so in making modifications to run on SAP HANA, they're obviously going to disadvantage their own product. It is very unlikely that that scenario plays out. SAP HANA has the many limitations I talked about. SAP is a fine company, and I normally do not talk about the competition in these forums, but since they have gone out of their way to raise this issue, I wanted to address it once and for all.
We do not believe that any real SAP ERP customer for their transaction processing system is going to run SAP HANA anytime soon.
Great, thank you. A couple then to follow up on Oracle Exalytics. First one here is about the heuristic in-memory cache. Are the benefits of that immediately obvious? Or does it take time and experience for those to actually play out?
No, the benefits are very obvious. When you deploy Oracle Exalytics, typically, if a customer is running a data warehouse, for example, they have something called a query optimizer in the database, and they are very aware of which queries are pulling which data from the database. It's simply a tuning exercise that all DBAs are familiar with. When you go into our Oracle Business Intelligence tool, what people do is typically run a sample set of queries through the BI layer. As part of that sampling, they turn on Summary Advisor, which is the piece that monitors these query patterns. With the combination of the DBAs' reports, which say this is the hot data in the system, and the Summary Advisor, which says here is the shape of the queries, it's a very trivial job to then pull the aggregates into the Oracle TimesTen In-Memory Database.
Most people do this, and just to give you a sense of this, it's typically done in a week.
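As a sketch of the kind of heuristic workflow just described: watch a sample workload, count which aggregates each query would touch, and recommend the hottest ones for in-memory caching. The workload data and ranking logic are illustrative assumptions; the real Summary Advisor's analysis is far more sophisticated:

```python
# Summary Advisor-style heuristic sketch: rank aggregates by how often
# a sample workload would touch them, and recommend the top N to cache.
from collections import Counter

sample_workload = [  # (query_id, aggregate it would need)
    ("q1", "revenue_by_region_month"),
    ("q2", "revenue_by_region_month"),
    ("q3", "orders_by_customer"),
    ("q4", "revenue_by_region_month"),
    ("q5", "inventory_by_warehouse"),
    ("q6", "orders_by_customer"),
]

def recommend(workload, budget=2):
    """Return the top-N hottest aggregates to pull into memory."""
    heat = Counter(agg for _qid, agg in workload)
    return [agg for agg, _count in heat.most_common(budget)]

print(recommend(sample_workload))
# ['revenue_by_region_month', 'orders_by_customer']
```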
I see. So there's no delay; the benefits become available right away.
Correct. For most of these deployments, the customers I pointed out, we shipped the Oracle Exalytics machine on February 15, and all the examples I gave you are live customers. It's not a very long time from the time you plug in the machine to when you tune the system and go live.
Fantastic.
If I may add?
Please, Balaji, go ahead.
To the question about whether the benefits are immediate or come over time: as Thomas said, there is already an existing body of tuning and optimizations that the Summary Advisor takes full advantage of from the get-go. The beauty of it is that it is beneficial from day one. On top of that, it keeps learning, seeing the patterns, and adding to its existing knowledge of how to do these optimizations, so the benefit compounds from that point on. It's not like you're waiting for the benefit at a later time. You're getting those benefits from day one, and then you're getting additional benefits as time goes on.
That's great. Thank you for clarifying that. I'll wrap up with one last question. It's more general. I think it perhaps would be a common question. With the performance characteristics that you've laid out and the advantages of Oracle Exalytics, would customers actually be buying multiple Oracle Exalytics machines? Or is one pretty much taking care of their needs? What do you see in that regard?
What is the business opportunity that Oracle sees with these products? I think there are two parts: what we see on the hardware side, and what we see on the software side. On the hardware side, our general model with Oracle Exalytics is that a customer picks one system to do an initial trial with Exalytics. For example, they may put it in front of their enterprise data warehouse. Then they decide: I like it so much, and it's so fast, that I'm going to put a second one in front of my planning system, another one in front of my budgeting system, and another one in front of every data mart I have.
So there are typically multiple Exalytics machines purchased within a company, one in front of each of the different kinds of systems I talked about, the five different kinds of systems. On the software side, our business opportunity, as we've said very clearly, is about three very simple things. We believe that Exalytics will allow us to get new customers buying our Oracle Business Intelligence products; that's new customers, number one. Number two, it is more projects within existing customers. For example, most customers today have multiple analytic tools. Once they use Exalytics in one part of the organization, they say Oracle's business intelligence is so fast, and has such great interactive visualization capability, that they'll deploy more projects using Oracle's BI tools. That's number two, more projects within a customer. Third, we also see more seats within a particular project.
What we mean there is that traditionally, people have been nervous about analytics performance. When they've got 1,000 users on a system, they may say only 400 or 800 of you are going to be allowed to run queries against it, because of performance concerns. Now, with Exalytics being so fast, they open it up for self-service so more people can run queries, which means more named users and, as a result, more revenue. It's really a combination of more hardware revenue and more software revenue that we believe Exalytics brings to our analytics business.
Great. A fantastic wrap-up. Thank you very much. Thank you, Thomas. Thank you, Balaji. To wrap up, we'd like to thank everyone who joined us on today's call. Also, a special thanks to Karl for leading off the Q&A and asking the questions that are most often asked by investors. If you have any other follow-up questions, please contact IR here at Oracle. This concludes today's webcast.