Micron Technology, Inc. (MU)
Status Update
Dec 19, 2013
Good morning, ladies and gentlemen. My name is Huey, and I'll be your conference facilitator today. At this time, I'd like to welcome everyone to Micron Technology's Automata Processor Technology conference call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question and answer period.
It is now my pleasure to turn the floor over to your host, Micron Investor Relations Director Ivan Sonelson. Sir, you may begin your conference.
Thanks, Huey, and welcome to the Micron Technology Automata Processor conference call. On the call today is Paul Dlugosch, Director of Automata Processor Technology. This conference call, including audio and slides, is also available on Micron's website at micron.com. Our call will be approximately 60 minutes in length. There will be an audio replay of the call, accessible by dialing 855-859-2056 with a confirmation code of 23743554.
This replay will run through December 19, 2014. A webcast replay will also be available on the company's website through December 2015. We encourage you to monitor the company's website, which includes information on the various financial conferences that we will be attending. Please note the following Safe Harbor statement.
During the course of this meeting, we may make projections or other forward-looking statements regarding future events or the future financial performance of the company and the industry. We wish to caution you that such statements are predictions and that actual events and results may differ materially. We refer you to the documents the company files on a consolidated basis from time to time with the Securities and Exchange Commission, specifically the company's most recent Form 10-K and Form 10-Q. These documents contain and identify important factors that could cause the actual results for the company on a consolidated basis to differ materially from those contained in our projections or forward-looking statements.
These certain factors can be found in the Investor Relations section of Micron's website. Although we believe that the expectations reflected in the forward-looking statements are reasonable, we cannot guarantee future results, levels of activity, performance, or achievements. We are under no duty to update any of the forward-looking statements after the date of this presentation to conform these statements to actual results.
And I'll now turn the call over to Paul.
Okay, thanks, Ivan, and for those of you in the audience, thanks for participating today. My name is Paul Dlugosch and, as Ivan mentioned, I'm the Director of Automata Processor Technology. I'll be presenting the automata processor today.
I hope to share with you not only a little bit about how the device operates, but also some of the motivation that led to the development of the automata processor, as well as some of the target applications and market spaces that we believe are well aligned with automata-based processing and the automata processor itself. So, moving forward, first I'd like to, as I said, touch on the motivations that led to the development of the automata processor. Many of you are familiar with cycles in the memory industry. I'm in particular referring to a cycle here that has been going on as long as I've been associated with the industry, some 25 years now.
And that is the cycle of ever increasing performance demands that are being placed on the physical interface between processors, really of any kind, and memory. Now, this demand for increased performance is constant, and as a memory industry, Micron as well as our peer companies, we work hard to meet that demand. I would say at this point that another technology developed by Micron, the Hybrid Memory Cube, is perhaps the most advanced and compelling example of what we are doing as a company and as an industry to meet the demand for higher performance and to get over the memory wall, as it's often referred to. However, around 2007, really when big data problems started to come into sharper focus, we took a little bit different approach with regard to our response for higher performance.
In fact, we hit the pause button for a moment and really began an effort to understand, in a way that perhaps Micron and maybe the entire memory industry had not, some of the factors that contribute to the memory wall. That led us to go back in history some time and really begin to look at some of the critical decisions that led to the computing architectures we see today. It was in the 1940s, approximately, that the relationship between the memory that existed at that time and the computers, the processors, if you will, that existed at that time was really established, and through decades of evolution that fundamental relationship between memory and the processors it is connected to has stayed pretty constant. To be sure, the memory devices have changed and become far more powerful than they were a number of years ago. What that has led to is some very stable, by today's standards anyway, assumptions, best practices, and notions about how computing systems work.
We decided to essentially challenge all of those concepts and attempt, as much as possible, to start with a clean slate. That brought us to question some of the most fundamental premises that exist in the computing industry. For example, memory: it's hardly ever considered that memory might be anything other than a storage device. We decided not to consider memory just as a storage device, but to question whether or not memory could in fact be used for other purposes. We asked many other questions that, again, by today's measure are taken almost for granted, that nobody considers.
For example, in order to get a computing system to process or analyze information, is software really required? Do you need a software program to implement that processing function? Here today, we think the answer to that is no, it's not necessary. Likewise, what causes the memory wall? Is it always a memory problem, or always a limitation in memory bandwidth?
Well, our understanding today would say that no, not in all cases. Sometimes the memory wall is a manifestation of a different problem that exists somewhere else in the system. Perhaps one of the more profound questions: can regular users, lay people, or scientists who are skilled in other disciplines, like, for example, biology, create machines that are able to compete with commercially developed CPUs and other kinds of processors? It seems a little far fetched to imagine that that could be the case, but we believe it is possible. And we believe that with the automata processor, these kinds of things can be enabled. Now, as an introduction to the automata processor, I'll start first with what it is.
It's a programmable silicon, or semiconductor, device that is capable of, and purpose built for, performing high speed analysis and comprehensive search on streams of information, in particular on unstructured data streams, which often are the types of inputs that we deal with when faced with so called big data problems. Now, at this moment I should also address a point of confusion for many people, and I want to avoid that confusion here today: although the automata processor is constructed on Micron's commodity DRAM process, it is not a storage device. It does not act as a memory device, and it is not meant for short or long term storage of information. The automata processor, although built on a memory process, is purpose built for the analysis of these data streams.
Now, a little bit more into the silicon and the architecture. The automata processor, as the graphic on this page suggests, is comprised of a two dimensional fabric of many thousands of individual processing elements. These processing elements can be individually programmed, but they operate in parallel to perform whatever functions have been defined by the user. This fabric is scalable. That is to say, when we migrate our technology or shrink our process node, if you will, this fabric will scale up in size and capacity in almost the very same way that a semiconductor memory device increases its storage capacity, as is pointed out in the second bullet point here.
And that is that the automata processor has been constructed to fully exploit the very natural and very high level of parallelism that is found in Micron semiconductor devices, and in fact in all semiconductor memory devices. It is often not considered, but any modern day memory device is itself a very highly parallel machine. To put a point on that: when a memory address, a row address, is processed by a standard commodity DRAM device, the memory array actually returns thousands and thousands of bits of information. Unfortunately, because of the limitations of the interface between CPUs and memory, not all of that information can be immediately transferred to the CPU. So it's done piecewise, in smaller chunks, and because of that much of the parallelism that exists in a semiconductor memory device is actually left on the table, so to speak, or not exploited. The automata processor recovers the power of all of that parallelism and brings it to bear on the problems at hand.
Okay. Now, with regards to how the automata processor works, let me say that the concept of automata computing is certainly less familiar than the conventional methods of computing that today's scientists, engineers, and researchers may be familiar with. That said, it is not complex or by any means impossible to understand. In fact, we believe that with some effort, the automata processor will provide a breakthrough in programmer productivity. That is, programmers will be able to exploit very high levels of parallelism in ways that are difficult to achieve today. Now, on this graphic here that we see in front of us, I would like to explain that what looks to be a circuit of sorts is actually an automaton.
That automaton you see there was actually compiled by Micron's software development kit, and the rule itself was taken from a commercial cybersecurity prevention rule set that exists today. Now, if we look at that diagram, you'll see that there is a series of what we refer to as nodes, or, more technically, state transition elements, that have been connected by our software development kit to perform a very specific function. What I would like you to understand, though, is that internal to each one of those state transition elements is actually a column of memory. So when I refer to the processor being based on our commodity DRAM process, this is the first level of association you can make.
Effectively, what we have done with the automata processor is reform the columns of memory bits that exist in a memory array and repurpose them so that each column can act as an independent basic function processor of sorts. Furthermore, we provide the ability for each of these columns of memory to be connected together in unique ways, and this diagram shows one connection that performs a cybersecurity function. And in doing so, we allow users to construct very powerful machines that can perform the analytics that we desire to have performed on these unstructured sets of information. Now, I will say at this point that if that were the limit of what the automata processor could do, it would be interesting, but it certainly wouldn't demonstrate its power.
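To make the state-transition-element idea above concrete, here is a minimal software sketch in Python: each element recognizes a set of input symbols, standing in for a memory column addressed by the input byte, and all active elements examine each symbol in parallel. The class and function names are illustrative only; the real device implements this in massively parallel hardware, not sequential code.

```python
# Minimal software sketch of the state-transition-element (STE) model.
# Each STE recognizes a set of input symbols; all active STEs examine
# each symbol in parallel, and firing elements activate their successors.

class STE:
    def __init__(self, symbols, start=False, report=False):
        self.symbols = set(symbols)  # symbol class this element recognizes
        self.start = start           # eligible on every input symbol
        self.report = report         # signals a match when it fires
        self.successors = []         # elements activated when this fires

def run(stes, stream):
    """Feed a symbol stream through the fabric; return report offsets."""
    starts = {s for s in stes if s.start}
    active, reports = set(starts), []
    for pos, sym in enumerate(stream):
        fired = {s for s in active if sym in s.symbols}
        reports.extend(pos for s in fired if s.report)
        active = {nxt for s in fired for nxt in s.successors} | starts
    return reports

# Three chained elements recognize the literal "cat" anywhere in a stream.
a, b, c = STE("c", start=True), STE("a"), STE("t", report=True)
a.successors, b.successors = [b], [c]
# run([a, b, c], "the cat sat") -> [6]  (report fires at the final 't' of "cat")
```

The compiled cybersecurity rule on the slide is the same construction, just with far more elements and richer connections between them.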
This automaton that you see there is only one automaton, and it was meant to prevent a particular type of cybersecurity attack. The power of the automata processor exists in the fact that many of these automata can be loaded into the chip. For an automaton of this size, we certainly can load hundreds or thousands of similar types of automata into the fabric. And in doing so, we enable a single automata processor to perform many, many different kinds of analysis on the data stream, whatever you want to examine or inspect for in a data stream.
By loading thousands of automata into the fabric, you can do that. And this is the nature of the parallelism that the automata processor brings to bear on these unstructured search problems and similar kinds of big data problems. Okay, a little bit about the physical attributes of the automata processor. First of all, we have fabricated the device.
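As a rough software analogy for loading many rule automata at once, the sketch below scans a stream a single time while every rule is checked at each position, which is the behavior the fabric provides natively. The rules here are invented examples for illustration, not taken from any real rule set.

```python
import re

# Each "rule" stands in for one automaton loaded into the fabric.
# Alternation in a single compiled pattern means the stream is scanned
# in one pass, with every rule examined at each position.
rules = {
    "traversal": r"\.\./",           # directory-traversal attempt
    "xss":       r"<script",         # script-injection marker
    "sqli":      r"union\s+select",  # SQL-injection marker
}
combined = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in rules.items()))

def inspect(stream):
    """Return (rule_name, offset) for every rule hit in a single scan."""
    return [(m.lastgroup, m.start()) for m in combined.finditer(stream)]

# inspect("GET /../etc <script>alert(1)") -> [('traversal', 5), ('xss', 12)]
```

On a CPU this combined pattern is still evaluated sequentially; on the automata processor, every loaded automaton sees each input symbol in the same clock cycle.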
We did so in our 300 millimeter facility in Manassas, Virginia. I will also point out that, as is typical with a new technology or a new architecture, we elected to put this device on what is today considered more of a trailing edge process technology, in this case a 50 nanometer commodity DRAM process. That allows us to focus on the architecture itself and not also be dealing with some of the early life issues that one deals with on a leading edge technology. Now, with regards to the device itself, you see an image of it here.
And I'll point out a couple of things. First of all, it doesn't look exactly like a memory device, although there are some similarities. For example, the fabric itself, you can see, is quite homogeneous. That means the processing capability is distributed evenly across the semiconductor device itself. This device happens to be center bonded; that is the activity you see in the center of the die there.
You will also notice that it really doesn't look like a photo of a processor, either. It is truly a homogeneous, as I mentioned earlier, two dimensional array of processing elements. And that's why you see some regular structures, somewhat like a memory device, but you don't see the large functional blocks of a typical processor. Now, a little bit about some of the specifications. What you won't see is us referring to this device like you might expect us to refer to a memory device, i.e., how many megabits or gigabits of capacity.
As I mentioned, the automata processor is not a storage device, so we don't talk about its ability to store information. Rather, what we talk about is the ability of the automata processor to process information, and one of the ways that we express that is in something we call path decisions. Remember that each one of those processing elements is programmed to make a decision against an input data stream, and we have tens of thousands of processing elements in the fabric.
In aggregate, those processing elements are able to make 6.6 trillion path decisions per second, which is a fairly large number, as it turns out, and is the basis for the performance capabilities of the automata processor; that's where the performance comes from. The next bullet point: 4 watts of max TDP, total power dissipation. I will say that's our estimate at the moment, but I will also say that while it's higher than a commodity DRAM, it's substantially lower than many of the processing devices that are used for doing data analytics today. We do have a cache in this device. We call it a state vector cache.
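As a sanity check on that figure: if each processing element makes one path decision per input symbol, the aggregate rate is simply element count times symbol rate. The element count and clock rate below are assumptions for illustration, not published specifications, but they land in the neighborhood of the quoted 6.6 trillion.

```python
# Back-of-envelope check on the quoted path-decision rate. Every active
# processing element evaluates each input symbol, so the aggregate rate
# is element count x symbol rate. Both figures below are assumed.
elements = 49_152             # processing elements per chip (assumed)
symbol_rate_hz = 133_000_000  # input symbols per second (assumed)
path_decisions_per_sec = elements * symbol_rate_hz
print(f"{path_decisions_per_sec / 1e12:.2f} trillion path decisions/s")
# prints: 6.54 trillion path decisions/s
```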
That state vector cache allows us to switch streams of information very quickly on the device, so that we can analyze different streams at different times without having to go through a long delay as we switch from one stream to another. Second to last point: because the automata processor effectively reduces so much of the data traffic that goes on between the CPU and the memory system, we did not have to invent a new high speed interface to use on the automata processor. In fact, the way that you connect it in a system is exactly the way that you would connect a DDR3 memory device to the system. We do have a couple of extra signals for communicating with the host processor, but by and large, the physical interface on the automata processor is what looks, for all intents and purposes, to be a standard memory interface.
The last point is that the die is less than 150 square millimeters, and as I mentioned, it's produced on our 50 nanometer commodity DRAM process. Okay, a little bit about the applications now, if we switch gears. I already mentioned the automata processor is purpose built for analyzing large sets of unstructured information and data streams.
It is true that these streams exist all around us. People often don't think about it so much, but unstructured streams of information are virtually everywhere; they are ubiquitous, frankly speaking. In fact, my spoken voice today is really a type of unstructured data stream and can be a candidate for certain kinds of processing using the automata processor. Now, as to where we are focusing our attention, and I would like to let everybody know that we are still very early in the development of this technology and the applications that will be aligned with it, we are focusing now on the four areas that you see in front of you there. Network security, clearly: data communications and the analysis of network traffic for purposes of cybersecurity protection and other kinds of classification is a very important capability.
We believe the automata processor is well aligned with that application, and we are doing work in that domain. Bioinformatics, although it is not necessarily what we would consider a real time performance application, is an exceptionally difficult computational problem that is itself based on the analysis of very unstructured streams of information, unstructured streams of genetic information in this case. Video analytics is another example of a stream of information; even static images can present themselves as streams of information, whether they be pixels or some post processing on pixels. Deriving meaning and finding content in image and video streams of information is an important capability, and we believe it can be aligned well with the automata processor. And then lastly, data analytics, which is somewhat of a catchall, but really refers to the kinds of functions that go on in business intelligence: the analysis of social media streams and traffic, for example Twitter feeds, and the ability to detect or predict certain kinds of social behavior and so forth.
Those are examples of the kinds of applications related to data analytics that we believe the automata processor can be well aligned with. Now, if we dig into these a little bit deeper, and in the time that we have today, I apologize, we can't do a very deep dive, I'm going to touch on a couple of these applications. What we see here in the domain of bioinformatics is representative of some of the work we're doing with some of the world's leading researchers in this domain.
One in particular, Professor Srinivas Aluru, he and his research team have been working with Micron for the past couple of years. In that image that you see on the page, although the resolution is perhaps not that high, if you look closely enough, what you will see are many individual automata that were constructed by Micron's software development kit by compiling something referred to as the PROSITE protein sequence database. These automata, as we see them there, can be loaded into the automata processor, and when streams of genetic information are then presented to the automata processor, we can detect specific protein sequences and provide information to the researchers who are interested in these kinds of things. I consider this a fairly basic application of the automata processor, but by way of reference, we can do this end to end now: starting with the PROSITE protein database, compile it through Micron's AP SDK, and then load these kinds of patterns into the automata processor for analysis and processing.
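For a sense of what compiling a protein-pattern database involves, the sketch below converts a small, illustrative subset of PROSITE-style motif syntax into an ordinary regular expression; each such pattern is the kind of thing the SDK would turn into one automaton. The converter and the example motif are simplified illustrations, not the actual AP SDK pipeline.

```python
import re

def prosite_to_regex(motif):
    """Convert a small subset of PROSITE motif syntax to a regex.

    Handles '-' separators, 'x' wildcards, and x(n) / x(n,m) repeats.
    Real PROSITE patterns have more features (residue classes, anchors,
    exclusions); this converter is an illustration, not the AP SDK.
    """
    motif = motif.replace("-", "")
    motif = re.sub(r"x\((\d+),(\d+)\)", r".{\1,\2}", motif)
    motif = re.sub(r"x\((\d+)\)", r".{\1}", motif)
    return motif.replace("x", ".")

# A zinc-finger-like motif: Cys, any two residues, Cys, His.
pattern = re.compile(prosite_to_regex("C-x(2)-C-H"))  # -> C.{2}CH
match = pattern.search("MKACLDCHWQ")                  # matches "CLDCH" at offset 3
```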
Another example of an application, and here again the work that we're doing with researchers, in this case at the University of Missouri with Professor Michela Becchi. She's one of the world's foremost experts in evaluating machines that have been designed to detect patterns in information. Here again, we see another kind of automaton, again compiled by Micron's toolkit for loading into the automata processor. And here again, if you look closely enough, you will be able to make out those individual processing elements. Remember, each one is effectively a memory column, and we connect these memory columns together to perform, in this case, advanced analysis on data communications traffic. Here again, we took an industry standard rule set and compiled it, unmodified, for loading into the automata processor.
At this point, I can also introduce some of the performance estimates that we have from the benchmarks that Professor Becchi constructed. In this case, compared to a conventional microprocessor, you can see that in terms of throughput, that is to say, how fast the processing system can process the data stream, the automata processor is estimated to perform at about five times the performance of a conventional processor, and furthermore at about 5% of the overall power dissipation. And then finally, although we haven't established our pricing strategy for the automata processor yet, I can speak just in basic terms about the cost base of the technology. This goes back to this device being constructed on a commodity DRAM semiconductor process.
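Those two ratios compose into a single efficiency figure: five times the throughput at one twentieth of the power works out to roughly a hundredfold gain in throughput per watt.

```python
# The two benchmark ratios quoted above compose into one efficiency figure.
throughput_ratio = 5.0  # automata processor throughput vs. conventional processor
power_fraction = 0.05   # automata processor power as a fraction of the processor's
perf_per_watt_gain = throughput_ratio / power_fraction
print(perf_per_watt_gain)  # prints: 100.0
```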
We believe that from a cost basis the automata processor will be very competitive. And so we refer to this as the trifecta of value, if you will, and we feel we have a compelling story. Back to the benchmarks, because I want to talk for a moment about the scale of the problems that we are starting to align with the automata processor. In this case, this is also research done by Professor Aluru and his team at Georgia Tech, and they decided to tackle a very, very hard problem. This problem comes from a class of problems that are referred to in computer science as NP hard. These are the grand challenge problems that exist in the computer industry today.
What makes these problems hard is that once they reach a certain size, the performance of the machines that are trying to solve them starts to drop off exponentially. And in fact, that's what we see here in the last three rows of this chart. The problems we refer to as 2510, 2611, and 3616 become increasingly hard, and we see a rather remarkable characteristic here that is predicted by Professor Aluru and the researchers. And that is, while the exponential drop in performance is expected for conventional computing architectures, what was not so expected was the ability of the automata processor to maintain, not perfectly linear performance, but performance degradation that increased only slowly as the complexity of the problem increased. And this is a very important result, because it means that researchers who are currently limited by current computing architectures, with regards to the complexity of the problems that they can solve, can now go further. And so this is quite an exciting result that the researchers at Georgia Tech have been able to discover and model, and we expect that this kind of behavior will apply to several other kinds of problems, not only in bioinformatics but in fact outside of bioinformatics as well.
A little bit more now. I'd like to share with the audience today that the automata processor technology is more than just a semiconductor device. You should understand from this discussion that the device indeed is programmable, or maybe more precisely, it is configurable. It does not run software, sequential software, but when a user engages the technology like Professor Aluru did, the user is actually defining a method to configure the automata processor to perform the functions that the user desires.
And so, just like any device that can be programmed or configured, you have to have a means to do that. So in addition to developing the semiconductor architecture itself, Micron has developed a software development kit, or SDK, that includes a compiler for users to compile programs. It includes what you see here, a workbench tool, so that we can provide an environment for scientists, researchers, and computer engineers to develop these automata. You'll also see in the graphic, in the lower right, although illegible, an example of a programming language developed by Micron. We call it ANML, which stands for Automata Network Markup Language. This programming language allows users to fully exploit the parallelism that's embodied in the semiconductor device itself.
And so, while we do support existing programming languages, most notably one that's referred to as regex, or PCRE, and we can run those programs unmodified, we've also developed a very specialized language that lets users really fully exploit the semiconductor architecture itself. Outside of the software, Micron has also developed, and you can see it here in this image in the upper left, a PCIe development board. In 2014, we will start to deliver this development board to lead customers, partners, and early adopters.
This board will allow them to do hardware development and evaluation of the semiconductor technology itself. For all practical purposes, you can think of this development board almost the same way that you might think of a GPU accelerator card. It will install into a computer system in much the same way. Although the processing it performs is quite different from a GPU card, the method by which you would attach it to a computer system is quite similar. In this case, this card holds up to 48 automata processors, which is quite a lot of processing power.
And this is the board that Georgia Tech modeled as they did their performance benchmarks. Okay, I think that brings us to the summary, and I'm happy to take some questions. But if you'll allow me to say in summary: I hope that I have been able both to share with you some of the motivation, some of the thinking, some of the process that we went through to conceive of this automata processor architecture, and also to share with you the types of applications that we believe are well aligned with it. The automata processor sits squarely in the middle of the challenge in front of our industry that is often referred to as big data, and these are the kinds of problems that will be associated with the automata processor.
I haven't said so earlier, but I remind myself now to make one important distinction here. And that is, I hope that you all appreciate why we call this an automata processor, but I don't want anybody on the call today to be confused by some misunderstanding that would suggest that the automata processor can somehow fully replace a conventional processor. It cannot. The automata processor is not designed to run an operating system, and in all cases where we see an automata processor being used somewhere in the system, there will be a processor of some type that is being used to run an OS, to manage data traffic in the system, and to handle the input and output functions that processors perform today. So please don't leave this presentation today thinking that the automata processor is somehow a replacement for the more traditional processors that we're familiar with today.
Okay, and lastly, before we get to the questions: to our knowledge, we're the only company that has developed this type of semiconductor architecture based on automata computing concepts, and we are looking forward to the opportunity to lead the industry into this domain of automata computing. To that end, we have announced, in partnership with the University of Virginia, the first Center for Automata Computing, based in Charlottesville, which we intend to be a regional center for research for both academic and commercial industries, where they can perform and continue to perform advanced research in automata-based computing. That concludes my remarks.
And Ivan, I'll turn it back over to you.
Yeah. Thank you, Paul. We don't have any questions in the queue right now. So we will go ahead and wrap it up. We very much appreciate your time today.
I'm just going to read a few quick statements here. I would like to thank everyone for participating on the call. If you'll please bear with me, I need to read our safe harbor statement. During the course of this call, we may have made forward-looking statements regarding the company and the industry. These particular forward-looking statements, and all other statements that have been made on this call that are not historical facts, are subject to a number of risks and uncertainties, and actual results may differ materially.
For information on the important factors that may cause actual results to differ materially, please refer to our filings with the SEC, including the company's most recent Form 10-Q and 10-K.
Thank you, presenters, and thank you all. This does conclude the Micron Technology Automata Processor Technology conference call. You may now all disconnect.