Arm Holdings plc (ARM)

Status Update

Sep 25, 2014

Hello, ladies and gentlemen. My name is Colin Alexander. I work for the segment marketing group within ARM, with a specific focus on carrier infrastructure. This is the second event targeted at giving an overview of the networking market, and this presentation will specifically highlight how ARM and partners are developing products to target changes in the carrier infrastructure market. Networking, and specifically carrier infrastructure, is a specific focus area for ARM, where we wish to target many of the new silicon, software, and system technologies that we're developing. I think you'll have seen many announcements over the last few months where partners have been targeting products in this particular area: traditional networking silicon partners like Broadcom, Cavium, Marvell, etcetera, but also big players like AMD and others who are initially targeting the data center market, but with specific extensions and requirements for these carrier-grade networks as well. So this is the second presentation in a series of events looking at networking and infrastructure. In this series of presentations, hopefully we can explain what networking infrastructure is, convey the relevance of this particular market opportunity, try to illustrate why we are aligned with the changes that are happening in the market, and back this up with some evidence that we are heading in the right direction to meet industry needs. In the first presentation, which I gave back in June of this year, we covered topics associated with an introduction to networking. We looked at the different types of equipment required to make up the network, at the types of tasks the processors have to undertake within the network, and at the motivations that have led the industry to adopt ARM for new designs, and why we expected the pace of adoption to ramp as more and more new designs were required to meet tomorrow's architecture.
In this presentation, we will take a more in-depth look at some of the macro trends and challenges facing the industry in supporting the expected data deluge, and at how we hope to support higher and higher data rates through the network, along with the latency targets that are required. We will look at some of the inflection points and technologies that are driving the industry, and at ARM's broad strategy to accommodate some of these new designs. So in this slide, we look at some of the challenges that the network operators are facing, and at how these equate to the technical challenges being faced by some of our silicon partners and by the OEMs supplying equipment into this marketplace. Today, the network operators face a huge increase in traffic in their networks, and this has really been driven by two things. The first is the ramp in the number of cellular subscribers using new cellular technologies like 4G and, in the future, 5G, and specifically the requirements this places on the control signaling plane and the data plane through the network. With this huge increase in data rates, revenue per user is not increasing at the same rate, and the network operators really face two options to remain profitable. First, they can shuffle more bits for fewer dollars and look for greater efficiencies in their network. Efficiency really means cost, which really means power, so we need to provide the industry with new silicon technologies that deliver this efficiency. Now, the vast majority of power is spent just running the network, so the efficiencies that the operators can find here really don't close the gap in revenue that they need to fill.
The second option is to look at new business models: rather than just shuffling bits, they need to look at how they can deploy new services into an intelligent network, how they can introduce new techniques like network functions virtualization to gain new feature velocity, and how they can deploy new services without deploying new equipment, if that's possible. This slide summarizes these challenges, leading to deploying these new cloud technologies, in a pictorial way. In the center of the cloud network, there are a number of companies using the cloud to deploy their services: companies like Google, Amazon, LinkedIn, Facebook, etcetera. Around the edge of the network, we can see all the different media connecting into these cloud services: the mobile infrastructure, enterprise networks, and broadband access networks, all the way through to the emerging need for connectivity of the potentially billions of connected devices through the Internet of Things. Really, what ARM is driving towards is providing higher data rates while managing the end-to-end latency through these networks, looking at how we can support higher and higher connection densities, and at the same time ensuring that these networks are very malleable, in that new software and application technology can be deployed on these boxes. So we've looked at some of the business and technical challenges, but how does the market segment, and what is the business opportunity for each of these different subsegments? Looking at this foil, we can see that we've broken the market down into three subcategories. On the left, we see wireless access. This really covers the different cellular technologies (2G, 3G, 4G) in terms of base station technologies, carrier Wi-Fi, and the different antenna systems that are required for each of these base station technologies.
It also covers wireless relays and microwave backhaul radios. On the right-hand side of the diagram, we have the core cloud and enterprise networking subsegment. In the future this is really driven by a lot of the data center requirements and by server technology, but in here we can also include things like Ethernet switches, network-attached storage, security appliances, some of the enterprise requirements in terms of routers and data center network switches, etcetera, and wireless LAN access points for the enterprise. In the middle, there are the core wireless and wireline connectivity elements, in terms of the backhaul, and really we see a lot of the technology being developed for the core network migrating down towards the wireline and wireless connectivity space. In the center portion, there's a large need to maintain end-to-end latency through the network, so there's not just a need for high-performance programmable processors, but also a need to support offload accelerators so that we can meet the throughput requirements of multiple 10, 40, or 100 gigabit Ethernet data rates. Now, there's a question mark over what happens longer term in these segments if we look at an example like C-RAN, Cloud RAN, where multiple elements may collapse into a single platform to support cellular connectivity. You could see some of the baseband processing elements, the evolved packet core from the core of the network, and some of the storage and content delivery elements all being deployed in a single box. In terms of what we know today, we've represented the equipment TAM for each subsegment, derived from a number of sources, and adding this up, we're somewhere over $100 billion worth of equipment TAM.
That really translates down to the target we've previously disclosed of approximately $20 billion of silicon TAM for this overall segment. So what challenges do ARM and partners face in delivering optimal system-level technology for this market? I think we've recognized that we can't just focus on silicon and interconnect technology with our silicon partners. For this market, we need to take a broader system-level view of the requirements and then address the particular needs of each of these subsegments. We then need to map these into specific focus areas for ARM and the ecosystem moving forward. If you look at this foil, at the different market trends that are challenging us, we see exponential data growth in terms of subscriber traffic, and also the need to be able to support connectivity of these billions of different devices in the infrastructure. We need to look at the different operator CapEx and OpEx pressures and how we can help alleviate these. We need to look at the advent of new technologies that allow the network to be more scalable, like SDN and NFV. We need to look at some of the new standards that are being developed and will be introduced over the next four to five years, like, for example, 4G, LTE Advanced, and 5G, and at the transition between each of these different technologies. We need to look at some of the financial and industry consolidation that's occurring, and the different market players that are entering the market, and we need to look at data privacy, which is really an end-to-end problem over the network. So we need to replicate these onto the different subsegments that we covered before, and then we need to look at each of the focus areas and map them on top of the different subsegments.
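The relationship between the two TAM figures quoted here can be sanity-checked with simple arithmetic; the attach rate below is an inferred illustration, not a figure stated in the talk:

```python
# Rough check of the TAM figures from the talk (illustrative only).
equipment_tam = 100e9   # over $100 billion of equipment TAM across the subsegments
silicon_tam = 20e9      # approximately $20 billion silicon TAM, as previously disclosed

# Implied silicon content as a share of equipment value; this ratio is
# derived here for illustration and was not quoted in the presentation.
attach_rate = silicon_tam / equipment_tam
print(f"Implied silicon attach rate: {attach_rate:.0%}")  # → Implied silicon attach rate: 20%
```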
So, you know, we're known for the processor, interconnect, and IP technology that we've developed and licensed to our silicon partners, but I think we also need to look at infrastructure subsystems: how do we design our processor and interconnect IP, and how do we validate and quantify its performance? We need to come up with a plan with our ecosystem on how to deliver on some of these new NFV and SDN technologies, how we map some of the new ARM-driven standards, like OpenDataPlane, on top of that, and how we optimize some of the OS and virtualization capability on top of our devices. Overall, we also need to look at the security aspects that we need to apply end to end over these different technologies. So I'd like to take a look, in a little bit more detail, at some of these focus areas. First off, the processor and interconnect IP, and some of the system-on-chip-level challenges that we face. What does this mean in reality, in terms of meeting the network infrastructure inflection points? We see a need for many new heterogeneous many-core, many-type platforms: to address the needs of these different access networks, whether that be cellular, passive optical networking, DSL, etcetera; for many of the new core networking requirements, acceleration offload, and some of the data centers, so that we can meet the end-to-end latency requirements; for some of the new C-RAN implementations, where some of the baseband modules need to be combined with storage array capability for content delivery and some of the new core control functions; and for many of the new cloud services, which require network acceleration capability as well. And we need to bear in mind the need to support IoT capabilities, for the billions of new connected devices that potentially only send 64 bytes or so, maybe once an hour.
So we need to design these devices to be extremely efficient, but also to meet the throughput requirements in the network. In delivering these technologies, we need to pull together, with our ecosystem partners, silicon, cores, interconnect, memory, and storage capabilities, to meet the throughput, power, and latency needs of these different equipment types. As we pull together our core intellectual property, we need to look at the requirements for the slow path, the control plane, and also the fast path, and really we see a move in the industry towards general-purpose C-programmable devices, away from the more hard-coded network processors and ASICs that might have been used in the past. So what do these platforms have to support? If you look at the foil that we're showing here, we've segmented the requirements into four. The top part of this diagram represents the classic networking data plane packet processing requirements, where typically we're handling maybe hundreds of instructions per packet before we have to stall a packet, offload it to a bulk accelerator, or wait for the result from an adjacent packet to be received. Typically, this needs a number of threads; it's very I/O intensive, and we need to be able to handle higher packet rates with more complex processing. It's hard to parallelize, there's a big issue with legacy and how we have to meet code that's already available within either silicon partners or the OEMs, and the overall problem is becoming more and more complex. If you look at the other data plane application shown on this foil, MAC scheduling, this is extremely latency sensitive. We need real-time control between the different cores on our systems. Again, multiple threads can be an advantage here, and it's very compute intensive in terms of the processing that's required.
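To make the "hundreds of instructions per packet" budget concrete, a back-of-envelope calculation shows how few cycles a single core gets per packet at line rate; the 2 GHz clock and minimum-size frames are assumptions for illustration, not figures from the talk:

```python
# Back-of-envelope cycle budget per packet at Ethernet line rate.
# Clock speed and frame size below are illustrative assumptions.
PREAMBLE_SFD_IFG = 20          # bytes of per-frame overhead on the wire (preamble, SFD, inter-frame gap)
MIN_FRAME = 64                 # minimum Ethernet frame size in bytes
CLOCK_HZ = 2.0e9               # assumed clock for a single processor core

def cycles_per_packet(line_rate_bps, frame_bytes=MIN_FRAME, clock_hz=CLOCK_HZ):
    """Cycles one core can spend on each packet while keeping up with line rate."""
    wire_bits = (frame_bytes + PREAMBLE_SFD_IFG) * 8
    packets_per_sec = line_rate_bps / wire_bits
    return clock_hz / packets_per_sec

for gbps in (10, 40, 100):
    print(f"{gbps:>3} GbE: {cycles_per_packet(gbps * 1e9):6.1f} cycles/packet")
```

At 10 GbE a single assumed 2 GHz core has only around 134 cycles per minimum-size packet, which is why many threads, many cores, and bulk accelerators are needed once processing takes hundreds of instructions.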
And as the new cellular technologies are introduced, more and more complexity is required for the MAC scheduling. In terms of what that means, MAC scheduling is basically scheduling from the base station out towards the handset. Potentially you could have thousands of different handsets or IoT devices connected, each with multiple different sessions: video sessions, voice sessions, data sessions, each with different latency requirements. All of this has to be accommodated, potentially over multiple different cores that have to work in parallel. Then we've got the control plane requirements for these designs. The control plane is typically handling tens of thousands of instructions per packet. It's highly single-threaded; basically, these applications can be handled with highly single-threaded architectures. There's a large legacy code base, and quite complex processing is required. And again, as the new cellular technologies are introduced, we expect the control plane requirements to ramp quite significantly. Then there's the other element, shown in the bottom right of this diagram, which is basically all the specialized processing. In terms of base stations, for example, the layer 1 air interface processing is pretty key. It has a diverse set of requirements, typically handled in banks of accelerators or DSPs, and into this bucket also fall things like security offload processing and bulk security processing. All of these different requirements need to be supported over an interconnect, and this interconnect needs to be able to handle all the different latency requirements of these different blocks, as well as the different requirements in terms of accessing internal cache and external memory.
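The per-session latency handling described above can be sketched as a toy earliest-deadline-first scheduler; the session mix and deadline values are hypothetical illustrations, not a real MAC algorithm from the talk:

```python
import heapq

# Toy MAC scheduler sketch: at each transmission opportunity, serve the
# session whose latency deadline expires soonest (earliest-deadline-first).
# Session names and deadline values are hypothetical illustrations.
def schedule(sessions, slots):
    """sessions: list of (deadline_ms, name); returns names in service order."""
    heap = list(sessions)
    heapq.heapify(heap)                 # orders by deadline (first tuple element)
    order = []
    for _ in range(min(slots, len(heap))):
        _, name = heapq.heappop(heap)   # most urgent session first
        order.append(name)
    return order

sessions = [(50, "video"), (10, "voice"), (300, "data"), (100, "IoT")]
print(schedule(sessions, 3))  # → ['voice', 'video', 'IoT']
```

A real base station would run many such decisions per millisecond across thousands of sessions, which is what drives the real-time, multi-core requirement mentioned above.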
I'd like to take a further look at the SoC building blocks and apply this to the second focus area, which is basically the subsystem requirements. I've got three different subsystems that I'd like to illustrate. The first is a cellular infrastructure application: looking at a base station implementation, and at what components need to be integrated at the system-on-chip level to meet the requirements of these different platforms. In the previous presentation, the one I gave in June, I introduced some of the block-level concepts that ARM had been working on with some of our partners. I introduced the CCN range of cache coherent interconnect that we'd been working on, and some of the cores that we had introduced. Shown here is the Cortex-A57, which is the highest performance of our 64-bit ARMv8 implementations. I talked a little bit about the memory controller functions that we had on offer, in this case the DMC-520 memory controllers, and about the ability of the cache coherent network interconnect to support different cores connected over a common interconnect, basically allowing these different cores to share the same memory interfaces, whether that be L1, L2, or L3 internal cache, or external memory. In the case of the CCN-508, we have the ability to manage cache coherency down to the level 3 cache. Each one of the processor cores has its own L1 cache; at a cluster level (in this diagram, we show quad clusters) four cores share an L2 cache; and then all 16 Cortex-A57 cores share an L3. Now, what we're showing in this diagram is that our system partners can utilize other cores, in this case the CEVA-XC4500 DSP vector processing cores, in the same architecture.
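The cache hierarchy just described can be sketched as a small data structure, which makes the sharing levels explicit; this is an illustration of the topology as described, not an ARM data format:

```python
# Sketch of the CCN-508 example topology described above: 16 Cortex-A57
# cores arranged in quad clusters, a private L1 per core, an L2 shared by
# each cluster of four, and a single L3 shared by all cores over the
# interconnect. Names like "cluster0"/"core0" are hypothetical labels.
CORES, CLUSTER_SIZE = 16, 4

topology = {
    f"cluster{c}": {
        "l2": "shared by the 4 cores in this cluster",
        "cores": [f"core{c * CLUSTER_SIZE + i}" for i in range(CLUSTER_SIZE)],
    }
    for c in range(CORES // CLUSTER_SIZE)
}
L3 = "shared by all 16 cores via the cache coherent interconnect"

print(sorted(topology))  # → ['cluster0', 'cluster1', 'cluster2', 'cluster3']
```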
And in this case, the CCN-508 interconnect is maintaining cache coherency over the ARM cores and also the CEVA cores. What we're also showing in this diagram, particularly for small to medium-sized small cell, pico, or micro base stations, is that the antenna processing, or digital front end processing, can either be accommodated on chip or can have off-chip interfaces. On the right of this diagram, I'm showing some of the requirements for the evolution in the base station, particularly for baseband processing. Really, what we can see is that the level of complexity is increasing dramatically. We would expect in the region of a 30x increase in terms of baseband control and signaling. We'd expect in the order of maybe a 10x increase in packet processing; this is the data that's channeled back into the core network. For technologies like LTE Advanced and 5G, we'd see in the region of a 10x tightening in latency, so it becomes a very key factor in scheduling user traffic out over the antenna. Certainly, we see the need for 64-bit processing, in terms of increased memory range, and we'd also see potentially the need for even more control processors on these sorts of devices, to handle things like local content caching and application processing on the base station itself. And finally, on this foil, the subsystem approach: we need to be able to quantify results on this type of architecture, in terms of the control plane and data plane elements and how they interact with the layer 1 subsystem and any accelerator blocks that we would have on these on-chip technologies. The second example of a subsystem, and the SoC building blocks required for it, is C-RAN, Cloud RAN. I've picked this example to illustrate how multiple different elements can be extended on a similar system-on-chip architecture.
This is similar to the previous base station example. In this example, we see an extra 16 Cortex-A57s being integrated, so we go from 16 up to 32 cores, and this really shows consolidated compute for server processing, and potentially for something like MAC scheduling as well. Potentially, these cores could be used on this device for a range of different functions, from control processing for the baseband capability to content delivery capabilities where data may be cached locally to this particular node. Once again, I've chosen the CEVA-XC4500. We've just written a white paper with CEVA which looks at some of these applications, so I've reused the diagram. Typically, these could be CEVA cores, or custom vector processing engines used in this type of architecture. But again, the CCN interconnect is showing that coherency can be managed at multiple different cache levels, either L1, L2, or L3 in this case. We show an extra two channels of memory being integrated, and in this case we're showing some smaller Cortex-A series cores being used, maybe for some packet processing or some user interface scheduling processing. These could be Cortex-A53 cores, or some other sort of smaller cores, with some appropriate accelerators also being used on the CCN fabric as well. So this is just a representation of a different subsystem: higher processing capability, and more functions being integrated on an SoC. But from a subsystem viewpoint, we're looking at this type of architecture to validate the performance of these types of system-on-chip devices. The third example of an SoC architecture would be, for example, infrastructure analytics or media processing, again scaling the architecture using multiple clusters of Cortex-A cores for the server processing.
Potentially, this could be a data center device using interconnect with vector engines. Again, I've chosen CEVA in this case, but these could be proprietary vector engines, all utilizing the CCN interconnect to manage the L3 coherency. What we envisage in this case is that the vector engines might be doing some codec task, some voice or video coding, or processing some analytics tasks, and the Cortex-A cores would be handling some compute server functions, or again some scheduling capabilities, and typically these would be using some virtualization capabilities as well. So again, the idea with a subsystem approach would be to validate some of the functionality and quantify some of the performance. I'd like to use this foil to illustrate how our partners have to be able to target the appropriate functions in the appropriate system-on-chip devices, and then allow the OEM equipment manufacturers to package these for appropriate geographies. I've chosen the radio access network as an example. I think this is a good example because it illustrates how ARM and our partners need to consider the processing, software, and system requirements for many of these new designs. C-RAN in some geographies makes a lot of business sense: in the Far East, where there are massive urban conurbations of 10 or 20 million subscribers, and the fiber is owned by the same operator that manages the wireless network, C-RAN makes a lot of sense. But in many other geographies, where the operators don't have such ready access to fiber technology, it may not make sense to centralize a lot of the RAN capability in a single box. This really shows why we need to be able to move functions between different locations, and adapt to the best option for each network case.
Using off-the-shelf hardware and software may help this to become a reality, using virtualized capabilities and some of the NFV and SDN techniques. But the solution must be cost effective when compared with the specialized equipment that's used today, and really there's no single solution that can be considered optimal for all these different scenarios. If you look at the slide and the picture we're showing here, there are really two main options for RAN deployments moving forward. First, the eNodeB, the base station functionality: should that be centralized or distributed? There's the C-RAN-like case, where a lot of the baseband capability is centralized, and the question of what new antenna schemes should be used. Then, looking at the core network, the evolved packet core functionality can again be located centrally or could be distributed. These architectures use a lot of carrier cloud and server-type capabilities with some acceleration offload. So the main question here is: where are the functions located? Are they centralized, or are they distributed? And the main options are Cloud RAN, where eNodeB functions are centralized in a massive box and IT data center technology is reused; heterogeneous-network-based architectures, where small cell base stations are overlaid on top of the macro network; and the base station hotel concept, which really covers everything in between. One of the key arguments and decisions that has to be made in the network is where the MAC-layer processing is located, and where the layer 1 processing resides. In many cases, the layer 1 processing actually has to be located much closer to the antenna, the remote radio head.
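The centralize-versus-distribute tradeoff described here can be sketched as a toy decision rule; the function name and the latency threshold are hypothetical illustrations, not values or methods from the talk:

```python
# Toy placement sketch for the RAN tradeoff described above: centralize
# baseband (C-RAN style) only when the operator owns the fronthaul fiber
# and the latency budget allows it; otherwise distribute the functions.
# The 250 microsecond budget is a hypothetical illustration.
def place_baseband(operator_owns_fiber, fronthaul_latency_us, budget_us=250):
    if operator_owns_fiber and fronthaul_latency_us <= budget_us:
        return "centralized (C-RAN)"
    return "distributed (small cells / base station hotel)"

# Dense Far East metro with operator-owned fiber vs. a geography without it.
print(place_baseband(True, 100))   # → centralized (C-RAN)
print(place_baseband(False, 100))  # → distributed (small cells / base station hotel)
```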
This diagram illustrates how functions may be moved between different boxes across the network depending on the network requirement, whether that be in the Far East, in Europe, in North America, or wherever. And again, this has a large bearing on the type of system technologies, software technologies, and silicon technologies that are developed for each one of these different applications. Another example that shows how our partners are having to adapt to target the appropriate functions into the appropriate system-on-chip devices is shown here in terms of the network infrastructure evolution. Historically, the data center, core networking equipment, and access network equipment have had a degree of intelligence in terms of processing and storage capabilities for content delivery, acceleration, and networking, but the backhaul equipment has been fixed function: effectively, the traffic is routed across that network without much intelligence being applied to the packets as they transition. Tomorrow's cloud infrastructure sees a convergence of access and core network technology, where intelligence is distributed in the network, so processing, storage, and acceleration may actually be accommodated in the gateways and switches in the network. In this case, processing is moved closer to the client devices, and there's an argument that this actually reduces network latency: the choices that are made on routing a packet can be made much closer to the client, and it allows scalable deployment of technologies across the network. It allows new technologies, like software defined networking and NFV, to be introduced, and features to be used more flexibly from within the OEM hardware, so the operators have direct access to configure their network once it's in place.
And from an ARM perspective, we're enabling these solutions with much more scalable ARMv8 Cortex-based cores, with interconnect, so we're provisioning much more cost-effective, efficient designs for this intelligent, flexible cloud. We've already looked at the silicon IP and subsystem requirements for the networking market; now let's switch and look at the software requirements, and the need for ARM to work with the ecosystem partners to try to bring some standardization to the requirements of networking. The foil that we're showing here really illustrates the three different market sector requirements. If you look at the access market, backhaul, and core, we really see two different needs emerging. On the left-hand side of this diagram, we see a typical server application. Now, the server platforms are reasonably easy to enumerate. They've got really standardized CPU capabilities, and the number of peripheral functions on a data center server application is reasonably limited. We need to configure at boot time things like the generic interrupt controller and any timers on the board, and it can really be enumerated into a standardized API. So ARM has been working with a range of our partners, including the main OS vendors, to specify something called SBSA, the Server Base System Architecture. This sits below the OS or hypervisor, with the server application running on top. Now, if you look at the requirements for the networking market, you see that the range of functions on a networking blade is typically a lot wider than in server applications.
So it's much more difficult to come up with a standardized interface to configure functions like, for example, the scheduler, any buffer management into memory, any encryption or crypto-type features, any networking I/O capability, or any other particular offload, for that matter. What we see in this case is that our silicon partners are typically coming up with a series of proprietary drivers. What ARM and the ecosystem have done is come up with an additional API that sits on top of that, allowing partners to configure this networking blade, with our silicon partners layering a proprietary driver underneath that API to configure their potentially proprietary hardware beneath it. Looking at the software approach in a little more detail, this foil represents an example of how ARM and partners are working to enable the market with a series of proofs of concept for network functions virtualization capabilities. ARM has worked with several silicon partners, like Avago and AMD, and with several software partners, like Tieto and Harrison, utilizing core software capability that has been developed under the Linaro umbrella, to develop a number of PoCs that have been submitted, with network operator sponsorship, to the ETSI NFV community. The intention of each of these PoCs is that certain network functions are configured to run on ARM-based silicon from our partners; the performance and functional requirements are then tested, documented, and published on the ETSI NFV portal for all interested parties to use. Examples that have been submitted to date include service chaining and virtual evolved packet core applications. Each of these requires a mix of compute and offload capabilities, and all have been configured to run virtual functions under Linux, with the silicon configured appropriately.
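The layering described here, a common API on top with each partner's proprietary driver underneath, can be sketched as follows; the class and method names are hypothetical illustrations in the spirit of OpenDataPlane, not the real ODP API:

```python
from abc import ABC, abstractmethod

# Sketch of the software layering described above: applications code against
# a portable data plane API, while each silicon partner supplies a proprietary
# driver underneath it for their own hardware. All names here are hypothetical.
class PacketIODriver(ABC):
    @abstractmethod
    def rx_burst(self, max_pkts):
        """Receive up to max_pkts packets from the hardware."""

class VendorADriver(PacketIODriver):
    """Stand-in for one partner's proprietary offload hardware."""
    def rx_burst(self, max_pkts):
        return [f"pktA{i}" for i in range(max_pkts)]

class DataPlaneAPI:
    """The portable layer an application codes against, regardless of vendor."""
    def __init__(self, driver: PacketIODriver):
        self.driver = driver
    def receive(self, n=4):
        return self.driver.rx_burst(n)

api = DataPlaneAPI(VendorADriver())
print(api.receive(2))  # → ['pktA0', 'pktA1']
```

Swapping in a different vendor's driver leaves the application code unchanged, which is the portability argument being made for the common API.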
So hopefully now, after taking the time to listen to this presentation, you've got more of an understanding of the complexities and often conflicting technical and business challenges that are facing the industry today and playing out in the market, including the introduction of new radio access technologies and how these might be deployed in the radio access network. We've looked at how different delivery mechanisms may be supported for content delivery, whether these be hosted in the cloud or hosted much closer to the edge of the network. And we've looked at how these new services can be deployed in the core of the network by decoupling hardware and services, whether they be hosted on overlay networks or on dedicated boxes owned by the operator, and whether they use dedicated implementations (network processing techniques, ASICs, etcetera) or more malleable, NFV-ready equipment. Looking at this foil, what we've really done is summarize ARM's approach to the market and look at the specific requirements for the access market; the backhaul, wireless and wireline; and the cloud, enterprise networking, and core markets. You can see that for the access market, there's a need to target very specific functionality, and the specific cost, power, and form-factor-constrained requirements of these dedicated pieces of equipment, such as base stations, passive optical networking nodes, etcetera. These platforms use mainly heterogeneous architectures: mixes of different cores of different sizes to target the control plane and data plane requirements, and potentially any additional accelerator IP that's needed. So there's a need for ARM and partners to support not only the processor and interconnect IP, but to supplement the market with a series of platforms that can be used for performance evaluation, proving efficiency, etcetera. Of the other two elements, the cloud, enterprise, and core networking segment is very horizontally aligned.
So basically, this is looking at data center and server-based technology that's feeding into this right-hand box. It's got a high reliance on these new software technologies, SDN and NFV, and potentially can run on top of commodity compute platforms, making use of things like the SBSA that we talked of earlier. The element in the middle really aligns many of the requirements of the access market, meeting the end-to-end latency requirements across the network, together with the commodity compute capability of the right-hand box. So in conclusion, why is the industry moving towards adoption of ARM-based technology? And why does the industry care whether it uses technology from Arm rather than competitive offerings? I've listed here five main reasons why I think this is the case. Going through these one by one: Arm and partners are targeting heterogeneous architectures, mixing small and large cores to meet the specific needs of access, backhaul, and core, and really targeting the cost, power, and latency requirements of each. We're offering a single processor and instruction set architecture that can stretch across multiple different network elements. And leading into the third point, this is available through a number of our different partners, leading to increased choice: there are multiple providers of ARM-based chips for this particular market. Arm are enabling the market with a very flexible and rich software environment, working with our ecosystem partners and with many of the open source initiatives; we've talked about a few in this presentation, like NFV, SDN, Linaro, etcetera. And we're investing a lot in the ARM Linux open source community.
So we believe that the choice of ARM-based designs offers networking OEMs lower total cost of ownership, reuse of platforms, and extended capability with software programmability over a range of their different equipment types. I think these five points, considered together with a strong roadmap that intends to continue the development of products for networking equipment, all combine to present ARM-based technology as a very attractive proposition for OEMs and network operators alike. I'd like to take this opportunity to thank you for listening to this presentation. Hopefully you found it useful, and I'd just like to conclude there. Thank you. You're now in the main room. You may begin. Okay. Thank you, operator. Thank you, everybody, for joining. Good morning, good afternoon, and good evening, everyone. This is Ian Thornton. I'm the head of Investor Relations at Arm, and welcome to this call, where we will be discussing Arm's opportunity in enterprise networking. This is actually part two of the discussion; part one is available on our Investor Relations website at www.arm.com/ir. Hopefully you will also have had time to view the presentation by Colin Alexander, which gives some more background to this discussion. If you have not, that is also available on our website. On this call today we're joined by Charlene Marini, who is the VP of Segment Marketing at Arm and is responsible for our strategy in enterprise networking. And we also have Pierre Ferragu, who is the Global Telecom Equipment and European Semiconductor Analyst for Bernstein. I'll now hand over to Pierre to lead the discussion. Over to you, Pierre. Thank you, Ian, and thank you for offering me the opportunity to have this time with you. And thank you for the presentation Colin made available on your website; I thought it was extremely useful.
And I'd like to kick off this Q&A. First of all, could you take us through, in terms of timing, how the networking initiatives developed at Arm? When did you get started? What were the first kinds of products you've been targeting? When did you sign your first licensing agreements? Thanks, Pierre. Happy to have the discussion here today. So to answer your question, of course, everything within this space has been an evolution, and the chips that are shipping today are Cortex-A9 and Cortex-A15. HiSilicon, which is part of Huawei, made one of the first announcements that they were going to use Cortex-A15 in base station equipment, in August of 2011 to be specific. And then shortly thereafter there were announcements from other partners, like LSI and TI, that they would use Cortex-A15 in networking equipment as well. So 2011 was really the beginning in terms of this part of the market in networking. Looking forward, of course, the main opportunity is going to be for the ARMv8 processors, which enable 64-bit with the ARM instruction set. We started working on that in about 2007, we started licensing that architecture in 2009, and we saw the first announcements of partners showing their intent to use ARMv8 in networking in 2013; Broadcom was the first to make that announcement. Since then, we've seen continued momentum, with other partners like HiSilicon, Freescale, Altera, Xilinx, and others announcing their intent to use ARMv8 processors for networking. You can also look at some of the partners addressing the server space, and as we go through this discussion, I think we'll see that a lot of the attributes in the server space are being applied to networking as well.
So partners like Applied Micro, AMD, and Cavium, while their chips are focused on server, can also be used for some networking applications given the types of IP and market engagements they have. Okay. And so if we look at a snapshot of the situation today, what is already shipping, and what does the ramp-up in shipments look like for the next 12 months? First of all, what can you tell us about specific products that are shipping in volume already today, and what do you think ships in volume 12 months from now? That would be my first question, and then I have a quick follow-up. Certainly. So we see Cortex-A9 and Cortex-A15 shipping in volume in things like intelligent switches today, Wi-Fi access points, and certainly in things like cable or DSL modems; that's on the access side of the wireline network. In terms of wireless access, with base stations, that's just beginning to ramp, and we'll see more and more. Of course, with the ARMv8 platforms coming out and shipping in 2015, that pipeline will continue to deepen. Okay, that's very clear. Now a follow-up question on where we stand today: could you give us a very rough idea of how much of the market you have licensed already? I started looking at it and came to the conclusion that virtually everybody doing networking equipment has already at least taken a license from you guys. Is that a fair assessment? That is a fair assessment. Over the past two years, I think we've seen virtually every networking silicon maker, and certainly all of the major networking silicon providers, taking and announcing Arm licenses. And, of course, when Intel completes the acquisition of the Axxia product line from Avago, they may also be shipping ARM-based chips. Okay. Okay.
And then in terms of the trajectory of Arm getting into actual shipped silicon, how can we think about that? It's a fairly long product cycle industry, I would assume, so getting into the product cycle may take a few years. And if you look at your most advanced licensees, how do you see their approach? Is it a ground-zero kind of approach, where we just forget about all our other processor architectures and migrate all our product lines to ARM? Or is it actually a more progressive kind of penetration? Yes. So as you state, the product cycles are longer in networking than, say, in mobile, and they'll vary by the space as well. For instance, right now we're seeing quite a lot of momentum around the new Wi-Fi 802.11ac standard, and so a replacement cycle going on there. And of course, with SDN and other trends on the enterprise side, we're seeing maybe a quicker uptake of new designs there. So there will be pockets where the market will evolve faster and the replacement cycle will take place faster, and I think those are areas where you'll see ARM-based designs, also because those tend to be areas of newer software, where legacy is not an issue. And then you have, as you mentioned, portions of the market that have been based on other architectures for a number of years, and so we're working with the ecosystem to ensure a smooth transition. And really what's happening is that a lot of the OEMs and carriers do not want to maintain multiple code bases, so it's in the interest of our silicon partners to transition their product lines as quickly as they can to the newer ARM architecture, such that OEMs can meet their goals to simplify their product development and their maintenance and support. Okay. Very clear. That's very clear. So you touched on the ecosystem migration, and I actually remember a chat with one of your licensees in the space.
I think it was with Freescale. They were telling me how challenging it actually is for them to migrate a product line to ARM: once you do that, all the software that has been developed by themselves and by their clients needs to be recompiled or adjusted, and the magnitude of that work was probably more than what they had initially anticipated. So the visibility on how and when it is going to be completed was quite challenging. They, of course, absolutely didn't see that as a showstopper or something that could challenge their decision to migrate to ARM, but it seems that moving an ecosystem, moving legacy, is fairly challenging. So would you agree with that assessment? Is that what you see at the moment in the market? Or do you think maybe there are specific players where this is challenging, but other places where it's much easier? I think you said that in Wi-Fi and enterprise it's easier because there is less legacy. So maybe you could tell us where you think it's going to be the most challenging, and how that's going to impact the ramp-up of your penetration in the space? Yes. I think whenever a new architecture enters a new market, there is of course the transition of an ecosystem, and that's quite natural. I think there are a few dynamics working in our favor, in addition to the OEM consolidation around their platforms and wanting to minimize the number and different types of code bases they have. Certainly another one is the transition to 64-bit. As Colin mentioned in the webcast, there are significant pressures on networks in terms of bandwidth today, both from the cloud side on the enterprise, but also on the edge side from subscriber usage. And with that, we are seeing these replacement cycles, as you mentioned, and new ways of structuring the software across networks.
And then there's the increased need for 64-bit, to my previous point. So if you look at where the market is, it's having to transition in terms of software code base throughout most of the market, just because of these trends, and it's starting to rely more and more on open source and some of the other things that Colin talked about. Certainly there will be pockets where everyone who designed the software has been gone for 20 years and no one knows what the software does, but there are technologies to enable that, in terms of binary translation and compiler technologies. So I think we have everything the ecosystem needs to transition, and we've certainly accounted for that phasing of transition and replacement cycles in our forecast. Okay. Maybe it's a good point in time to remind us what sort of long-term guidance you've given on your expectations. I think you estimate your addressable market in networking in the high teens, close to $20 billion, and you've given some kind of 25% to 30% penetration targets. Can you remind us what sort of time horizon you feel comfortable you can get there in, and maybe how much ecosystem transition dynamics could make it happen faster? Maybe there is upside risk to that number; or how much could it be delayed if the ecosystem transition takes longer than what you're expecting today? We've given a 25% to 35% target penetration number for 2018, across this broad range of networking. In terms of your question on ecosystem evolution and the assumptions in those numbers: certainly on the wireless access side, the mobile infrastructure side, we're seeing the ecosystem for base station software start to move over, and so we feel good about that ecosystem. And in wireline networking, a lot of this is moving to Linux.
And so optimized carrier-grade Linux is of use in that market and will continue to grow there; the adoption of Linux in that market is something driving this market penetration. And then in the enterprise networking space, that target number is based on, again, the move to open source software, and also things like needing higher levels of orchestration and optimization at higher levels in the networking stack. So we see things like OpenDaylight and Open vSwitch, a lot of these forums and open source platforms, evolving to address that market. Okay. That's very clear. Thank you. So maybe one last question, just to make sure we set the lay of the land and have a full picture of what this networking opportunity looks like. If I think about this $18 billion, that's the number I have in mind, or maybe you communicated $20 billion, but I don't think we're at a stage where the exact number matters a lot. I've tried to slice it the way we analysts like to do all the time, and I thought it was actually a very varied market. You have a lot of sub-segments. You have processors in there. You have FPGAs. You have very specialized chips, and you have a lot of players; it's a very fragmented market. If you take Broadcom and Texas Instruments, they are probably both at 10% market share, or close to 10%. But then all players tend to sit between the $300 million and $800 million kind of revenue in networking. As I couldn't figure out how to segment this market properly, I'm returning the question to you: how do you guys think about segmenting the market? And maybe the hint I would give you is that an easy way for us analysts and industry observers to think about it would be: in networking, you can have a small chip with just one processor, or you can have a very big chip with a lot of processors, with a lot of parallel computing.
And then you also have big chips with just one processor or a small number of processors and a lot of other things, specialized logic like, for instance, FPGAs or arrays of DSPs, etcetera. So if I think about these three categories of chips, first of all, is that a relevant way to think about it for you guys? And second, what would be your rough idea of how the market splits between these three, and of course how it's going to evolve over time? Because, as you mentioned, things are changing a lot in the industry. A very complex question, and I'll try not to make the answer too complex. As you say, the most important thing here is that this is a very diverse market, so I think your breakout is fair, but I'd add some context around it. You mentioned low-ASP single-processor designs; I think moving forward there'll be very few single-processor designs. Dual-core, or maybe even quad-core, could be the norm even for what you might consider lower-performance, lower-ASP designs, because of these trends we're seeing. Take something like a Wi-Fi access point that seems quite simple: once you get into the enterprise space, and increasingly maybe even the carrier access space, with the combination of cellular technologies and Wi-Fi handoff, a lot more control processing is taking place, and that's driving up the complexity there. At the other end of the spectrum, you could have lower processor density on bigger chips. As you mentioned, this might be a switch chip where the switch fabric is taking up billions of transistors and the processors are a small portion of that chip. And then, of course, there are chips that have tens, maybe even hundreds, of cores, and those types of designs are certainly increasing in penetration across this spectrum. And as you know, our business model is highly scalable.
We look to address this entire market. In general terms, the simpler chips tend to have a lower royalty per chip, because they have fewer processors and the processors might not be as complex. As we get to the chips that have more processors, that have 64-bit processors, those chips will tend to carry a higher royalty, and they'll tend to have higher ASPs, as you pointed out, underlying that higher royalty. So we do look at it across the spectrum and can break it out in those general categories. But in general, we do see that processing requirements are rising across that spectrum, even from the low end moving up. Okay. And if you think about this very high processor count type of architecture versus architectures where you have a much lower processor count, do you think the latter is going to leave more and more room for the high processor count, because that's where the market is going? Do you see places where you would have just one processor and a gigantic FPGA array being replaced by application-specific processors, application-specific chips with a lot of processors? How do you see that evolving? Yes, that is a trend in some places; not necessarily the FPGA replacement, but a simple chip being replaced by maybe a more complex chip. And a lot of that is due to integration. In many cases, it's actually multiple chips being replaced by a single chip that integrates more of the intelligence onto a single die, or a multi-die package. So we will see more of that, and we are seeing it today. And at the same time, we're seeing intelligence being put into places where previously there was no intelligence. Switches are a good example: traditionally, they've had, you know, no processing capability.
There are now places where switches increasingly have processing capability; trends like SDN, which require the termination of some protocols on the switch itself, add more complexity and the need for processing in some switch fabrics. So we're seeing both higher integration of existing platforms, and a base station baseband platform is a good example of that, but we're also seeing intelligence now popping up in other parts of the network, and so processing capability being added there. Okay. So more processing capability, more raw processing volume, and more integration would be the two trends? Okay. Very clear. Thank you. And one thing, just jumping in from memory: I remember reading, and I wonder if it was Microsoft looking into this, about a kind of opposite movement, where, especially for search, they would use processors combined with FPGAs and non-processor logic to accelerate the processing of their search engines. I don't know how tangible this already is as a trend that could emerge on the server side, on the pure processing side, but I was wondering if in networking this is also something you could see: places with very high compute requirements where alternatives to pure raw processing power could emerge, with processors mixed with faster, more dedicated types of elements like an FPGA? Yes, Pierre, you bring up a really good point. Certainly the underpinnings of networking have always relied on acceleration. And the architectures that have evolved, because of where technology has been, have involved the separation of the compute part of the equipment from the acceleration technology, scheduling, and other things that need to happen in networking for latency and throughput reasons.
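The compute/acceleration separation described above is often talked about as a fast path and a slow path: flows the accelerator already knows are forwarded without touching the general-purpose cores, and everything else falls back to software, which then installs the result into the accelerator. A minimal sketch of that dispatch pattern, with an ordinary dict standing in for the hardware offload table and all names purely illustrative:

```python
# Hedged sketch of a fast-path/slow-path split. The dict-based "offload
# table" stands in for accelerator state; hash() % 4 stands in for a
# real route lookup. None of this models any specific vendor hardware.

def make_forwarder():
    offload_table = {}                   # flow id -> output port ("fast path")
    stats = {"fast": 0, "slow": 0}

    def slow_path(flow_id):
        # Software decision on a general-purpose core, then install the
        # result into the accelerator so later packets take the fast path.
        port = hash(flow_id) % 4         # stand-in for a real route lookup
        offload_table[flow_id] = port
        stats["slow"] += 1
        return port

    def forward(flow_id):
        if flow_id in offload_table:     # fast path: accelerator hit
            stats["fast"] += 1
            return offload_table[flow_id]
        return slow_path(flow_id)        # slow path: fall back to software

    return forward, stats
```

Only the first packet of a flow pays the slow-path cost; every subsequent packet of the same flow is handled by the "accelerator" lookup, which is the latency and throughput argument the answer is making.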
And with this increasing integration, what's happening is that those are being combined onto single chips. So whether that is FPGAs that are adding compute subsystems, as Altera and Xilinx have done, to address certain parts of the market that still need a lot of flexibility in the acceleration portion, so they can have a fast path in the FPGA logic that they can program; or whether it be a chip that has a combination of general-purpose ARM processors and some more programmable types of processing and accelerators, with a software development kit on top that an end user can then program in a generic way, though not as generic as, say, the general-purpose processors. And so that enables faster throughput and lower latency than you would get just using a high single-threaded general-purpose processor. We definitely see the market moving more and more towards that, and some of the initiatives Colin mentioned in the presentation are really enabling that. One primary one is an API called OpenDataPlane. This is all about abstracting out that logic I mentioned, that acceleration, fast-path logic. And so now OEMs have a common interface to program that type of logic, whether it is a chip from vendor X or a chip from vendor Y, and they won't need to pay attention to what the specific hardware components are on that chip. Okay. That's very clear. So lots of things could move from where we stand today. Now maybe we can move on to the second batch of questions I had in mind, which are more about how you see the market structure evolving over time. If I take a snapshot of where we stand today, we know that Intel, with maybe 30% of the discrete processor market in networking, has maybe between 5% and clearly less than 10% share of this $18 billion kind of market. So the footprint of Intel, of x86, is actually relatively small.
Then I think MIPS and PowerPC are the other legacy architectures we find in networking. So is it fair to say that in the long run it's going to be ARM and Intel, and that's it; other architectures are likely to disappear over time, based on the comments you made earlier about your clients not being willing to maintain multiple architectures? I think it's fair to say that there's a consolidation of architectures, as you mentioned. And given the licensing momentum we've seen and the announcements of PowerPC and MIPS silicon users transitioning to ARM, the market is moving to two architectures, as you say. Okay. And so is it going to be Intel keeping the kind of 5% to 7% footprint I just estimated and you guys taking the 93% to 95% remaining footprint? Or do you think Intel also has an opportunity to increase its presence in networking? If we listen to what the company says, and if we look at what the company is doing, they seem to be fairly active in the space. So how do you read their behavior at the moment? I would step back and say there's a huge opportunity in networking for silicon providers today, for all the reasons we've mentioned, with a great amount of innovation and investment going on in the market, so clearly everyone is trying to meet the needs of this market and address it. We generally view the types of discrete processors you mentioned, the primary market for x86 today, as being at points in the network that are most like a server in terms of processing capabilities, and the ARM ecosystem as providing the operational parts of the network that really require the high throughput and lower latency, which is the highest proportion of the types of workload and processing that go on. So we think our ecosystem is very well poised in terms of engagements with the end users.
The existing ecosystem, and certainly the ecosystem that's transitioning. And of course, my team is working very hard on what we feel the future ecosystem will need to be around the software-defined types of networking technology. So I think Intel and x86 will need to make similar types of investments and look at where the market is going. And it's hard to say how successful any one player will be as they address this kind of rapidly evolving market. Okay. So let me maybe give you the kind of simplified long-term perspective with which I like to think about that question. With this whole SDN story, everybody would be tempted to say almost my kids would know what the data plane is and what the control plane is. So you have this idea that the data plane components have to do the heavy lifting, forwarding packets of information through the network at very high speed, and then you have the control plane that is basically trying to think through the process and give instructions to these data plane components. That's kind of where we want to see the network evolving: a control plane that is very open and normalized and codified, and I would say open source, communicating with a data plane. And I think the next step of the thinking would be that today any sort of processing is being run on the cloud. We see every single equipment vendor offering their core products, like an evolved packet core, or even edge products like edge routers, actually running on the cloud, running from dispersed data centers. The most amazing example of that is that the first networking components that are going to run in a cloud environment are going to be set-top boxes.
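The control-plane/data-plane split Pierre describes can be sketched as a toy model: a controller that owns the network-wide view computes forwarding rules and pushes them down, while the switch data plane does nothing but match packets against its flow table. All names here are illustrative and do not follow any real SDN API such as OpenFlow.

```python
# Minimal control-plane / data-plane sketch (illustrative names only).

class Switch:
    """Data plane: match-and-forward only, no routing logic of its own."""
    def __init__(self):
        self.flow_table = {}             # destination -> output port

    def install_rule(self, dst, port):   # invoked by the controller
        self.flow_table[dst] = port

    def forward(self, dst):
        # Unknown flows would normally be punted up to the controller.
        return self.flow_table.get(dst, "punt-to-controller")

class Controller:
    """Control plane: holds the network-wide view and pushes rules down."""
    def __init__(self, topology):
        self.topology = topology         # destination -> port, computed centrally

    def program(self, switch):
        for dst, port in self.topology.items():
            switch.install_rule(dst, port)

sw = Switch()
Controller({"10.0.0.1": 1, "10.0.0.2": 2}).program(sw)
```

The asymmetry is the point of the discussion: the `Controller` can run anywhere (including a distant cloud), while the `Switch` lookup is the high-speed, high-volume work that stays in the box.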
So the set-top box you have at home, from which you get all your services, is going to become a pure packet-forwarding kind of box, and then all the intelligence of the services being delivered to you is going to come from a cloud somewhere in the country, if not somewhere else on the planet. So if I think about the evolution of networks along these lines, all the control plane is going to end up in the cloud and all the packet forwarding, all the data plane, is going to stick to the boxes. And then maybe that's the way to think about how the world will split between Intel and ARM: Intel will have the upper hand in the cloud environment, in the large data center environment, and ARM will have the upper hand in the highly integrated data plane chips. So what would be your reaction to that kind of conceptual vision of the world? Is that the right direction? Is that where we are going? Or am I missing something? Well, I think when people talk about cloud, and I'm glad you bring up the set-top box because that's one we're actively working on, so I'll draw it into this answer, they think of, as you say, a massive data center somewhere very far away, and then you're thinking of the endpoint. In reality, what we're seeing is that instead of an intelligent endpoint and an intelligent cloud with a dumb network in between, we're actually starting to see a more intelligent network, and then, on top of that, what I call a dispersed cloud. So in things like set-top box and cloud, we're actually seeing more experiments where cloud-based systems are sitting in the network, for set-top box offload, and people are looking at workload-optimized types of deployment.
So they're not looking at what has been the traditional legacy server, a high single-thread general-purpose processor; they're actually looking at systems that are SoC-based, that allow workload optimization for very high density and, over the lifespan, lower operating costs. And so our view is that you're actually going to see more of this intelligent network, thinking of the cloud as actually a dispersed cloud, and, with that, both of those elements looking at workload optimization and flexibility: optimized for certain types of tasks, with the orchestration and the software such that you can evolve services very quickly, change services and deploy new services very quickly, but still on an optimized underlying infrastructure. Okay. That's very interesting. I actually wish my good friend John Chambers were on the call with us today, because he's been trying to convince and evangelize the planet with his concept of fog computing, this idea that you need compute and processing power and intelligence at every level of the network, and that a very centralized cloud with no intelligence in the rest of the network is not the best option. So is it fair to say that you would agree with that view, and, whatever you may think about the wording "fog computing," that this idea that compute is going to be very distributed in the network is the future? Yes, I think we're aligned with that view. We're obviously starting to see parts of that today. And certainly as you look at IoT and the discussions happening around it, and the different types of requirements for control of billions of nodes, the tendency is really to see how we can solve some of those problems at the edge of the network, so that you're not creating more and more pressure back at the data center, in a highly condensed kind of cloud environment. Okay.
So there was one way to think about it, the cloud versus distributed, and the idea that all processing would go to the cloud, or that the control plane would go to the cloud and only the data plane would stick to the distributed network, is probably not the right way to think about it. Then there is another conceptual divide in the market, and I think some of Colin's slides touched on that very well. There are places where you need super-high-performance single-threaded processing, and there are places where you actually need a lot of scale and multi-threaded processing, and where you also need a lot of integration. If I recall one of Colin's slides, MAC scheduling is a place where high single-threaded processing is important; the control plane, of course, is where raw processing and single-threaded processing is very important; and then I think you have the data plane, where integration and multi-threaded, highly parallel architectures are probably what matters most. There were four boxes on that slide, but I have to admit I can't remember what the fourth box was. So the way I think about it, without making any comment on who wins, and not thinking about it on a relative basis between ARM and Intel but more about where Intel is more at ease and where ARM is more at ease: is it fair to say that Intel will feel more comfortable where the very high single-threaded performance requirement is, which would be the control plane and maybe MAC scheduling, and that in places where highly parallel architectures, multi-threading, and integration are important, the full flexibility of the ARM model would be the most successful? Is that a fair way to think about it, at least directionally?
I guess I would add the context that, with increasing levels of integration, you're not going to have as many separate applications that are just a control plane box. And so the balance between control plane and data plane, and the tight coupling of control and data, is becoming more important. That is something that lends itself towards SoC capabilities, certainly, but also towards where the Arm roadmap is going, and even where our architecture partners are going. So I don't think, moving forward, you can assume that x86 will have the high single-threaded performance advantage that it might have had in the past.

Okay, very clear. I'd love to continue on these very technical topics, but I'm conscious of time. So maybe I'll move on to a broader perspective, and I'd love to hear your thoughts on how you think you are impacting the value chain. My first question would be very simple and maybe very candid: could you tell us who were your biggest sponsors when you got into networking, who was really excited about talking to you, amongst chip manufacturers, and, one level above, amongst equipment vendors and amongst service providers? And of course there's a flip side to that question: who was resisting, and who perceived you, at least initially, as a threat?

Interesting question. Yes, I think in general the resistance to Arm, or to change, is usually in pockets of the market where there is legacy and people do not want to invest. But as we've mentioned, the transformation going on in infrastructure has created real pressures for carriers, for OEMs, and then for the silicon partners that supply them. So I think it's fair to say that the market has really been looking for a way to provide more efficient solutions.
And so that is everything from the physical solution, higher density, better performance and power optimized, to, as we mentioned earlier, a consolidation of code bases, being able to use more open source software, etcetera. So that has all really been positive in terms of the reception that we've gotten from the various parts of the value chain. And I think you just need to look to things like Linaro, which is a non-profit foundation dedicated to providing optimized Linux contributions upstream to the Linux kernel for ARM-based platforms. In that group we have a networking group, and members of that networking group include pretty much the broad spectrum of our silicon partners, so Broadcom, Applied Micro, Freescale, TI (I'm probably going to leave some off, but, you know, the broad spectrum), as well as OEMs like NSN and Cisco. So we've had really, I think, positive momentum and a good reception from the value chain.

Okay, great. And I think I'll limit myself to just one last question, and that will be my concluding question. A lot of observers looking at this evolution, at a consolidation, a concentration of processor architectures in networking, immediately think about commoditization. So if everybody uses chips with ARM architectures on them, this is going to be a commodity market. Some will argue that doing networking chips is going to become a commodity business; some would argue that doing networking equipment is going to become a commodity business. And then the conclusion of that whole line of thinking would be that Arm is actually going to increase efficiency, reduce the price of equipment, etcetera, but it also means Arm is going to badly hurt the profit pools of the value chain. What would be your reaction to that?

Yes, I think broadly Arm is enabling innovation in the right places. And I think it's really up to the carriers and the OEMs.
And their strategies will determine how this market evolves, and what parts of the market are commoditized and which aren't. I think the Arm business model enables the flexibility for these new types of networks. And you have to remember that networking chips are some of the most, I would say they are the most, complex chips across markets. Broadcom just announced a new switch chip, for instance, with billions of transistors. They're very complex. And so that, I think, is going to keep the value up in the networking silicon market, because there are many more components that go into it in addition to the general purpose compute.

Thank you very much. I said that was my last question, so I'll refrain from asking the many more I would have. Thank you very much for taking the time to do this call; I learned a lot. And thank you all for taking the time to listen to us. I'll hand over to Jan for closing remarks.

Well, thank you, Pierre. Thank you, Charlene. Certainly, I know that we've taken lots of notes at this end, and I think you've answered questions that I've certainly been asked over the last few months about our opportunity in enterprise networking. So thank you to everybody who's dialed into the call; we hope you've also found this useful. Just so you know, we are planning a call in Q4 about ARMv8's potential penetration into smart mobile devices going forward. There'll be more information about that coming up on our website later. In the meantime, hopefully we'll be seeing some of you next week at our technology conference. There is an investor event associated with that on Thursday, 2nd October; this is based in Santa Clara. Again, information on how to get involved with that is also on the website. So, just finally, we have our Q2 results on the 21st October, and we'll be on the road immediately after, so hopefully we'll be seeing most of you then.
And so thank you very much indeed, and a good evening, good afternoon, or good morning to you all. Thank you.