Thank you very much for joining us today at NTT PR/IR Day: Towards the Future with IOWN, despite your busy schedules. We will start in a moment. The restroom is out the door to your right, and this room is non-smoking. If you could switch off your mobile phone or set it to silent mode, we would greatly appreciate it. Today we have simultaneous interpretation: please tune your device to channel one for Japanese or channel two for English. We have also distributed questionnaires. We kindly ask you to complete them and leave them at your seats as you leave after the event. Please note that today's proceedings will be streamed live online and made available on demand at a later date. We have a Q&A session today.
Questions will be accepted from the attendees present at the venue and from those who registered in advance and are connected via the conference call system. We will start the session at 3:00 P.M. Thank you.
[Foreign language]
Thank you very much for joining us today at the NTT PR/IR Day: Towards the Future with IOWN, despite your busy schedules. I am Eriko Kaneko, your moderator for today. Thank you. We are truly grateful to have so many of you participating both in person and online today. We have scheduled presentations on topics of high interest to our investors, followed by a Q&A session, and we hope you will stay with us until the end. The event is scheduled to conclude around 5:45 P.M. Please note that today's proceedings will be available on demand at a later date. We appreciate your understanding. Now, let us start the program. To open NTT PR/IR Day, Mr. Riaki Hoshino, Representative Director, Senior Executive Vice President, and CTO of NTT, Inc., would like to say a few words and give his presentation, Toward the Future with IOWN: Low-power optical computing for the AI era. Mr. Hoshino, the floor is yours.
[Foreign language]
Thank you very much for taking time out of your busy schedules to attend our NTT PR/IR Day today. I am Hoshino, Senior Executive Vice President and CTO of NTT, Inc. The title of my presentation is Toward the Future with IOWN. Though our time is limited today, I will share NTT's initiatives on the future we aim for with IOWN, and the partner companies collaborating with us will give wide-ranging talks on future strategies, product development status, and more, so that you can deepen your understanding of our IOWN initiatives. Under IOWN 1.0 we pursued the APN and the network, but today's presentation is titled Toward the Future: what we are trying to do under IOWN 2.0 is optical computing, so let me focus on optical computing. The program was shown on the slide earlier.
I will first provide an introduction to today's session. I will be followed by Mr. Tomizawa from NTT Innovative Devices, who is spearheading the development of the photonics-electronics convergence (PEC) switch and leading collaboration efforts with other companies. Then Mr. An-Jye Huang, Co-Founder and Senior Advisor of Accton Technology Inc., and Mr. Ram Velaga, Senior Vice President and General Manager of Broadcom, whom I mentioned earlier, will join us to talk about the outlook for the AI era from a global perspective, where we hope you will gain insight into the potential of the IOWN concept. Finally, our Executive Vice President, Ms. Oonishi, will conclude by reiterating IOWN's role as the foundational infrastructure of the AI era. We will hold Q&A sessions in between, but as time will be limited, we will also hold an overall Q&A session at the end with President Shimada.
We hope you will ask many questions to deepen your understanding. Although time is limited, I hope you will come to understand IOWN better. Now, under the title Toward the Future with IOWN: Low-power optical computing for the AI era, I will introduce NTT's IOWN vision, focusing on our progress. We have many attendees joining us, including online participants, and we recognize that your prior exposure to IOWN varies: some of you may have heard this many times, and for some it may be new. So I'd like to first take a moment to outline the aspects I plan to cover in today's presentation.
First of all, as you can see on the right side, with the APN, the All-Photonics Network, we have been focusing on the network until now, but photonics-electronics convergence (PEC) devices and DCI, the data-centric infrastructure, are also components for using computing in a decentralized manner. In the world of athletics, the APN has already been used, and we are starting to see actual use cases, so implementation is ongoing. But as I mentioned earlier, we are looking beyond, toward the future. This initiative aims to complement the limitations of electronic circuits, such as power consumption, speed, and heat generation, with optical technology, thereby supporting the next-generation information processing infrastructure. In particular, we have discussed how photonics-electronics convergence devices can be leveraged to enhance the power efficiency of computing resources, which will be utilized more extensively than ever in the AI era. This is my topic today.
I think you have heard this many times, but data centers and power-consuming devices are increasing; we are entering such an era. With the growth of AI, we have said that power consumption will increase. The forecast numbers differ, but regardless of the level, it is definitely increasing. Data center power consumption is rising, and the AI market is expected to grow 20 times compared to 2021, so the implementation of new technology is a must. We have been saying this for a few years and have been promoting IOWN, but now other players are taking various similar actions, and our efforts are in greater demand with the passage of time. The general direction for the infrastructure is mainly twofold: first, reducing total infrastructure power consumption by controlling the power consumption of networks and individual devices, and second, efficient infrastructure operation by making effective use of distributed infrastructure.
Efficient infrastructure operation encompasses not only sharing resources for effective utilization; given the current electric power situation, we are also increasing usage in regions like Hokkaido and Kyushu. By diversifying locations, we can pursue more green, renewable energy and reduce total infrastructure power consumption. You may know this, but AI uses GPUs, and a GPU server contains not only GPUs but also CPUs, memory, and storage, among which communication takes place; a GPU cluster functions as an assembly of multiple GPU servers connected via high-speed switches. Therefore, as usage increases, these communications need to reduce their power consumption. With photonics, power consumption does not change at higher speeds, but with electrical wiring, as the communication frequency increases, consumption increases accordingly. That is why photonics has so far been used mainly for long distances. Here, the horizontal axis is the transmission distance, and the vertical axis is the power consumption.
The blue line shows that with electrical wiring, once the speed goes up, the consumption goes up. So within GPU servers, shifting to photonics will be a must. Now, the IOWN roadmap: IOWN 1.0 uses PEC-1. We have been using this, but in computer chips, more connections will be needed, so you see PEC-2, 3, and 4. There are multiple packages combined on the board. Photonics was not on the board before, but with PEC-2, the PEC element is mounted on the board for connection. Now, at the Kansai Expo, we had many use cases — Perfume and Cho Kabuki, and remote production of Expo coverage for TV broadcasters — and IOWN computing is inside as well. Outside the NTT Pavilion, you can see the shading curtain, which waves in response to people's expressions of joy and emotion.
This is realized with IOWN photonics disaggregated computing, incorporating the PEC switch. In this switch, the PEC devices are attached around the switchboard — the blue part you see. This narrows the transmission distance to the ASIC; the electrical distance is reduced to reduce power consumption. In front of the ASIC sits the photonics engine, PEC-2, which NTT Innovative Devices developed — he will come and talk about it later. You may know the company NTT Innovative Devices. NTT is incorporating these devices — the AWG, the splitter — in a pluggable package for this transmission, and the waveguide and the components to reduce heat dissipation have been developed. This PEC part is developed by NTT Innovative Devices, and we will elaborate on it later.
As for the other part: in making this PEC switch, the box combines the switch ASIC and the PEC. The switch ASIC is from Broadcom, and the switch box is made by Accton. The two companies are here with us to talk about the potential of PEC-2 and optical computing and the necessity thereof; we need to have this perspective and this knowledge. Lastly, going beyond, we will see higher speeds. Now it is the ASIC: we only need to connect to the ASIC, but eventually we need to go inside — we need to have photonics inside. So PEC-3 is also being prepared, and we need to put it into the package; the chip is miniaturized and made very small. For PEC-2 as well, as we compete with other players, the performance and the yield need to improve.
For PEC-3, we need to stay ahead of others. Now, from the actual manufacturer, Innovative Devices, Mr. Tomizawa will talk in more detail.

Vice President Hoshino, thank you very much. Next, under the title Realizing IOWN 2.0: Development Status and Future Outlook of Photonics-Electronics Convergence Network Switches, Mr. Masahito Tomizawa, Senior Executive Vice President and CTO of NTT Innovative Devices Corporation, will give a presentation. Please go ahead.
[Foreign language]
Ladies and gentlemen, hello. Thank you very much for coming here today despite your busy schedules. My name is Tomizawa; it is very nice to speak to you. I come from a company called NTT Innovative Devices Corporation. It is a subsidiary of NTT and, within the NTT Group, quite a unique company: actual hardware and devices are developed, manufactured, and sold here. So it is a very unique company within the group.
The technology that has been researched and developed within NTT is all put onto devices, sold outside, and used by everybody, so that it contributes to people's lives, work, and society. Under today's title, I will be making a presentation, and some detailed technical explanations may come up, so please forgive me for that in advance. I'd like to take a little of your time today. NTT has been working on optical communications — photonics — for a long time, and recently its importance has been increasing; it is spoken about under the keyword IOWN. We are continuing to take on the challenge of realizing IOWN.
In fact, with IOWN's PEC-1, the word APN came out, as Mr. Hoshino mentioned. In reality, distance will disappear, including latency; with PEC-1, or IOWN 1.0, this started to be realized. But today I will be speaking about the right-hand side: solving the power issue at data centers. If we use IOWN 2.0's PEC-2, the data center on the left-hand side, shown with a lot of smoke coming out, will change into the greener society shown on the right-hand side.
I would like you to keep this in mind while I make my presentation today. In the world right now, the word AI is spoken about a lot, and there are large language models. Here I have compared them to conventional, legacy cloud services. Conventionally, the task size was small, and the processor's performance or capability was larger.
One processor's capability was shared among multiple tasks, and virtualization was the key technology for processing these individual tasks. But with LLMs, the task size becomes enormous: one processor cannot complete the task. To complete one task, more than 1,000 processors are required. What becomes necessary here is the communications network between the processors. An enormous data rate is output, and the only answer to this situation is optics. Inside and between data centers, optical communications technology is being used for these connections, and demand for it is accelerating, meaning that the need for optical links between processors is accumulating.
The horizontal axis is the computational capability, and the vertical axis is the processing power that is required. Just slightly before 2020, the slope of this line became sharply steeper — this is since the rise of AI. Seeing this line, we thought we had to do something about it. Of course, people in the semiconductor industry thought about it, and we in the network and device industry have to muster comprehensive power to catch up with this trend. It is starting to happen, and it is already happening. To increase the capability of the processor, there are various directions we can follow. The processor is made of semiconductors, and I call these the three pillars of semiconductors; we need to combine them. The first arrow is advanced semiconductor miniaturization: it started from 7 nm.
We have come to 2 nm, and from now on it will be sub-2-nanometer, the angstrom era. We need to continue with miniaturization, and in Japan we are once again putting quite a focus on semiconductors. But this alone is not enough, so there is the second arrow, or pillar: packaging. When a semiconductor chip is packaged and implemented, the technology required used to be two-dimensional, but we will need 2.5-dimensional or three-dimensional packaging, or we will not be able to keep up. That is the second arrow. But today, what I would like to focus on is the third arrow: photonics-electronics convergence. As Mr. Hoshino mentioned, rising data volume means that frequency will go up as well.
If we cannot implement that with a low amount of electricity, the power issue is going to become very severe; to construct new data centers, we would need several additional nuclear power plants. So, using this technology, we have to increase the performance or capacity of the processor within a realistic power consumption. This is what I would like to talk about. First, the semiconductor process — the first arrow — and miniaturization: as is often discussed, semiconductor integration density doubles every 18 months. Now, please look at the red and green lines: frequency and power consumption. As the years go by, they are not increasing; they have hit the ceiling.
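The 18-month doubling rule mentioned here compounds quickly, which is why integration density keeps climbing even while clock frequency and power have plateaued. A minimal sketch of the arithmetic (an illustration of the stated rule, not NTT's data):

```python
def density_growth(years: float, doubling_months: float = 18) -> float:
    """Multiplicative growth in integration density if density doubles
    every `doubling_months` months (the 18-month rule from the talk)."""
    return 2 ** (years * 12 / doubling_months)

# 18 months -> exactly one doubling
print(f"1.5 years: {density_growth(1.5):.1f}x")   # 2.0x
# A decade of the same rule compounds to roughly two orders of magnitude.
print(f"10 years:  ~{density_growth(10):.0f}x")   # ~102x
```

The point of the slide is that this density curve kept its slope while the frequency and power curves flattened, so further performance must come from somewhere other than miniaturization alone.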
So how are we going to increase the frequency and the power? Miniaturization alone is not enough. That is why, as seen in Mr. Hoshino's presentation, optical technology comes in. The vertical axis is power consumption; the horizontal axis, excuse me, is the transmission distance. Please look at the horizontal axis: it is in centimeters. It used to be in units of 100 m, but now it is centimeters, and the power at these distances is extremely important. The blue lines are electrical wiring at 10 GHz and 20 GHz. In reality, we are entering a 50 or 100 GHz era, so this graph will not be enough — it will go beyond this graph. With optical wiring, even at a transmission distance of 30 centimeters, the power consumption does not go up; you get clean, eco-friendly wiring.
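The qualitative picture on this slide can be sketched with a toy model — the coefficients below are our own illustrative placeholders, not NTT's measured data. The shape is what matters: electrical link energy grows with both wiring distance and signaling frequency, while an optical link pays a roughly fixed electro-optic conversion cost per bit regardless of distance.

```python
def electrical_pj_per_bit(distance_cm: float, freq_ghz: float,
                          coeff: float = 0.05) -> float:
    """Toy model: per-bit energy of an electrical trace rises with
    both wiring distance and signaling frequency. `coeff` is a
    hypothetical placeholder, not a measured value."""
    return coeff * distance_cm * freq_ghz

def optical_pj_per_bit(distance_cm: float, fixed_pj: float = 3.9) -> float:
    """Toy model: optical link energy is dominated by a fixed
    conversion cost per bit and is nearly flat with distance."""
    return fixed_pj

# Distances in cm (the slide's horizontal axis) and signaling rates in GHz.
for d_cm in (1, 10, 30):
    for f_ghz in (10, 20, 50):
        e = electrical_pj_per_bit(d_cm, f_ghz)
        o = optical_pj_per_bit(d_cm)
        print(f"{d_cm:>2} cm @ {f_ghz:>2} GHz: electrical {e:6.1f} pJ/b,"
              f" optical {o:.1f} pJ/b")
```

With any positive coefficient, the electrical curve eventually crosses the flat optical one as distance or frequency grows, which is the crossover the talk describes moving down from hundreds of meters to centimeters.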
So inside data centers, inside servers, and all the way inside semiconductor chips, optical signals will be incorporated very shortly. This is the timeline and the roadmap our company is aiming for. From the left: PEC, photonics-electronics convergence — you see PEC-1, 2, 3, 4. On the left-hand side, our company already has commercialized products, mainly for the telecom industry: silicon photonics and DSP PEC devices are out there. These are PEC devices connecting data centers 100 km or 1,000 km apart at 400G and 800G; 1.6T is next and already commercialized, and after that comes 3.2T, with further progress to come. Category-wise, these connect between data centers — important devices, but an existing category. What I am talking about today is PEC-2.
This is inside the data center, between boards. Right now they are connected electrically, with copper wire and heavy cables, but we want to replace that with optics. A switch ASIC is here, and directly around it sit the converging devices that go from electronics to optics and from optics to electronics; the shorter the distance, the lower the power consumption. So 1.6 Tbps, 3.2 Tbps, and we are developing 6.4 Tbps — I will talk about this later, but we have a model of it. Based on this, we would like to provide and launch a box that realizes the actual switch function. The transmission distance here is about 10 meters. Next is PEC-3, which will connect package to package.
The distance there is around centimeters — short distance — with a commercialization target of about 2028. And in the future, we would like to connect inside the package as well, at millimeter distances; this is in the roadmap, and the research laboratories of NTT are working hard toward this target. Now, I'd like to explain why NTT is suited for this area. This is NTT's history of optical innovation. In the early 1980s, a high-quality mass-production method for optical fibers was developed at the NTT laboratories. Since then, we have gone from 100 Mb to 10 Gb, 100 Gb, and 1 Tb. The horizontal axis shows the year of commercialization, and you can see the data rates going all the way up to one petabit. We are now in what is called the third generation of optical.
We have made this progress, and IOWN targets a data rate of one petabit. At NTT Innovative Devices — under its former name, NTT Electronics — contributions were made in these devices: lasers, glass circuit chips, new formats, digital signal processors (DSP), and COSA. With this variety of products, the company has been contributing to the development of optical communications. How much has it progressed? Looking at 50 years of history, it is a multiple of 1,000. Supported by these devices, optical communications have spread through the world and penetrated society, enabling your phones and PCs, the combination of smartphones and the cloud, and now AI — this whole variety of services.
On the right-hand side, this shows products using NTT's digital signal processing (DSP) in ZR. Our products are used in 72% of Japan's network, and in the United States, 14-plus percent. So when you are touching your smartphone, the signals pass through our devices on their way across Japan or the United States. With this base of miniaturization, shorter distances, and mass production, we would like to realize the targets of IOWN. Now, today I am talking about switches. What does the conventional switch look like? Broadcom, who will come up on stage later, makes the switch chip for information processing — it is here. Conventionally, the optical communication module sat at the entrance of the box, and the distance to the chip was about 30 cm. This distance — the switch interface — was the problem.
Its power consumption is large, and over a long distance it adds up to a large amount. First, we had to narrow the distance and reduce the chip power consumption down to one-tenth. With consumption lowered, we decided to develop this as a switch. This is the top view of the prototype we developed; the optical engine we developed — the PEC device — is used here. Now, how much impact did this have? At the Kansai Expo in Osaka, in the NTT Pavilion, our device is used in a new type of switch, a new computer architecture, and the associated optimized software. Altogether, the computer power consumption is down to one-eighth of the conventional type. Here you can see the NTT Pavilion; the APN connects the two sites, and real-time AI analysis results are transferred.
It is not just our device: the architecture and software all combined give this impact. The prototype is great — now, what about the actual business and product? You can see it in this diagram. Our product is this long silver box: the optical engine for 6.4 Tbps. And this is the switch module. In the center, the Broadcom switch chip is mounted, and it is assembled into this box by Accton; we will hear from them later. This box is assembled into a rack and installed in the data center. This is the project now ongoing. The commercialization target for the module is Q4 next year, and we will offer samples; verification of the component characteristics is in progress. The manufacturing capacity is 5,000 units per line.
We will have multiple lines to meet this large AI demand, and the production line structure is being built as we speak. Now, where in the data center will this be used? In the data center there are a scale-up domain and a scale-out domain. The light blue is the rack. In the rack you can see the switch, and the accelerators are connected with each other — this is scale-up. Between racks, we need to connect with switches, which is scale-out. In the short term, we are addressing scale-out, so the PEC switch will first be used in the scale-out domain. After that, photonics will come into the scale-up domain, the light blue part. Then the blue parts will be connected through scale-out with a bypass; that domain switch will also be necessary in the future.
One, two, and three is the order in which the PEC switch will be utilized — this is our current projection. Now, our PEC-2 model and its commercialization: the switch has 102.4 Tbps capacity. Tera is 10 to the power of 12, so 100 Tb is on the order of 10 to the power of 14 — in one second, about 10 to the power of 14 bits. I mentioned one-eighth earlier for the whole system; with the switch alone, compared to the conventional one, the power will be about 50% — a 50% power reduction. Now, partnerships, which we will hear about later: Broadcom supplies the ASIC chip, and our module is assembled into the box by Accton. This is the partnership we now have. In 2026 we will launch in the market: samples in Q2 and the switch module in Q4.
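The capacity arithmetic above checks out, and dividing the switch capacity by the 6.4 Tbps optical engine mentioned earlier gives a round engine count — that division is our own back-of-the-envelope, not a confirmed product configuration:

```python
TERA = 10 ** 12  # tera = 10^12

switch_capacity_bps = 102.4 * TERA            # 102.4 Tbps switch capacity
# ~1.024e14 bits every second, i.e. on the order of 10^14 as stated.
print(f"{switch_capacity_bps:.3e} bits per second")

# Hypothetical sanity check: how many 6.4 Tbps optical engines would
# cover the full switch capacity? (Our division, not a spec.)
engine_bps = 6.4 * TERA
print(f"{switch_capacity_bps / engine_bps:.0f} engines of 6.4 Tbps")  # 16
```

Sixteen engines of 6.4 Tbps exactly cover 102.4 Tbps, which is consistent with the photo of the module showing multiple separate optical engines around the ASIC.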
In any case, by next fiscal year we will realize commercialization, and business negotiations with end users are also progressing. Now, how do our products compare with other players'? This is shown on this slide: CPO, co-packaged optics. We see this activity around the world, but we have been saying IOWN since 2019 and have been working on this ever since — we started earlier. What is our advantage? Please look at the lower-right table. Others need the photonics to be close to the LSI, so they solder it. In our case, we use a socket, not soldering: the optical engine can be attached and detached. That is our difference. If one optical engine interface breaks down, elsewhere you have to replace the whole thing, but in our case we can replace just that one part.
So the repair cost can be reduced. As for interfaces, there are various requirements — short and long reach, wavelength division — and our method can flexibly support them. Data centers are increasing and locations change, so short and long distances change every time, and we can deal with that flexibly. That is our advantage. In the future — Broadcom will probably touch on this later — one supplier cannot cover everything, so a multi-vendor, multi-supplier ecosystem needs to be built to change the industry. With flexible attaching and detaching, we can move with standardization; we are in line with future standardization. That is our difference from our competitors. Now, is there any problem with the socket? We are often asked. The lower left shows that there is no problem — basically, no problem. I will not go into technical details.
The goals are 3.9 pJ per bit power efficiency and a shoreline density of 0.4 Tbps per millimeter, to be commercialized by next year. Now, the list of partners: I mentioned Broadcom and Accton, and in Japan we have a partnership with Shinko for substrate manufacturing and module assembly. This is our team. Our product module looks like this, with the optical engines all separate. We design and manufacture this ourselves, of course, but we will also utilize the resources of the affiliates we invest in and fully leverage our comprehensive capability, with Accton assembling the box. Through this partnership, the overall design and the key parts will be done by us, and the overall coordination with the partners will also be done by us. PEC-2 will start from next year.
Beyond that, the next step is something I don't think many players — very few, even around the world — are talking about yet. NTT's research laboratories have unique, proprietary technology: a compound semiconductor, a very thin device. Using this, the optical engine can become a tiny 8 mm x 10 mm chiplet, which can be attached to a GPU and installed in the package, making the package-to-package communication optical. As for low power, I mentioned 3.9 pJ per bit, but this will go down to 0.26 — roughly one-tenth or less. We are working on commercializing this too. And this is the demonstration result: a very thin membrane optical semiconductor, where the electrical waveform is converted to an optical one, at 0.26 pJ or 0.14 pJ.
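The two per-bit energy figures quoted here can be put on a wattage scale. A back-of-the-envelope sketch: the exact ratio of 3.9 to 0.26 pJ/bit is about fifteen-fold (a bit better than the rounded "one-tenth" in the talk), and reusing the 102.4 Tbps capacity from the switch slide purely as a scale reference — our own illustration, not a product specification — shows what that means in watts of electro-optic I/O:

```python
PJ = 1e-12    # one picojoule in joules
TBPS = 1e12   # one terabit per second in bits/s

pec2_pj = 3.9        # pJ/bit, PEC-2 commercialization target
membrane_pj = 0.26   # pJ/bit, membrane-device demonstration

# Exact improvement factor between the two quoted figures.
print(f"reduction factor: ~{pec2_pj / membrane_pj:.0f}x")  # ~15x

# Hypothetical wattage at the 102.4 Tbps switch capacity, reused here
# only to give the pJ/bit numbers a tangible scale.
traffic_bps = 102.4 * TBPS
for name, e_pj in (("PEC-2 (3.9 pJ/b)", pec2_pj),
                   ("membrane (0.26 pJ/b)", membrane_pj)):
    watts = e_pj * PJ * traffic_bps
    print(f"{name}: ~{watts:.0f} W of link I/O power")
```

At that traffic level the 3.9 pJ/bit target works out to roughly 400 W of conversion power, and the membrane-device figure to under 30 W, which illustrates why the chiplet generation matters for in-package GPU links.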
We are working toward practical use so that we can keep up with global peers and give momentum to this industry. I'm sorry for talking a bit long, but the full-scale implementation of AI is coming. A huge number of processors need to be connected by a network to realize the computing, and optical communication can make those connections with low power consumption. We will leverage our extensive experience in optical communications to take on global challenges. Thank you very much.

Thank you, Tomizawa-san.
[Foreign language]
From here, we would like to accept your questions. Vice Presidents Tomizawa and Hoshino will both answer. Due to time limitations, we will take two questions, from this venue only.
If we are not able to take your question now, we have a separate overall Q&A session at the end of the program, so please ask it at that time. Those who have a question, please raise your hand, and the staff will bring a microphone to you. Once the preparations are completed, we will take your questions. Just a moment, please. Thank you.
[Foreign language].
We are ready. So once again, Vice Presidents Tomizawa and Hoshino, thank you for joining this Q&A session.
[Foreign language].
For those of you at the venue, we would like to take your questions, so please kindly raise your hand. The gentleman at the back of the room — we will bring you the microphone. Just a moment, please. Thank you very much.
My name is Ito, from the editing office of NewsPicks. Thank you very much for a very easy-to-understand explanation. You will be moving on to PEC-2. Until now, the media, myself included, had the image of PEC-1, and from there it is going to move into optical computing; there are a lot of expectations, and I felt quite a lot of potential. I have two questions. First, what you have to resolve is probably energy saving — lower power consumption at data centers and elsewhere. Other players feel the same way, and even without optical computing using IOWN, there are companies trying to resolve this challenge with other technologies. My first question is very straightforward: who are your competitors?
The second question is about differentiation, your competitive edge. You mentioned the difference between the soldering type and the socket type. IOWN is your unique concept, but if competitors catch up with you and come out with similar technology by changing the manufacturing process — soldering or socket — can others copy your products? Moving on to PEC-3 and PEC-4, from your perspective, which part is your competitive advantage, and what is the core of your technology?
May I? Thank you very much for your question. Regarding competitors: as you have probably already heard in the news, NVIDIA, and Broadcom, who will be making a presentation later, have their own proprietary solutions. So if I can call them competitors, yes, there are competitors, as you will hear later.
But as multi-suppliers, integrated connectivity must be secured, and we should develop together, or else the multi-supplier environment sought by end users and the horizontal deployment of the technology and products will not be realized. So on the right-hand side we shake hands, and on the left-hand side we compete — I think that is the situation. NVIDIA is probably the largest player in the world, and NVIDIA is working on reducing power consumption at data centers; what we are trying to do, they will probably try to resolve with a new process. What is our advantage? Technology-wise, they are not doing the socket. Last week we actually met with them, and they were quite interested in our approach.
So it seems there is room for information exchange, and for them to use some solution we can provide. 200 Gbps per lane is what they are able to make, but with their silicon photonics, that is probably the limit. When it goes up to 400G, the speed doubles, so it seems they are facing something of a wall right now. So we are starting to have quite close communication.

So that means both companies will utilize their respective strengths and technologies and approach resolving the challenges together. Is that the correct understanding?
Yes. And at the very end, at the fiber, we have to couple it with the optical technology, and the question is how quickly we can realize that. That is the technology they want to see.
That part is our proprietary know-how and technology, and that is where we can actually contribute to them.

Thank you. Also, compared with other companies, what is your core strength?
Well, as I mentioned: the 400-gig-per-lane technology, the optical fiber coupling technology, the yield — we have high-throughput manufacturing technology. And regarding the design of silicon photonics, we have a slightly different way of doing it. In semiconductor processes there is usually a design kit — like Lego blocks that you combine, and you can get your silicon that way. But we actually optimize each one of those components ourselves, so the performance is different from others'. That is our core.
So there are companies that use the standardized design kit, while we optimize each component with unique technology. There are areas where one can win decisively and areas where one cannot, so we need to identify the areas where we have the advantage, win there, and enjoy that advantage.
[Foreign language]
We will take one more question. The gentleman at the front with the jacket, please. My name is Takayama, a freelance journalist. I have a question about the market: the size of the IOWN PEC market. It is not just PEC devices but also the upstream; I think the market is huge. Data centers, base stations: PEC will be used in all these areas. What kind of global market size are you envisioning, and how much of it in Japan?
The PEC device will be the key, the core, I think. When NTT Innovative Devices was established, Tomizawa-san said that a market of about JPY 100 billion would be captured by Innovative Devices. What is your projection for the device market? How much of it will Innovative Devices capture, and how large do you see the market in Japan and worldwide? Thank you. Thank you very much for the question.
May I? As for revenue and business size, we will talk about that in the overall Q&A later. In terms of units, more than one million pieces a year need to be manufactured, which is a level we have never reached before, so we have to prepare our manufacturing structure. As for the Japan-overseas breakdown: first, we will have this used by the U.S. hyperscalers, the data centers of the former GAFAM. We need to offer a big volume first so that the price comes down and it becomes economically viable, and then deploy it in the domestic Japanese market. So first of all, I think it will be pretty much hyperscalers. Does this answer your question? I'm sorry if that is a bit ambiguous. Yes, thank you. That was about the device; now about the upstream. Data center architecture and base station design will probably change. NTT DATA is now the third-largest player in data centers, so what is your expectation? With PEC, the market size may change dramatically. What is your view on that?
Hoshino-san, if you could. For data centers, it is not just PEC: distributed processing becomes possible, and the AI business may expand. We have to expand our AI business.
In closing, Oonishi-san will talk about some examples of actual collaboration, so I hope you will listen to that. And if I could answer your question again in the overall Q&A session, I would appreciate it if you could ask a follow-up question then. Yes, thank you. Thank you. We will close this Q&A session for now. Thank you, Tomizawa-san and Hoshino-san.
[Foreign language]
[Foreign language].
Thank you.
[Foreign language].
We would now like to prepare for the next session, so I kindly ask you to wait for a moment. Once the preparation is completed, we will resume.
[Foreign language].
The next session will be conducted entirely in English. For those of you who would like to use the simultaneous interpretation service, please set your channel to one. Thank you.
[Foreign language]
Thank you very much for waiting. We will resume the session.
Next, we will proceed with the session titled "Low Power Consumption Devices Required in the AI Era," featuring Mr. Ram Velaga, Senior Vice President and General Manager at Broadcom; Mr. An-Jye Huang, Co-Founder and Senior Advisor at Accton Technology; and Mr. Tomizawa, with Mr. Michinori Sato, Executive Officer and Asia-Pacific Communications, Media and Entertainment Sector Leader at Deloitte Tohmatsu Consulting LLC, as moderator. This will be a presentation in a panel discussion format. Please note that this session will be conducted entirely in English. Simultaneous interpretation receivers are available for those who require them; channel one is Japanese. Please take your seats.
Good afternoon. Thank you for your time this afternoon. My name is Ram Velaga. I'm hoping this clicker will work. It's not working.
Hold on a second.
I will facilitate. Oh, yeah. Hello everyone. My name is Michinori Sato from Deloitte Tohmatsu.
I'm responsible for the telecom, media, and entertainment sector across the APAC region at Deloitte. Having closely followed the IOWN initiative for years, including the great performance by Perfume at the Kansai Expo, I'm very much looking forward to facilitating meaningful insights from all of you, and for all of you, today. We have already heard from Hoshino-san and Tomizawa-san about the future outlook of PEC and CPO switches as part of their strategies. Today I would like to hear deeper, real voices from the three companies, which are deeply involved in realizing the IOWN initiative, especially around low-power-consumption devices. Today we have, as introduced, Ram Velaga-san, A.J.-san, and Tomizawa-san. To begin, I'd like brief self-introductions from both companies, including their product strategies, the history of their collaboration with NTT, and their expectations for NTT.
So firstly, over to you, Ram-san.
Thank you very much. Good afternoon. One of the things I wanted to share with you is research published by McKinsey about the next few years: there is almost 124 GW of power coming online to support everything that is happening in AI. That 124 GW corresponds to roughly over 70 million GPUs or XPUs. Now, to give you a couple of data points for context, some of the large language model companies are, at a minimum, expecting their compute capacity to grow by about three times every year; some of them expect it to grow much faster than 3x every year.
But if you do the very simple math, if it grows by 3x every year for the next five years, you're basically looking at over 200x today's capacity. This is a significant amount of compute being added in the world to support large language models. Now, with these large language models, one of the things you realize about machine learning is that it is a massively distributed computing system, right? Because any one GPU or XPU, what we call the accelerator, cannot do the work alone. You have to have many thousands, many tens of thousands, of these accelerators connected together. And when many of these accelerators are connected together, the network becomes the computer.
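The growth arithmetic here can be checked in a few lines (a sketch; the 3x-per-year rate is the speaker's stated minimum assumption, not a forecast of ours):

```python
# Compound growth of AI compute capacity under the 3x-per-year assumption.
growth_per_year = 3
years = 5

multiplier = growth_per_year ** years  # 3^5 = 243
print(f"After {years} years at {growth_per_year}x/year: {multiplier}x today's capacity")
# 243x is consistent with the "over 200x" figure quoted in the talk.
```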
This is something Sun Microsystems trademarked 20-plus, almost 30-plus, years ago: "the network is the computer." This is a picture of Google's data centers from the early 2000s. When they built a very large distributed computing system, which is search, they took a lot of CPUs and connected them with the network. In that case, 20-plus years ago, the network was all based on copper, because the speeds coming out of those CPUs back then were a couple of orders of magnitude less than the speeds we expect coming out of accelerators today and in the next couple of years.
So when the network is the computer, and you look at these accelerator deployments, as Mr. Tomizawa mentioned, you can think of the network as three different kinds: one that sits inside the rack, called scale-up; one that connects different racks, called scale-out; and one that connects between data centers, called scaling across data centers, or scale-across.
Now, when you look at these accelerators, one interesting thing is that each of them has HBMs attached to it. You've all heard of HBMs. Today, the shipping technology is HBM3; tomorrow, it will be HBM4. Each of these HBMs has roughly 10 Tb of bandwidth to the accelerator. So when an accelerator has four HBMs, it has 40 Tb of HBM bandwidth attached to it. Next year, there will be eight HBMs, each running at close to 12 Tb.
So you may have two accelerators, each with 100 Tb of HBM bandwidth attached to it, and both of these accelerators now want to talk to each other. That is the amount of bandwidth that is going to be connected between these different accelerators. Just to give you a sense of how large this is: today, a server that goes into the data center will likely have a NIC at 50 Gbps. An accelerator, by contrast, could have 10 to 20 Tb of bandwidth coming out of it. You're going from 50 Gb to 10-20 Tb.
So you're talking about almost a 200x increase in I/O coming out of an accelerator compared to a CPU. This is a lot of bandwidth. Now, the other thing that's happening is that when you're building a large language model, you want to create a cluster of accelerators all working together, right? When it fits inside a rack, it's called scale-up, and typically today a scale-up is fewer than 100 accelerators connected together; you will typically hear a number like 36 or 72. But these large language models actually want the size of the scale-up to grow beyond 36 or 72. They would like it to be 200, 500, and in the future over 1,000.
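The bandwidth figures quoted above can be tallied directly (a sketch; the stack counts and per-stack rates are the approximate round numbers stated in the talk, not datasheet values):

```python
# HBM bandwidth attached to one accelerator, using the talk's round numbers (Tb/s).
hbm3_per_stack_tb = 10   # ~10 Tb/s per HBM3 stack today
hbm4_per_stack_tb = 12   # ~12 Tb/s per next-generation stack

today_total = 4 * hbm3_per_stack_tb   # 4 stacks  -> 40 Tb/s
next_total = 8 * hbm4_per_stack_tb    # 8 stacks  -> ~96 Tb/s, i.e. "about 100 Tb"

# I/O jump from a conventional server NIC to an accelerator's egress bandwidth.
server_nic_gb = 50                    # 50 Gb/s NIC on a typical server
accelerator_io_tb = 10                # low end of the 10-20 Tb/s range
increase = accelerator_io_tb * 1000 / server_nic_gb  # -> 200x

print(today_total, next_total, increase)
```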
When you think about that problem: today, a scale-up inside a rack is all connected by copper, and copper is fine when you're putting fewer than 100 accelerators inside a rack. But when you're connecting 200, 500, or 1,000 accelerators, it doesn't fit inside one rack anymore; you have to connect multiple racks, and then you can no longer use copper. You have to use optics. The combination of tens of terabits of bandwidth from each accelerator, multiplied by 200, 500, or 1,000 accelerators, and the fact that you cannot use copper anymore, creates a massive opportunity for optics to be the interconnect of choice. That is why you've heard other companies talk about co-packaged optics as the way forward.
Now, you say, "Okay, there's a lot of bandwidth needed, so how do you solve this problem?" First, you need to build switches that have the bandwidth. Broadcom today has a device called Tomahawk 6, a 100 Tb switch device that will be shipping in very large-scale production next year. It's great to have the switch chip, but you then need it connected to other switch chips; all of these need to be connected together if you're trying to create a scale-up cluster of, say, 512 or 1,000. So what you should take away from here is: there is a lot of bandwidth coming from each accelerator, and you'll want to connect at least 200, if not 500 or 1,000, accelerators.
Copper is no longer going to work, so how do you do this? Obviously, you also have to connect across data centers, not just connect many racks inside a data center. So switches are available, and Broadcom does a reasonably good job building them for inside the rack, across racks, and across data centers. But what is really important beyond building switches is providing the interconnections between them, because copper is no longer sufficient to create the interconnect between switches for these very large clusters, either inside a data center or between data centers. That is where our partnership with the IOWN team comes in; it goes back a few years now, when we both realized that this is going to be a very large market.
If compute is expected to grow at least 3x every year for the next five years, meaning about 200x larger than today, and you need to connect across racks, optics is the only way to do it. Broadcom took the approach of picking some of the best partners in the industry, and the leading one is the NTT IOWN team. We said, "Let's work with the NTT IOWN team to build a co-packaged solution," because when you're building these co-packaged optics solutions, a couple of things are extremely important. One: people who are leaders in technology, who can build very high-bandwidth devices at low power but with significant reach.
Number two: being able to produce them with very high quality and yield, because the volumes expected and needed in this marketplace are very, very large. And number three: somebody who is able to scale the production. We believe the IOWN team is very capable of delivering on all three: building the best technology, building it with the quality and yield that are needed, and then being able to scale to very, very large volumes.
Now, what's interesting is that we've always asked, "Okay, will customers deploy co-packaged optics as a solution in these very large data centers?" And last week, Meta, which runs some of the largest data centers in the world, actually published a document saying they evaluated co-packaged optics solutions, ran them for about a million hours, and found they performed better than pluggable optics. So customers are now starting to feel very comfortable and confident about building large data centers with co-packaged solutions. Now, there was a question to Tomizawa-san earlier: "Who is the competition in this space for NTT IOWN?" And he very honestly listed the different players who could possibly play in this market.
But one thing I would say is the following: when the market is as large as it is going to be in the next couple of years, the end customers, the ones building the data centers, whether Meta, Google, or somebody else, want to see multiple players in the market; if they don't, they lack the confidence to build a large data center on that technology. So for silicon photonics to be successful, you need multiple players. But we are extremely confident that NTT IOWN will deliver the best technology and quality and will ramp in volume. And if they deliver on it along with the others, the market is big enough for multiple players to enjoy.
That is why we have partnered very strongly with Mr. Tomizawa's team for a very long time. Lastly, our relationship with the NTT IOWN team is not just the last three or four years working on this particular product; it stretches back a decade-plus. What we have seen is consistent delivery of products with high quality and high volumes, and with the performance the marketplace expects. We are extremely happy to have the 100 Tb switch shipping in very large volumes next year. For it to actually get deployed, we will have very strong partners in Mr. Tomizawa and NTT IOWN, bringing very low-power solutions and, obviously, very disruptive cost structures to our customers.
So with that said, I want to thank you and hand it over to A.J.-san.
Thank you, Ram. So over to you, A.J.-san.
Good afternoon. I haven't done this in the last 20 years. I think I'm the oldest man in this room, and in front of Ram and Tomizawa-san I cannot talk too much about technical matters, so in my section I want everyone to relax; it's not too much technical stuff. I just want to share with you, and answer the question of, why we want to do this. Can we start from this page? This is our new factory. We built it to produce Ram's products, so everything we build in this building uses Broadcom chips. It took us about two years to build: we started four years ago, and the building was completed two years ago. Now the whole building is full, producing AI-related servers and switches. And what I want to talk about is my dream.
Why do we want to do IOWN? I have known Tomizawa-san for many years, and I met Ram Velaga-san and Shimada-san about two years ago. They gave me the assignment, and I attended the NTT R&D Forum, where I realized how much NTT has invested in optical. I want to say that without Broadcom, without Ram, there would be no Accton today. And I want to say that with NTT and Broadcom, Accton will rebound. Our revenue last year was $3.4 billion; this year we are shooting for almost $7 billion, almost double. I just want to echo what Ram said about the market size. Why do we want to do this? Since day one, our vision has been access: we want everyone to have access to computing power, everyone's PC or server connected. Our vision is access.
Our mission, basically, is to make the product and technology very cost-effective. So it's cost and connection; that's our company. What I am trying to say is that without Broadcom, there would be no Accton today, and with Broadcom and NTT, the new Accton will rebound. The last one you see is NEL, where I have been working with Tomizawa-san for many years; when I need optical help, I come to him. So when I heard Broadcom and NTT were doing this CPO, I was very excited, and I immediately jumped in and built a box. I think many people know this already, so I won't repeat it. But the most important message I got from the IOWN book is that everyone can be connected to AI computing, in urban and rural areas alike.
And AI should be fair to all citizens. This is what I learned from Shimada-san the last time I met him, and I think it is very, very important. To realize this, we need very advanced technology, which comes from Broadcom and NTT, and we play the role of building the box and reducing the cost. So let me talk about my dream. When I learned about IOWN, I began thinking about how we can apply this technology to Taiwan. Taiwan and Japan are very close, and historically the two countries have a very good friendship. So I asked Shimada-san, and all my good friends at NTT, to help me see if we can implement this IOWN technology in Taiwan. The purpose is to connect all the different AI servers in different locations, so you don't need to build a very huge data center.
You can connect even single racks in different places, and together they can be seen as one data center. That's the purpose, and that's what we want to try. I saw this last time, and we will continue to enhance it. We hope Japan and Taiwan can collaborate on technology in this AI application. And of course, the most important thing is talent, so that talent can share knowledge through these AI servers. So we started the experiment: we have four locations connected by IOWN, starting from Tainan. This is a science park; if we can have this science park connected by IOWN, then we can have Tainan and Taichung, all the different cities, connected together. So this is a use case.
This one is for enterprise, and this is a case for hospitals: now the hospitals can access their data and share AI computing power. Today, every hospital buys one or two NVIDIA racks, but they cannot connect them together. The IOWN layer is critical and important here, and it adds a lot of value to AI servers. This is a real case from the science park in the southern part of Taiwan; from this science park, we will connect all the universities. Actually, in Taiwan, besides the telcos, the high-speed railway, the electric company, and the Ministry of Education all have fiber. We want to maximize the value of that fiber by using IOWN technology.
Of course, everyone is talking about this globally, and I hope we can contribute a little bit, starting from Taiwan and starting from our manufacturing. Thank you to NTT and thank you to Broadcom; they continue to invest in advanced technology, and we build this rack. IOWN will connect to this rack, so you can plug in different memory, AI, or servers. This is what we are doing. We take the open-architecture approach; this idea I also learned from the IOWN vision. They want to enable more different technologies and vendors to contribute, to make IOWN very successful. So this is our small experiment, and so far it's doing okay. Of course, there is still a lot of technology we need to invest in with NTT and Broadcom.
One more example: we are trying to increase the port density. How do you make one optical lane transmit more data? Because laying optical cable is expensive, we try to maximize the capacity by about 32x. Again, this is NTT technology, and Mr. Park will build it. Actually, this was an assignment from Tomizawa-san. We met about a year ago, and he started to draw his idea on the whiteboard. I'm a little bit crazy, and I said, "We don't have this technology," so I found a company in Korea called InLC and acquired them to make the dream come true. Ram has already explained this very well, and we will make this product very successful. Again: cost-performance and speed. Thank you, thank you to everyone, to Tomizawa-san, NTT, Broadcom, Ram.
Without them, we cannot have the good product. Thank you.
Thank you, A.J.-san. And again, thank you, Ram and A.J., for sharing your very clear strategic directions and tangible business trends, including the major players in the market, your expectations, and your aspirations with NTT. I again felt that technologies including CPO are critically important for realizing higher bandwidth, lower latency, and very low power consumption. Next, I'd like to pose a few questions from the AI market perspective. The first is about the additional growth market for PEC and CPO, especially the data center market. It is said that more than 60% of data center investment is allocated to ICT equipment, including servers and switches. As the data center market continues to grow, these related markets are also expected to expand.
Given this trend, with the advancement of PEC, how large a market scale and share do you expect to capture in the future? Firstly, Ram-san, what do you expect?
Okay. I think the way to think about it is that there is a relationship between compute growth and the networking that goes with it. If compute grows, and this whole AI market is growing, networking will grow roughly in proportion. And especially, as we discussed, as the scale-up part of AI scales, all of that will be supported by the switch and the switch bandwidth. Now, from the optics side, the growth will be even faster, because today most of the first-hop connections between the accelerator and the switch are copper, and you can no longer support copper when you're trying to do scale-up on the order of hundreds up to 1,000 devices. So you basically have a couple of things going on.
Compute is growing; the network is going to grow slightly faster than compute, because scale-up will happen on technologies like Ethernet, which it does not today; and optics will grow even faster still, because today scale-up happens on copper, and that too will move toward optics. So there are three compoundings happening here.
Thank you. So Tomizawa-san?
Yeah. Thank you very much. According to market analysis, within ICT equipment, 30%-40% will be for servers, and roughly 10%-20% for switches. My guess is that CPO will start from 100 Tb, very high-end switches. According to my information, or my opinion, the number will be several hundred thousand pieces per year. Accordingly, we need optical engines, 16 per switch module, which makes several million pieces per year, probably around 2028 or 2029, or 2027 or 2028. With our manufacturing capability, I think it is doable by using multiple lines. This is my perspective.
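Tomizawa-san's volume estimate is simple multiplication; the sketch below uses an illustrative placeholder for "several hundred thousand" switches (300,000 is our assumption, not his figure), with his stated 16 optical engines per switch:

```python
# Rough CPO optical-engine volume, per the figures given in the session.
switches_per_year = 300_000   # "several hundred thousand" high-end CPO switches (illustrative)
engines_per_switch = 16       # 16 optical engines per switch module

engines_per_year = switches_per_year * engines_per_switch
print(f"~{engines_per_year:,} optical engines per year")  # ~4,800,000: "several million pieces"
```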
Sounds great. So the A.J.-san?
Yeah. I would look at it from another angle. We're talking about scale-up or scale-out, and the problem in scale-out and scale-up is that we have to consider one factor: protocol conversion. When you go through protocol conversion, latency happens. The goal is to pursue optical from chip to chip and from GPU to GPU. I look at CPO from a broad view, and from this angle, CPO will continue to grow. For example, the next step NTT will probably take with Broadcom is something like a pluggable module transceiver alongside CPO. Answering your question about the size: I look at the overall size, not just one component; from chip to chip, board to board, everything. CPO will be a very important technology. That's how I look at it.
Thank you. Okay. The second question is about edge AI. Currently, most AI processing is performed in the cloud; however, the proportion processed at edge sites is gradually increasing, and it is forecast that around 50% will be processed at the edge by the late 2020s. Under those conditions, in what areas do you see CPO technologies or your company's products making significant contributions? This time, Tomizawa-san first.
Thank you very much for your question. Yes, we are aware of the trend toward edge processing. Looking at AI processing, I think it can be categorized into two parts: one is machine learning, and the other is AI inference. AI inference can be done at the edge, partly because of the requirement for low latency. However, we still need machine learning, which is very heavy processing, and for that we still need data centers and the cloud: machine learning across GPU to GPU, server to server, data center to data center. So we still need our CPO solution. The requirements may be a little more relaxed than today's, but you saw how steep, how skyrocketing, that curve was, so I think there will be no change in the importance of the CPO solution.
Okay. So A.J.-san?
You're talking about? You're asking about the edge?
Edge.
Edge.
Edge.
Edge AI. We have a subsidiary named Edgecore; let me promote my company's name. When we created the company, we said there is no fixed core: no permanent core, and no limit to the edge. Looking at it in terms of training versus inference probably makes it easier to answer your question. I think edge AI will be everywhere; it's about how good we are, again, at creating the most cost-effective devices. We are working on another project, an access router combined with a camera, with one of our Japanese customers, at a historical site. The camera watches what happens and helps prevent fire. So I believe the edge will be everywhere, and inference will have different types of applications. Again, there is no limit to the edge.
That's my guess, and that's why working in an open partnership is so important. It reminds me of what happened with Tesla. Tesla is so successful, but when Elon Musk started Tesla, he opened all the IP. He said he wanted more EVs, not only his own, which echoes what Ram just said: more competition, more openness. If we can make this happen, I think edge AI will be unlimited. Am I right?
Great. Great. So Ram-san?
I think it's important to realize two things. One: today, when you hear a lot about cloud computing, cloud computing is a virtualization use case. Think about what happened in cloud computing: people said, "Hey, my CPU is not being fully used, so I want to run multiple applications on my CPU, so I virtualize it." It is essentially running multiple applications to increase the utilization of a CPU. Machine learning is the opposite: no single accelerator is large enough to run the workload, so you have to have many of them working together as if running one large system. That is what high-performance computing was, too.
So first, realize that we are moving from a world of cloud computing, which is virtualization-dominated, to a world of machine learning, which is a distributed computing system, and in a distributed computing system the network plays an extremely important role. Now, the distributed computing system could be doing training, in which case it runs in very large data centers in the middle of nowhere. Or, eventually, these distributed computing systems are doing inference, because even for inference they have clusters of 100, 200, 500-plus, even 1,000-plus accelerators. And there are two kinds of inference: inference that just gives you an answer, and inference that feeds what you call reinforcement learning back into training.
And really, a lot of people are doing inference for reinforcement learning, because the whole idea there is that we have run out of useful information on the internet, so you have to create a lot of useful information. So there are different kinds of inference. But long story short: we are in the world of distributed computing, whether it's inference at the edge or inference someplace else. The network is the computer, copper is not going to suffice going forward, and you need optics to solve this problem: something that is cost-efficient, power-efficient, and scalable in volume, to bring together all the compute that's expected, whether at the edge or in the middle of nowhere in a desert.
Thank you. Okay. The final question is about business partnerships. Over the past few weeks, there have been several announcements of partnerships between semiconductor-related companies in Japan and the United States. As you move forward with market deployment of CPO technology, which players do you believe you should strengthen partnerships with, besides your three companies? And additionally, what aspects of your own company would you like to highlight to those potential partners? Let's start with A.J.-san.
It's a very tricky question. To me, partnership is very important: trust, capability, and a long-term relationship. That is not easy to find. Maybe you can find a startup with good technology, but they don't have a long history or partnership relationships; or maybe they have very narrow technology, narrow coverage. To me, NTT's and Broadcom's technology is enough to keep me very busy.
Okay. I think we are living in a world right now where the most cutting-edge, and some of the hardest, technologies are being built. We would like to partner with somebody who has that cutting-edge technology and is bringing it to market at scale, and obviously with the quality that's needed; that's one. And two, and you might not like this answer, I want to partner with somebody who has a lot of money. Because if you think about what's happening in this world of AI, people are talking about $500 billion-plus of CapEx being spent just by four or five of the cloud players today, and about 200-plus GW of data centers.
You're talking about trillions of dollars needed, all the way from the people building these mega-scale data centers down to the people building fabs, coolers, chillers, factories, and everything else. For this AI world and everything we've talked about today to be realized, you need people who have a lot of money. I think some governments have a lot of money, so I'd like to partner with them along with these companies.
Thank you. What do you think, Tomizawa-san?
Yeah. Thank you very much. Definitely, we need collaboration with the end users, for example, hyperscalers as well as the neocloud players. Without this kind of collaboration, it is very difficult to move forward. And actually, we are already communicating with potential end users directly, sometimes jointly with Accton, jointly with Broadcom. This kind of early engagement is indispensable for launching into a new market. Second, we definitely need collaboration with the processor players. Actually, we have already gotten started with the Big Tech processor players, because for PEC-2 or PEC-3, we should consider integration with the processing circuit, and for that purpose we need early engagement with the processor players as well. Number three, as I described in the Q&A session, there is potential competition and collaboration with our potential competitors.
Because, as Ram said, the market is huge, and every end user needs multiple suppliers. So in that case, we need agreed interfaces and agreed specifications in some standardized way. These three are very important collaborations beyond the three companies here.
Okay. Thank you for the clear answer. So we have heard very insightful voices from the three companies on the low-power devices required for the AI era. In particular, we learned that against the limitations of electronic processing, PEC and CPO switches are mandatory to overcome those physical constraints. And on top of that technology advancement, the ecosystem, meaning collaboration across stakeholders large and small depending on the situation, is very important in this fast-changing ICT market. So again, thank you, Tomizawa-san, Ram, and A.J., for your very insightful real voices. With that, I'd like to close this panel and take questions from the audience.
Hi. Thank you very much. From here, we would like to take questions from the floor. How we will be taking questions is what I would like to explain. First of all, for the questions from the media-relevant people who are here at the venue, I'd like to receive the questions, followed by questions from the investors from this venue, and after that, for those of you who are connected to the web system online remotely, we'd like to take your questions. For those of you in this venue, the staff member will bring the microphone, so please raise your hand. For those of you who are participating remotely online, those who have a question, please press the raise your hand button on the web system, and once you receive the request from our side to unmute, then please unmute and ask your question.
After you ask the question, until the answer from our company ends, please remain unmuted. So thank you for waiting. We would like to take questions from the media people who are present here at this venue. You can ask your question either in English or Japanese. For those of you from the media, are there any questions? Thank you very much. Then from you, please, over here.
From Nikkei, I'm Takatsuki. Hoshino-san made the presentation for NTT. Why is Broadcom collaborating with you this time on the production of a PEC device, when they have their own proprietary solution under development? What meaning or significance does this have? I would like to know that.
To Ram-san.
Yeah. Yeah. Look, I think if you look at this switch, this is what we do. It's a 100 Tb switch. And when you look at it closely, you'll basically see it has a lot of I/O coming out of this. And when we partner with the IOWN team, we look at it as we want to make sure Broadcom wins in the switching business. And for us to win in the switching business, we very, very strongly believe in the idea of openness. So Broadcom never has actually created vertically integrated solutions. Every time we've created products, we made sure they're very clean, open, standards-based interfaces, and that multiple partners can actually build their solution and ecosystem around us. So what we tell our customers is, let the best product win.
Because if you think about it, we build our switches, but if our optical technology is not the best in the world, we don't want to lose the switch business to somebody else because we don't have the best optical technology in the world. For us, the optical technology is an enabler, and it's just one part of the ecosystem. So as long as the IOWN team is able to produce a device that's at least as good or as better than us, we want them to win. That's extremely important. And because at the end of it also, our end customer looks at it and says, if they only have one vendor, they don't want to buy the solution. Because they'll think of it as, hey, Broadcom is vertically integrated, and if I work only with Broadcom, I cannot make this supply chain very scalable.
So it's in Broadcom's interest as a company to make sure the ecosystem is very, very successful. Tomizawa-san and NTT IOWN have access to all the same specifications on our switch that our internal team has access to, which is why their ability to win or lose is purely a function of their technical capability; it has nothing to do with the fact that Broadcom has both of these products. One quick thing I'll also mention about Broadcom: we are a large company, but inside the company we have very distinct technology business units. My business unit builds the switches; another business unit builds the optics. From my business unit's standpoint, I don't care whose optics win. I just want to make sure I have the strongest ecosystem surrounding my switch, and that actually creates a very healthy ecosystem.
[Foreign language].
Thank you.
I hope that answers your question. Any other media participants in this room, please wait for the microphone.
Technology journalist. I have two questions, for Ram-san and Huang-san. First, Ram-san mentioned the Tomahawk 6. My question is about the Tomahawk 6: it connects 512 XPUs. Is 512 a limitation on the number of XPU connections? Do you have any prospect of a future breakthrough beyond that limitation?
Okay. So the question was, when we showed something like a Tomahawk 6, why do I have a 512 number? When you're doing a scale-up cluster, you want to make sure there's only one switch hop from one XPU or accelerator to another one. You can always create a larger cluster by creating a two-tier switching architecture. But for scale-up, you want to today keep it to a single tier. That means just one switch hop, accelerator, switch, accelerator. Then you look at the switch and say, how many links does this switch have? This switch today is 100 Tb, and each link is 200 Gb. So you have 512 200 Gb links. So I can connect to 512 XPUs.
But when I connect to 512 XPUs, because I cannot put them inside one rack, I have to put them across multiple racks, because each XPU takes a lot of power. So I'll put them in multiple different racks, and I'll have these switches connecting directly with one hop. So that's what limits it to 512. We have 512 200 Gb SerDes, and that gives you 100 Tb. And then, as you can imagine, in the future, we'll have higher bandwidth. That means you'll have more 200 Gb lanes, and then you will increase the cluster size of the scale-up.
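The sizing arithmetic Ram walks through can be made explicit with a quick sketch. This is only an illustration of the figures quoted on stage: the nominal Tomahawk 6 capacity is 102.4 Tb/s, rounded to "100 Tb" in the talk, and the function name and one-link-per-XPU assumption are ours.

```python
# Single-tier scale-up sizing, per the figures quoted on stage: a ~100 Tb/s
# switch (nominally 102.4 Tb/s) with 200 Gb/s links, one switch hop between
# any two accelerators, and one link per XPU.
def max_xpus_single_tier(switch_bw_gbps: int, link_rate_gbps: int,
                         links_per_xpu: int = 1) -> int:
    """XPUs reachable in one hop: accelerator -> switch -> accelerator."""
    links = switch_bw_gbps // link_rate_gbps
    return links // links_per_xpu

print(max_xpus_single_tier(102_400, 200))  # 512, the cluster size quoted in the talk
```

Plugging a higher switch bandwidth into the same formula shows why the 512 limit grows with future switch generations, as Ram describes.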
Thank you very much. The other question is for Mr. Huang. I am very impressed with the network connecting the university, the hospital, and the data center. But why did you select that hospital? This is very different from Japan, because many hospitals here have their own individual specifications, so connecting them is very difficult, very tough. So I'm very surprised by how easily you connected the hospitals to the data center and the university. What is the reason you selected a hospital connection? Hospitals hold a lot of privacy-sensitive information. It would be very difficult to do in Japan.
Thank you. This is a very good question, and actually a very pointed one: why did I choose the hospital? The reason is that hospitals are very critical in our society, and we want to help the hospital utilize more AI. I think this is happening to every hospital: hospitals, the medical world, want to combine AI with medicine. This is what is happening now. Of course, in my experience, the gap between the two parties is very, very big. Medical doctors don't want to learn IT, and IT people don't understand the medical doctors' concerns. You are right, medicine has a lot of regulations and rules. It's not easy. So what we are trying to do is build the infrastructure first. This is what I mentioned in my earlier presentation; probably I didn't make myself clear. We want to build infrastructure.
Ram mentioned that AI is networking, and networking is AI. Without networking, AI becomes very difficult. So what we try to do is build infrastructure. The same concept applies to Taiwan. I try to explain to the people I know at high levels of government: no AI infrastructure, no AI. So that's the first part of the question. And the second part, why did I choose this particular hospital? Actually, because I'm a member of this hospital. This hospital belongs to a charity foundation, and I'm a member of that foundation. So I just try to do everything I can to help them build the infrastructure. This hospital also has a hospital in a very remote area, a very small hospital that serves that remote area.
So I want that small hospital to also be able to use the AI in the main hospital. That's where the idea came from. Yeah, very good question. Thank you.
Thank you very much. Now I would like to take questions from the investors here at the venue. Any questions? The person in the back row. My name is Tokunaga from Daiwa Securities. I have two questions. The first question, sorry if I'm repeating: comparing Broadcom's and NTT's solutions from a competitive perspective, what are the strengths of each? Is there a difference in approach or strategy toward CPO? Can you comment on the competition? The other is about production capacity. In Tomizawa-san's presentation, you mentioned 5,000 units per month as the capacity. If there is larger demand, I think you need to speed that up.
What is the bottleneck: a customer issue, or a production line issue? Can you answer those two questions? In Japanese? Yes, you can answer in Japanese. The difference between us and Broadcom, as mentioned in the presentation, is that our optical engine can be replaced, meaning that if one optical engine fails, the whole product does not need to be repaired. So it's a low repair cost, and we can also flexibly respond to various media requests; if a rack needs to be transferred, or the configuration changes, we can respond to that as well. That's our strength. On the other hand, Broadcom's timing was slightly ahead of ours. Maybe that is the difference. And also, sorry to keep talking about our advantages, but at 400G per lane, we're already working on the next-generation technology.
We are receiving inquiries from various customers. Up through PEC-4, no other company has a roadmap that reaches that far. It's one line at 5,000 units per month currently. If we add a second, third, or fourth line, capacity multiplies by two, three, four. Basically, we believe we can fulfill the demand that is visible right now. The number one challenge, which is actually also our strength, is the connection of optics, photonics, and fibers at that speed; we have strength there. I believe that is our advantage and the difference from others. Thank you very much. Any questions from the remote participants? Please use the raise hand button. Anyone? Okay. Anyone in the room? Okay. Please wait for the microphone.
Masuno from Nomura Securities, I have two questions. NTT Innovative Devices was established in FY23. At that time, you said you would start mass production in FY25, with JPY 200 billion in sales in Q4; now you are saying commercial shipment and mass production in 27. Is this a postponement of the timeline, or are you talking about something different? Has the picture changed since the company was established? My second question is about the NVIDIA Quantum switch. We don't know when their switch will come, or in what volume, but looking at the supply chain, there are 12 companies inside. This time, you are partnering with Accton and Broadcom. Are you thinking of a larger ecosystem going forward? Thank you. Okay. When the company was established, we had that roadmap, that plan. But there were various circumstances, for example, the semiconductor crisis.
Our customers in FY22 and FY23 bought our products in big volumes, and it became their inventory, so customers needed some time to clear that inventory. That is why our sales dipped a little and were pushed back a little. And we are an NTT subsidiary, so the U.S.-China issues were a factor, and our advancement into Asia did not progress as much as we anticipated. So the existing business did not progress as much as we expected. But the development of PEC-2 and PEC-3 is progressing as scheduled, so no change there. I hope that clarifies it. Now, the NVIDIA partnership. Of course, we are aware of that. And the manufacturers, the companies in that group, are not saying that they are exclusive to anyone.
For example, with the two companies we have here on stage, and including the end users, we are having various discussions. So we will continue working and discussing with NVIDIA and others; that is not the only option for anyone. Our three companies are very close now, but going forward, we will evolve and talk with many other processors and end users. We are working on that right now, so I hope you will have high hopes. Thank you very much. Thank you. That's all for me.
I think to your question about NVIDIA Quantum: if I understand correctly, Quantum refers to InfiniBand rather than Ethernet. What I would say is that what you're seeing here is a technology partnership around technologies like Ethernet, which are very open. And even the whole discussion about Broadcom versus NTT IOWN just shows you that the only way you're going to scale to hundreds of gigawatts of data centers being deployed is by having an ecosystem that works around standard interfaces. Because otherwise, you cannot scale to the amount of opportunity that is out there. So we collectively, as an industry, have come to this conclusion. If you think about it, a few years ago, people said that if you want to build a large machine-learning cluster, you have to build it with InfiniBand as the technology.
But today, the entire market is building them with Ethernet as a technology because Ethernet is the only one, which is open standards based. So that is the most important thing, I think, for you to take away is it is a very large market. And there's a massive transition that's going to happen from copper to optics. And this requires multiple companies work together. In some places, you compete. Some places, you're going to partner. But that has to happen for this market to explode. Otherwise, if you're vertically integrated and trying to do everything yourself, this market will not happen.
Thank you very much. If not, we would like to conclude this session for the panelists. Please give them a round of applause. Thank you very much. We would like to prepare for the next session. So we kindly ask you to wait a moment. We would like to resume at 5:15 P.M. Thank you.
[Foreign language].
[Foreign language].
[Foreign language].
This shows data center power consumption. The red line in the middle shows Tokyo's total power consumption. With the advancement of AI, if this continues, by 2030 data center power consumption will exceed Tokyo's total power consumption. If this trend continues, the rapid increase in electricity consumption driven by the expansion of AI usage could become so severe that it might impact our daily lives, potentially leading to situations like planned power outages. Another aspect is the rapidly increasing cost burden on users due to AI utilization. The graph on the left shows companies' annual AI-related budgets. Costs for fiscal year 2025 are projected to increase by 75% next year. And while AI usage expands, concerns about declining ROI are also emerging. For the cost breakdown, please look at the graph on the right.
This shows AI adoption, and the composition of the AI market consisting of three major areas: consulting, applications, and infrastructure, such as GPUs and data centers. Among these, the infrastructure domain, including GPUs, accounts for the largest share, over half of the total. Computing infrastructure such as GPUs is driving the rapid increase in power consumption and cost growth. Given this situation, two things are required of the computing infrastructure supporting AI utilization: reducing power consumption and costs through efficient operation, and reducing the power consumption of the devices that make up the infrastructure itself. IOWN is a key component that enables the simultaneous achievement of AI utility, efficient operation, and reduced power consumption. First, let me explain efficient infrastructure operation. As AI usage expands into various business processes, changes are occurring in the utilization rates of computing resources like GPUs.
When GPU resources were built individually for specific AI applications, as was done in the past when AI usage was limited, they functioned at relatively high utilization rates. However, as AI adoption expands across diverse industries and business functions such as finance, legal, and marketing, and GPUs are provisioned per application, utilization rates become inconsistent, as shown in the graph on the right. In fact, when examining specific time periods, there are instances where only 50% of the total GPU resources are utilized. Furthermore, with the increasing demand for computing infrastructure like GPUs, power supply is reaching its limits, especially in metropolitan areas. This necessitates distributing GPUs to regional data centers with surplus power capacity.
While the ideal would be to efficiently operate GPU resources for various AI applications within a single cluster at one location, the reality is that power supply limitations are driving the decentralized deployment of computing infrastructure like GPUs. Leading companies are already utilizing multiple large-scale AI models, such as generative AI, image analysis, and real-time processing, alongside the expanding scope of AI applications, so thousands of GPUs are required for AI training and inference. Consequently, companies have already distributed their GPU installations, are visualizing GPU usage costs and power consumption at each site, and are beginning initiatives to efficiently manage distributed GPU resources, such as assigning more demanding AI processing to locations with surplus power or GPU resources.
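The kind of assignment just described, sending more demanding AI processing to sites with surplus power or GPU resources, can be sketched as a minimal greedy placement. This is purely illustrative: all site names, capacities, and job figures below are hypothetical, and this is not NTT's actual resource manager.

```python
# Illustrative greedy placement of AI jobs onto distributed GPU sites:
# each job goes to a feasible site (enough free GPUs and power headroom),
# preferring the site with the most surplus power. All figures hypothetical.
def place_jobs(sites, jobs):
    placements = {}
    # place the most GPU-hungry jobs first
    for name, gpus_needed, kw_needed in sorted(jobs, key=lambda j: -j[1]):
        feasible = [s for s in sites
                    if s["free_gpus"] >= gpus_needed and s["free_kw"] >= kw_needed]
        if not feasible:
            placements[name] = None  # no site can host this job
            continue
        best = max(feasible, key=lambda s: s["free_kw"])  # most power headroom
        best["free_gpus"] -= gpus_needed
        best["free_kw"] -= kw_needed
        placements[name] = best["site"]
    return placements

sites = [
    {"site": "tokyo", "free_gpus": 64, "free_kw": 200},
    {"site": "regional-dc", "free_gpus": 256, "free_kw": 1_500},
]
jobs = [("training", 128, 900), ("inference", 32, 120)]
print(place_jobs(sites, jobs))
```

In this toy example, both jobs land at the regional data center because the power-constrained metropolitan site cannot host the large training job, which mirrors the decentralization pressure described above.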
The first requirement for computing infrastructure supporting the AI era, efficient infrastructure operation, is achieved by connecting the vast distributed GPU resources at high speed via the IOWN APN, effectively creating a single massive virtual computing platform. Depending on the application and usage conditions, it flexibly optimizes the allocation of distributed computing resources like GPUs and assigns resources to sites capable of supplying the necessary power, based on GPU load and power consumption, ensuring stable and efficient operation. Next, I will address the other requirement: reducing the power consumption of the equipment that constitutes the infrastructure itself. As AI processing volume increases, the number of required GPUs also increases, leading to a larger volume of communication inside the computer. The bottom diagram shows the configuration of NTT DATA's GPU as a service, used as the training environment for NTT's LLM tsuzumi.
It may be difficult to see, scaled down like this, but you can probably imagine the large amount of communication occurring within the computer from the numerous cables. The diagram on the right, also shown in Mr. Hoshino's explanation at the beginning, presents communication volume on the vertical axis and communication distance on the horizontal axis. As you can see, communication exceeding 10 Tb per second faces barriers even within a computer, where components are extremely close together: increased power consumption and heat generation mean electrical communication has reached its limits, necessitating a shift to optical communications. With the processing demands of AI usage and the increase in GPUs, communication within the computer has already reached the terabit level. Electrical methods have reached their limit, making it time for IOWN photonics devices to step up.
The second requirement for computing infrastructure supporting the AI era is reducing the power consumption of the infrastructure itself, as shown in the diagram on the right. Implementing this PEC device, PEC-2, in the switches connecting components within the computer enables both increased communication capacity and reduced power consumption. The utilization of AI is steadily progressing. The line graph here shows that since November 2023, when NTT announced its LLM, the number of inquiries from domestic companies regarding tsuzumi and AI-related matters has been steadily increasing, exceeding 1,800 cumulative cases. Additionally, the bar graph shows that orders related to generative AI have exceeded 1,800 cases, domestic and international combined. Alongside this steady progress, inquiries regarding the construction and operation of computing infrastructure supporting AI utilization, such as data centers, cloud services, and GPU as a service, are also increasing.
The NTT Group provides GPU as a service, a managed service that delivers the necessary computing resources at the required time and scale, tailored to specific AI usage scenarios. The ability to provision and operate GPU servers, storage, resource management, security, and the other resources required for AI utilization efficiently and securely is what leads to the AI orders shown on the previous slide. The mobility AI platform we announced jointly with Toyota last October, to realize a society with zero traffic accidents, requires massive GPU resources. In addition to Toyota's existing computing resources, we plan to build new NTT DATA GPU-as-a-service capacity and deploy it in a distributed manner, where efficient operation and power reduction will be key.
To sustainably and widely deliver the benefits of AI, the computing infrastructure for the AI era must operate efficiently and reduce power consumption, connecting the increasingly distributed GPU resources via the IOWN APN and using low-power devices such as the PEC switch. We will take on the challenge of reducing the power consumption of computing infrastructure as well. Thank you very much for your kind attention. Thank you very much, Oonishi-san. We will now have an overall Q&A session. Please wait till we get ready. Thank you.
[Foreign language]
Thank you for waiting. We will now have the overall Q&A session. President Shimada, Senior Executive Vice Presidents, Mr. Tomizawa, Mr. Hoshino, Oonishi EVP, and Kinoshita EVP will answer your questions. Please come up to the stage.
[Foreign language]
So I will explain how you can ask your questions.
First, we will take questions from the media in this room, followed by the investors, and then from those attending via the web conference system remotely. If you have a question in this room, please raise your hand and we will bring you a microphone. If you are at a remote site, please use the raise hand button; if you want to withdraw your question, please press the raise hand button again. When we call on you, please state your name and company name. We will ask you to unmute, so please unmute yourself on the web conferencing system. After your question, please mute yourself until our response is completed. First, questions from the media in the room. Please raise your hand. Thank you. The gentleman at the very end, please.
Nikkei BP, Horikoshi is my name. Broadcom spoke highly of the NTT IOWN team's capability, which was very impressive. I have three questions related to that. NTT is the device manufacturer, Broadcom supplies the ASIC, and Accton handles manufacturing, so you will manufacture along a supply chain. First, you said your first target is the hyperscalers. Tomizawa-san, who will do the selling? Will the entire team promote the product, or someone else? That's my first question. Second, if one hyperscaler adopts this, you mentioned 5,000 pieces a month of PEC manufacturing capacity, and you may run short. How are you preparing for a production increase, and how much excess capacity do you have now? And lastly, how will NTT's profit increase going forward? I want a clear image of that, for NTT as a device manufacturer.
If the final product ships in big volumes, revenue will rise, I understand, but are there any other revenue streams? Those are my three questions. Thank you. Shimada will answer. First of all, today we had Broadcom, Accton, and NTT; the three parties form a supply chain, and all three companies will do sales and marketing activities. Of course, we do the overall coordination and have started contacting various customers, but both partners also have their own customer channels, so we are leveraging each other's channels. Next, our production line. As you rightly said, it is 5,000 pieces a month. But we can run three shifts with 24/7 operation; it will be practically automated, with machines doing the manufacturing, so we can triple the production volume.
We're also thinking of increasing the number of production lines. For the time being, we have already planned to add a second and third line; next year, the second line will be up and running. So we will invest sufficiently according to customer demand. Next, profitability. As we mentioned today, the market is not fully formed yet; it is a new market, so it's difficult to state the level of revenue we can achieve. The customers have not been finalized, the unit price has not been set, and the pilot users' revenue is only a guess. The optical interconnect market will be JPY 7 trillion in 2032, and our PEC device is not included in that at all, because conventional metal wiring will be replaced by optical: copper will be replaced by photonics, and we don't know the size of the copper market.
And so how many people have interest and what the market size looks like will be elucidated through the market research company survey. So on that basis, we will work on getting the market share. We will have a better understanding and make our move. So today, I'd like to refrain from quantifying the market size. Thank you. Thank you.
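Shimada's capacity answer reduces to simple arithmetic, which can be sketched as follows. Only the 5,000 units/month single-line base is from the talk; treating the three-shift "triple" and per-line "multiply" remarks as exact linear factors is our simplifying assumption.

```python
# Monthly PEC capacity under the scaling Shimada described: a 5,000
# unit/month base per line, roughly tripled by running three shifts 24/7,
# and multiplied again by each additional production line.
def monthly_capacity(lines: int, shifts: int,
                     base_per_line_per_shift: int = 5_000) -> int:
    return base_per_line_per_shift * shifts * lines

print(monthly_capacity(lines=1, shifts=3))  # 15,000 on one fully automated line
print(monthly_capacity(lines=2, shifts=3))  # 30,000 once the second line is running
```

Under these assumptions, the planned second and third lines take the ceiling well past the currently visible demand, consistent with the answer above.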
[Foreign language].
Next, the gentleman in the front row. My name is Abe from TV Tokyo. I would like to ask Mr. Shimada a question. The first question is about the development status of IOWN: how do you view the overall progress? There is an increasing number of competitors in photonics-electronics convergence technology, and the third-generation part seems a bit delayed compared to the original 2025 commercialization plan. How do you see the progress on this?
What is your view on that? Regarding the progress itself, basically, I think it's progressing as planned; I don't think it's behind at all. What Mr. Tomizawa was talking about, the inventory, relates to the existing products. The PEC products are actually moving faster than planned. And the APN is advancing steadily: at the Osaka Kansai Expo, we connected about 29 sites through the APN, and for the World Athletics Championships, TBS connected remote sites to Akasaka and used that for remote production. So we can see that usage is steadily increasing. As the overall IOWN project, my view is that it is progressing as planned. And the most important point of IOWN is how to reduce power consumption and create a society with lower power consumption.
What we have explained today, the PEC device, is at the core of this. Therefore, what's important is to launch PEC-2 without fail next year and to bring out the next device in 2027 or 2028. At least one thing we wanted to convey today is that NTT is not doing this alone; globally, we have solid partners who will work with us. That is the progress we were able to show you today. Thank you very much. One more question. To move this IOWN concept forward, collaboration with the Japanese government is indispensable, I think. Ms. Takaichi is now the president of Japan's LDP, and she is a former minister of MIC, a politician who already has a relationship with NTT.
With the new prime minister coming in, what are your expectations? I have known Ms. Takaichi since she was minister of MIC. For the PEC device we are developing, and for Japan to come back as a main player in the semiconductor world, I believe we will be able to receive good support. Together with the government, in fact, we are already receiving support for IOWN-related R&D. With additional support, we would like to come up with products that can win share and contribute to society. Thank you.
[Foreign language].
Now we will take questions from the investors in this room. Any investors with questions? The front row, Morgan Stanley, Izuka is my name. Thank you.
First, about the IOWN device you showed us. Broadcom uses the Tomahawk 6 and sells the switch, so there is a clear product output, and I find it convincing that this can be launched. But the next generation will be embedded in chips, so it's not a final product: the chip manufacturers whose chips go into final products need to adopt your device, which I think is a higher hurdle. And the collaboration will need to deepen, so the difficulty will rise, I think. That is my image. Am I correct or wrong? Could you give me your insight? Yes, Tomizawa will answer. The level of difficulty will rise; yes, you're right.
Because of that, we decided to collaborate on PEC-2 and use that as a foundation to move forward. That's step one. In reality, the PEC device will go into chips. We have been engaged in the METI national project, and in that framework, the logic manufacturers are giving us cooperation and advice; this discussion is ongoing and has already started. To make this large-scale, you need to collaborate with the global chip manufacturers. In each area, will it just be a niche logic area, or are you thinking of scaling up even further? This is a device, so we need volume to make business sense. We want to work with the number one or two logic manufacturers in the world.
Initially, the volume may not be large, but ultimately we want to aim for the mass market. Thank you very much. My next question is about NTT as a whole. You will be selling AI, so what will the NTT structure be? You said orders are now ramping up. There is NTT East and West, NTT DOCOMO with the DOCOMO business, and NTT DATA with the global business. From tsuzumi and other unique products to global AI products, you have a broad lineup. Is the current enterprise sales formation suitable, or will there be a change in how you sell to enterprises? Okay, let me answer that question. We now have DATA, the DOCOMO business, and East and West, and they each already have AI sales in place. Their customers are slightly different from one another.
Many of NTT East and West's customers are local governments, which are using AI in many cases. As for the types of AI used, one is tsuzumi, but we are not planning to offer tsuzumi alone; we propose to each customer the AI solution that suits them best. Customers who want and need tsuzumi are, for example, NTT DATA customers who want to process data in a closed environment, so tsuzumi is being requested by customers we are already dealing with. We will be announcing this soon, but we will release tsuzumi 2 this month. Revenue is rising in a balanced manner across the current accounts, and Oonishi's area oversees the overall structure while each of us carries responsibility for developing the technology, so I think the current formation is appropriate.
Anything else to add? DATA, the DOCOMO business, and East and West each have their respective customers, and the necessary AI solutions and infrastructure are shared as needed, but each operating company offers the appropriate technology to its own customers. Thank you. Thank you. Any other investors in the room? The gentleman in the back of the room, please. Tokunaga from Daiwa Securities. Just one question. It was my understanding that when the IOWN Global Forum was established, NTT, Intel, and Sony were the three founding companies. Today, the collaboration with Broadcom was featured, but what about the collaboration with Intel? What is the progress? It also seems that Intel is collaborating or forming an alliance with SoftBank. Regarding IOWN and Intel, can you share the situation? Kinoshita will answer that question.
Currently, around PEC-2, we do not have a direct collaboration with Intel. But on the NIC, the network interface card, we still have a continuing collaboration.
[Foreign language].
Now we will take questions from online. Anyone participating online with questions, please let us know with the button. Are we okay? SMBC Nikko Securities, Kikuchi-san, please. If you could unmute yourself, please. We are sending the request to unmute. Oh, yes, this is Kikuchi speaking. Sorry about that. Since this is IR Day, we want more information from the investor perspective. In terms of CapEx, what is your plan? Who will be the main producer? Will production be done through a joint venture? What level of investment will NTT make, and where will that funding come from? Shimada will start off. For investment, NTT Innovative Devices will increase its production lines.
And if necessary, we want to increase our stake and our investment. But Innovative Devices will run maybe three or four lines at most. Beyond that, we are still discussing with Tomizawa-san. We still have some open land next to the Innovative Devices site, so perhaps we can expand the factory in Mito. And for the back end of the process, which is like the semiconductor back-end process, we can work with partners like Shinko Electric and produce together with them. So we are exploring multiple options. This will require investment, but the investment will be needed in phases, and for the time being Innovative Devices can cover the required production volume. Tomizawa-san? Yes, that is right. So, 5,000 pieces a month per line; how much investment per line should I expect?
Sorry, I don't have the numbers off the top of my head, so if I could get back to you later. Yes, thank you. My second question: revenue is still uncertain, but at the Osaka Kansai Expo, I think you had various business negotiations. Did you get good traction toward possible revenue? And is it the October-December quarter next year in which you have a good prospect of recording revenue? Basically, there were no negotiations at the Expo site itself. What we did at the Expo was well received by customers and visitors, but what was being evaluated there was the pavilion and how cutting-edge the technology is. As Tomizawa-san said, we want large-scale usage, customers who will use this in big volume.
I cannot name names, but a few hyperscalers and neocloud players are in business negotiations with us, and that is where we want to expand our business from. Thank you. My third question: at the annual IR Day, we ask many questions from the investor's point of view. This has nothing to do with IOWN, but what investors are now worried about is DOCOMO's Q1, if that is okay, and how DOCOMO will recover from Q2 onward. That is our concern. On the other hand, on the data center side, you launched the REIT and the share price is moving; what about your future data center business, including utilization of the REIT? Are you seeing any drivers of the business, or are you being a bit conservative this year on financial results and shareholder returns?
Because this is IR Day, I want you to give us information that is convincing to investors. Thank you. Okay, so let me ask you a question first: what about DOCOMO specifically should I answer? There were some gaps between what was said at the full-year results briefing and the Q1 results briefing, and at the results briefing it seemed you did not explain clearly. Maybe that is just my impression, and I think the general direction remains unchanged, but from your point of view, Shimada-san, do you think DOCOMO is operating sufficiently well? Also, we were expecting some comments today on SBI Sumishin Net Bank and NTT DATA, so I would like comments on those and on the data center business as well. I'm sorry, I'm being a bit greedy here. The competition itself is becoming quite severe; the competitive environment is probably becoming more severe than originally expected.
But we are telling DOCOMO to respond to the competition. Of course, that means marketing costs will increase and there will be a cost push. On the other hand, they are working on cost-reduction and efficiency initiatives and selling the assets they hold, which will give them the resources to respond to the competition. Through that, we are telling them to win through this competitive environment and not let their market share drop further, and DOCOMO itself is thinking the same way. Having said that, I believe we have to keep our commitment to investors, so over the next six months we would like to continue responding in that way.
As for SBI Sumishin Net Bank, the new business is currently under consideration with them ahead of the actual launch. We would like to give you a detailed explanation of that, but please give us time to do so. Regarding NTT DATA, we are in the midst of discussions on the synergies that will be generated and on the formation of group companies within the NTT Group; it is not a major change to the formation, but some businesses may simply be better transferred over to NTT DATA. So it seems this will require slightly more time. I had said that from the fall into the winter season we would like to share these initiatives and provide an explanation.
But currently, it seems to be running a bit behind the schedule I originally indicated. As for NTT DATA itself, it will soon be time for them to come up with their next medium-term management strategy, so within that we hope to convey the formation to you, and through that explanation also explain the synergies that will be generated. That's all. Thank you very much. Thank you.
[Foreign language].
Thank you very much. Due to time restrictions, we would like to make the next question the last one. Please go ahead. [Foreign language] My name is Takatsuki. Today's explanation was mainly focused on IOWN 2.0. Considering the future demand growth of hyperscalers, how are you going to develop IOWN 3.0, and how are you going to sell it?
The mass-production formation for IOWN 2.0 was explained; regarding 3.0, for which you can expect even larger demand, how are you going to respond? And finally, to you, Mr. Shimada: there was a strong impression that NTT's businesses, including IOWN, were centered on telecommunications, but now it seems you are expanding into the semiconductor area as well. Can you comment on spreading your wings further and expanding your business scope? Regarding IOWN 3.0, the way manufacturing will be done has not been determined yet. However, Furukawa Electric, with whom we are collaborating on the national project, is an indispensable partner with whom we need to continue collaborating.
There was a Nikkei article out some time ago, which I will not deny here, but what kind of supply chain will be established at the 3.0 stage is something we will consider going forward. Regarding technology development, we will be collaborating with Furukawa, and that is indispensable. As for production itself, what we will do is something we will review further and share with you at an appropriate time. Thank you. Thank you. It is now time to close the Q&A session. Mr. Shimada, Mr. Hoshino, Mr. Tomizawa, Ms. Oonishi, and Mr. Kinoshita, thank you very much. Thank you very much for giving us so many questions. With this, we would like to conclude the scheduled program.
Thank you very much for participating in the NTT PR/IR Day: Towards the Future with IOWN. We seek your continued support and understanding. Thank you.