Fujitsu Limited (TYO:6702)

Status Update

Jun 4, 2024

Operator

Welcome to the Fujitsu Research Strategy Briefing Session. The speakers are as follows: The first speaker is Toshihiro Sonoda, Head of Artificial Intelligence Laboratory. The title of his talk is Research Strategies in the Field of AI: Enterprise Generative AI Framework.

Toshihiro Sonoda
Head of Artificial Intelligence Laboratory, Fujitsu

I am Toshihiro Sonoda, Head of the Fujitsu AI Laboratory. Today, I would like to explain our research strategy in the field of AI. Generative AI is highly anticipated as a key technology for business transformation. There are two streams of generative AI. One is the large-scale, general-purpose model, such as GPT, which is offered in the cloud and intended for general use. The other is the small to medium-sized specialized model, which is intended to solve enterprise-specific problems. Fujitsu will focus on the market for specialized models to meet corporate needs. There are still challenges with specialized models, especially in dealing with diverse and massive amounts of enterprise data, flexible customization, and governance. By governance, I mean that it is difficult to make generative AI compliant with corporate rules and regulations.

Fujitsu declares that it aims to become a global top player in helping enterprises adopt generative AI by providing the Fujitsu Enterprise Generative AI Framework, which solves these challenges. Fujitsu has provided a generative AI environment to all 124,000 employees for in-house practice. We have also made conversational generative AI for enterprises publicly available on Kozuchi. In addition, we developed a large language model, Fugaku-LLM, on the supercomputer Fugaku. Fugaku-LLM, with 13 billion parameters, was trained from scratch with proprietary data. Fujitsu was responsible for speeding up communication and computation, as well as for pre-training and subsequent fine-tuning, which are important technologies for creating specialized generative AI for enterprises. As I mentioned earlier, Fujitsu's generative AI technology strategy is to build and deliver the Enterprise Generative AI Framework. There are three key technologies that will be important in enabling this framework.

Knowledge Graph Extended RAG solves the handling of large and diverse enterprise data formats. It makes it possible to handle information that accurately captures the relationships and connections between words in text samples of more than 10 million characters. Amalgamation Technology provides flexible customization so that generative AI can adapt to changing corporate needs. Generative AI Audit Technology eliminates concerns about using generative AI in companies by controlling the behavior of generative AI. This slide indicates how the Fujitsu Enterprise Generative AI Framework works. The framework, which can flexibly respond to corporate needs while complying with corporate rules and regulations, works in tandem with Knowledge Graph Extended RAG, Amalgamation Technology, and Generative AI Audit Technology. Of course, each technology can also work individually. There are two steps to utilizing this framework.

One is to prepare a knowledge graph from diverse and large-scale corporate data such as product manuals, software design documentation, surveillance video, network logs, and documentation about laws and regulations. The other is runtime, when a customer uses it. From here, I would like to explain these three technologies in detail. Knowledge Graph Extended RAG sequentially processes large-scale data such as product manuals, network logs, and surveillance videos handled by companies to create a knowledge graph. It assists the inference of generative AI by extracting from this knowledge graph the information that enables generative AI to derive correct answers in response to user queries. This latest proprietary technology enables the handling of large-scale data. We have achieved first place in the world on the HotpotQA benchmark, which measures the accuracy of complex question answering. Traditionally, LLMs can handle at most about 300,000 characters.
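
The knowledge-graph preparation step can be pictured with a toy example. The triples, relation names, and lookup function below are hypothetical illustrations of the idea, not Fujitsu's actual pipeline:

```python
# Illustrative sketch: indexing (subject, relation, object) triples
# extracted from enterprise records, then walking the graph to answer
# a question a flat chunk search would struggle with.
from collections import defaultdict

def build_knowledge_graph(triples):
    """Index triples by subject so neighbours can be looked up quickly."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

# Triples that might be extracted from a product manual or network log.
triples = [
    ("RouterX", "has_component", "PSU-1"),
    ("PSU-1", "failure_mode", "overheat"),
    ("overheat", "remedy", "replace fan tray"),
]
graph = build_knowledge_graph(triples)

def find_remedies(graph, device):
    """Walk device -> component -> failure -> remedy paths."""
    remedies = []
    for rel, comp in graph.get(device, []):
        if rel != "has_component":
            continue
        for rel2, failure in graph.get(comp, []):
            if rel2 != "failure_mode":
                continue
            for rel3, remedy in graph.get(failure, []):
                if rel3 == "remedy":
                    remedies.append(remedy)
    return remedies

print(find_remedies(graph, "RouterX"))  # ['replace fan tray']
```

The answer here requires chaining three facts that may sit in different documents, which is exactly the multi-hop connection a plain chunk retriever misses.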

Conventional RAG assists generative AI in inference by taking fragments of sentences, called chunks, that are relevant to a query and providing them to the generative AI. But it cannot handle relationships across multiple parts of a text or a broader view of the data, resulting in less accurate inference. Knowledge Graph Extended RAG can accurately extract only the information needed by the generative AI by preparing a graph schema with which the generative AI can correctly derive the answer to the user's question. By retrieving the information corresponding to the schema from the knowledge graph, it enables the generative AI to infer accurately from less information than conventional RAG requires. We have confirmed that it can accurately process large amounts of data exceeding 10 million characters, and in actual benchmarks it has achieved accuracy exceeding that of GPT-4.
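
The contrast with chunk-based retrieval can be sketched as follows; the facts, schema names, and relation types are illustrative, not Fujitsu's actual schema format:

```python
# Hedged sketch of schema-guided retrieval: only the graph facts whose
# relation types belong to a schema chosen for the question are handed
# to the LLM, instead of every chunk that mentions the entity.

FACTS = [
    ("ModelA", "max_input", "300000 characters"),
    ("ModelA", "vendor", "ExampleCorp"),
    ("ModelA", "release_year", "2023"),
]

# Each schema lists the relation types relevant to one class of question.
SCHEMAS = {
    "capacity": {"max_input"},
    "provenance": {"vendor", "release_year"},
}

def retrieve(schema_name, facts=FACTS):
    """Return only the facts matching the schema's relation types."""
    wanted = SCHEMAS[schema_name]
    return [fact for fact in facts if fact[1] in wanted]

# A capacity question now reaches the LLM with one fact as context.
print(retrieve("capacity"))  # [('ModelA', 'max_input', '300000 characters')]
```

Passing fewer, more relevant facts is also what shrinks the input context, which is the cost reduction mentioned next.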

This technology also reduces the input context of the LLM, thereby reducing cost. A specific example is Q&A over a product manual. The second is network log analysis: it can streamline failure recovery by listing potential network failures. The last is worker analysis through video: it is possible to check the long-term situation of workers from video of the work site. Amalgamation Technology is a technique for creating specialized generative AI while reducing the amount of work required for prompt engineering and fine-tuning, and achieving high accuracy. First, it extracts the characteristics of the user's query; then it selects and combines the AI models required to execute the task, based on the characteristics of the query and the models; and it automatically generates the execution pipeline required to process the task.

In addition to this function, Amalgamation Technology can automatically create an appropriate AI model even if it does not have one. It extracts that model's characteristics at the time, so that it can select the model that fits the user's query. It has achieved the same level of video-detection accuracy as GPT-4V, and the highest performance among Japanese open models. As examples, we will demonstrate three cases: in-house contract compliance checks, improving support desk efficiency by streamlining ServiceNow incident allocations, and optimal driver allocation, which we are working on with Nakayama Transportation. Finally, our Generative AI Audit Technology addresses the challenge that using generative AI in the enterprise is difficult due to the lack of full control over how it works. Generative AI Audit Technology uses a knowledge graph of laws, regulations, and internal rules to verify the conformity of inputs and outputs.
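
The select-and-combine step described above can be sketched as a capability-matching problem. Everything here, including the model names and capability tags, is a hypothetical illustration of the idea, not Fujitsu's implementation:

```python
# Illustrative sketch of pipeline assembly: characterise the query,
# then pick registered models whose declared capabilities cover it.

MODEL_REGISTRY = {
    "ocr_model":      {"image", "text-extraction"},
    "clause_checker": {"text", "compliance"},
    "vision_qa":      {"image", "qa"},
}

def characterise(query):
    """Derive the capabilities a query needs (toy heuristic)."""
    tags = set()
    if query.get("has_image"):
        tags.add("image")
    if query.get("task") == "compliance":
        tags.update({"text-extraction", "compliance"})
    return tags

def build_pipeline(query):
    """Greedily pick models until every required capability is covered."""
    needed = characterise(query)
    pipeline, covered = [], set()
    for name, caps in MODEL_REGISTRY.items():
        gain = (caps & needed) - covered
        if gain:
            pipeline.append(name)
            covered |= gain
        if covered >= needed:
            break
    return pipeline

query = {"has_image": True, "task": "compliance"}
print(build_pipeline(query))  # ['ocr_model', 'clause_checker']
```

A scanned-contract compliance check thus resolves to an OCR stage followed by a clause-checking stage, with no hand-written pipeline.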

By analyzing the grounds for the judgments derived by generative AI, explanations and hallucination detection become possible, and outputs can be audited. This verification function is complicated, so I would like to explain it with a concrete example: checking whether an input image complies with the Road Traffic Act. You ask the generative AI to determine whether the input conforms to the rules. In this case, the rules are extracted from the rule knowledge graph based on your request. The generative AI determines that a bicycle must run on the left side of the roadway, but the problem is that there is no way to verify that the generative AI's judgment is correct. This technology first analyzes the grounds for how the generative AI made its decision.

In this case, the woman riding a bicycle is marked as a location that the generative AI paid attention to, and this is recognized as the grounds for the generative AI's judgment. If these judgment grounds match the output, the rule violation identified in the input can be determined to be correct. If there is a contradiction, it may be a hallucination, so it is corrected. In this way, Generative AI Audit Technology aims to enable companies to use generative AI with confidence by correctly determining whether there is a rule violation. The Fujitsu Enterprise Generative AI Framework, incorporating the three technologies we have described so far, is scheduled to be released as part of the Fujitsu Kozuchi lineup in July 2024.
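
The audit step described above can be reduced to a small sketch: a verdict is accepted only when the grounds the model attended to actually support it, and is otherwise flagged as a possible hallucination. The rule text and field names are made up for illustration, not Fujitsu's actual interface:

```python
# Minimal sketch of grounds-vs-output verification for an AI audit.

RULES = {
    "road_traffic": "a bicycle must run on the left side of the roadway",
}

def audit(rule_id, model_output, grounds):
    """Cross-check the model's verdict against its own attention grounds."""
    rule = RULES[rule_id]
    verdict = "violation" if "violation" in model_output else "compliant"
    # The attended subject (e.g. the image region with the cyclist) must
    # appear in the stated output, or the judgment is unsupported.
    supported = grounds.get("subject", "") in model_output
    status = "verified" if supported else "possible hallucination"
    return {"verdict": verdict, "status": status, "rule": rule}

result = audit(
    "road_traffic",
    "violation: cyclist riding on the right side of the roadway",
    {"subject": "cyclist", "region": "lower-left"},
)
print(result["verdict"], result["status"])  # violation verified
```

If the grounds pointed at something unrelated (say, a pedestrian) while the output claimed a cycling violation, the contradiction would surface as "possible hallucination" and the output could be corrected before reaching the user.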

As stated at the beginning, Fujitsu is committed to continually meeting the needs of companies in utilizing generative AI, and by providing the Fujitsu Enterprise Generative AI Framework, we aim to become a world leader in solving and supporting the challenges of utilizing generative AI. That brings me to the end of my presentation. Thank you for your attention.

Operator

The second speaker is Naoki Shinjo, SVP, Head of Advanced Technology Development Unit. The title of his talk is "Processor for Next-Generation Data Centers, FUJITSU-MONAKA."

Naoki Shinjo
SVP and Head of Advanced Technology Development Unit, Fujitsu

Hello, my name is Naoki Shinjo, Head of the Advanced Technology Development Unit in Fujitsu Research. Today, I would like to present our next-generation, high-performance, energy-efficient ARM-architecture processor, FUJITSU-MONAKA. While global warming is a real risk today, data center power consumption has become a major issue due to the increasing use of AI and big data, and reducing it is an urgent matter. To address this problem, Fujitsu, the only company in Japan able to develop high-end processors, is developing a high-performance, power-saving processor called FUJITSU-MONAKA with the support of a national project by NEDO. FUJITSU-MONAKA is the world's first processor to use state-of-the-art 2-nanometer technology, and with ARM's SVE, the Scalable Vector Extension, its 144 cores provide the fast data processing infrastructure required for AI and HPC.

Fujitsu's unique ultra-low-power technology provides both low power and high performance. In addition, the confidential computing architecture provides advanced usability and security features. It also offers world-standard DDR memory, PCI Express Gen 6, and air cooling, which make it versatile and easy to use. We have started co-creation around FUJITSU-MONAKA in various fields, with the aim of utilizing it in various applications. The main market is the data center market, in addition to security and telecom. We will respond to the growing demand for AI in the data center market by pursuing AI performance and expanding AI software. Repeating simple computations, as in deep learning, is the GPU's forte, but we believe the CPU can be especially useful for inference and for machine learning that requires complex computations. Today, I will briefly explain our software expansion activities.

FUJITSU-MONAKA covers a wide range of software, including AI and HPC, machine learning, deep learning, big data analytics, and data security. In particular, we will cooperate with the open source software community and contribute to developing software that brings out the performance of FUJITSU-MONAKA before the hardware ships, so that it can be used immediately afterward. Here are some of the software development activities. Fujitsu is leading the development and application of unified acceleration technology through the launch of the UXL Foundation, an organization dedicated to enabling various CPUs and accelerators to be used from a single source code.

Currently, accelerators such as GPUs are used via vendor-specific interfaces that require rewriting the source code for each piece of hardware. Unified acceleration technology eliminates the need for this rewriting, reduces the cost of environment migration, and allows users to select the most appropriate hardware. Fujitsu also became the first company in the world to successfully enable oneDAL, the oneAPI Data Analytics Library, on ARM. oneDAL had dependencies on the Intel MKL and only worked on x86 CPUs, but by replacing MKL with OpenBLAS, we were able to run oneDAL on ARM. In this way, we will expand ARM enablement and build a foundation for ARM solution development.
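
The single-source idea can be illustrated with a toy dispatcher: application code calls one entry point, and a backend is picked at runtime, so switching hardware needs no source rewrite. The backend name and registry here are hypothetical; the real interface is the oneAPI/UXL stack, not this sketch:

```python
# Hedged sketch of runtime backend dispatch behind a single API.

BACKENDS = {}

def register(name):
    """Decorator that records a backend implementation under a name."""
    def deco(fn):
        BACKENDS[name] = fn
        return fn
    return deco

@register("generic-cpu")
def matmul_python(a, b):
    # Portable reference kernel; a real stack would dispatch to an
    # optimized BLAS here (OpenBLAS on ARM, MKL on many x86 builds).
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def matmul(a, b, backend="generic-cpu"):
    """Single entry point: the caller never names the hardware directly."""
    return BACKENDS[backend](a, b)

print(matmul([[1, 0], [0, 1]], [[1, 2], [3, 4]]))  # [[1, 2], [3, 4]]
```

Swapping MKL for OpenBLAS under oneDAL follows the same pattern at a larger scale: the caller-facing API stays fixed while the kernel underneath changes per platform.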

As I explained, FUJITSU-MONAKA and its optimized AI software will support Fujitsu Kozuchi as an AI infrastructure platform that can be used in a wide range of fields to solve customer problems. Thank you very much.

Operator

The third speaker is Seishi Okamoto, Corporate Executive Officer, EVP, Head of Fujitsu Research. The title of his talk is Fujitsu's Research Strategy: Creating New Value by Combining Technology Areas.

Seishi Okamoto
Corporate Executive Officer, EVP, and Head of Fujitsu Research, Fujitsu

I'm Okamoto, Head of Fujitsu Research. I would like to talk about how we are creating new value by combining technology areas. To build a sustainable society, we have been advancing research and development focused on five key technologies. As Sonoda explained our AI strategy and Shinjo showed the development roadmap for FUJITSU-MONAKA just before, we are aiming to create new value by combining AI with the other four technologies, such as the fusion of AI and computing. I am convinced that Fujitsu's strongest point is the capability to combine AI with the world-leading supercomputing technology we have cultivated over the years and with our cutting-edge quantum computing technology. Let me start with the fusion of AI and computing. AI itself is a core technology that can solve a variety of social problems. However, AI is also posing new social issues.

For example, 10% of all electricity generated in the world will be consumed at data centers by 2030, and most of the electricity in data centers is consumed by AI system operation. Therefore, we can say that the development of AI is directly linked to the global electricity problem. GPUs are now being used for AI calculations. With this background in mind, we developed a new technology to utilize GPUs fully, up to 100%. We hear that even the supercomputer Tsubame has a GPU usage rate of around 30%, although various efficiency measures are currently taken. Therefore, we can say that our new technology is a major breakthrough over conventional technologies. By advancing R&D on this technology, we can reduce the number of GPUs required, and we hope to cut the related power consumption in the world by half.

According to our calculations, that is equivalent to the annual electricity consumption of about 24 million households in Japan. Next, let me talk about the fusion of AI and data security. As reported by the World Economic Forum, disinformation brought about by generative AI is becoming a serious problem and posing unprecedented social risks. To address this threat, we are working toward international governance formation through activities such as the Hiroshima AI Process, which was declared at the G7 Hiroshima Summit last year, while following the AI guidelines for business compiled by the Japanese government. As a measure against disinformation, which is becoming an especially serious international problem among the risks posed by generative AI, we are developing the world's first system to comprehensively judge the authenticity of information, whether it is misinformation or disinformation.

Although AI is a core and driving technology for solving social issues, we do not think we can solve complex problems with AI alone. Here, I would like to introduce our converging technologies, which can produce significant effects in solving social issues by combining AI with the humanities and social sciences. We are conducting R&D on the Social Digital Twin, a technology to automatically create and propose net-positive development measures for the environment, society, and the economy through the fusion of AI with the humanities and social sciences. Around the world, we are advancing field trials in various areas, such as mobility, energy, the environment, disaster prevention, crime prevention, and wellbeing. It is said that the next generation of AI will become closer to humans and go beyond the limits of current AI capabilities. We expect that exponentially fast quantum computing power will revolutionize the world of AI.

Therefore, we are pursuing R&D of next-generation AI through fusion with quantum computers. In the field called quantum machine learning, Fujitsu has developed many world-first technologies. For example, the left figure on this slide shows the world's fastest Quantum Convolutional Neural Network technology. We succeeded in accelerating the calculation by streamlining the calculation process compared with traditional methods. Then, please look at the right figure on the slide. As you know well, noise is a big problem in quantum computing. To address this problem, we developed the world's first technology to restore the noise-free original data by removing noise with a technology called the Quantum Encoder. We will open up a new world of AI with quantum machine learning technology. I would like to conclude my presentation by restating our research strategy.

Fujitsu will strive to create new value by combining technology areas centered on AI, which is our specialty, leveraging our technological strengths. In doing so, we would like to contribute to building a sustainable society. Thank you very much for your attention.
