Good morning, afternoon, and evening. Thank you for joining the SK hynix 2025 Q3 earnings release conference call. Following the SK hynix presentation, there will be a Q&A session. If you have a question, please press star and one on your phone. Please note that the presentation will be interpreted simultaneously and the Q&A session consecutively. With that, we are now ready to begin.
Good morning, afternoon, and evening. This is Park Seong-Hwan, Head of IR at SK hynix. Welcome to the SK hynix 2025 third quarter earnings release conference call. Allow me to introduce the executives present here with me today. We're joined by Chief Financial Officer Kim Woo-Hyun, Head of DRAM Marketing Kim Kyu-Hyun, Head of NAND Marketing Kim Seok, and Head of HBM Sales and Marketing Kim Ki-tae. Let me issue a disclaimer that all outlooks presented by the company are subject to change depending on the macroeconomic and market circumstances.
With that, we will now begin the SK hynix earnings release conference call for the third quarter of 2025. CFO Mr. Kim Woo-Hyun will first present the earnings, followed by the company's future plans and market outlook, and a Q&A session with the attending executives. Good morning, everyone. Allow me to first introduce the company's performance for the third quarter of 2025. Earlier in the year, we expected more moderate demand conditions in Q3 due to external uncertainties and the impact of some preemptive purchases in the first half. However, we ended up witnessing a highly favorable market environment, with a spike in demand for memory products for servers, including HBM, driven by surging AI infrastructure investments by Big Tech companies. As a result, third quarter revenue again set a quarterly record at KRW 24.4 trillion, up 10% QoQ and 39% YoY.
This was driven by stronger DRAM and NAND pricing, as well as an increase in DRAM shipments from rising demand. DRAM bit shipments exceeded guidance, increasing by a high single digit sequentially, driven by growing sales of HBM3E 12-high products and server DDR5 to support AI demand, as well as a seasonal demand recovery for LPDDR5 products. In particular, shipments of high-density DDR5 modules of over 128 GB doubled QoQ for two quarters in a row, clearly demonstrating robust growth in HPC-related DRAM demand. ASP rose by a mid single digit QoQ, with strong ASP growth for conventional DRAM products. For NAND, bit shipments decreased by a mid single digit QoQ, given the high base from the previous quarter, but enterprise SSD shipments grew by double digits amid rising demand from AI servers.
ASP increased by a low-10% range compared to the previous quarter, supported by NAND price recovery and a higher mix of enterprise SSDs carrying a pricing premium. Operating profit reached KRW 11.4 trillion, up 24% QoQ and 62% YoY, also marking an all-time high. Operating margin improved by five percentage points QoQ and seven percentage points YoY to 47%, driven by strong sales of leading-edge products such as HBM, high-performance DRAM, and enterprise SSD. This marks the first time in the company's history that quarterly operating profit has exceeded KRW 10 trillion. Depreciation and amortization expenses in Q3 were KRW 3.6 trillion, resulting in EBITDA of KRW 14.9 trillion and an EBITDA margin of 61%.
Non-operating income net of expenses was KRW 3.4 trillion, including KRW 0.21 trillion of foreign currency related net gain due to a stronger U.S. dollar at the end of the quarter, and KRW 3.3 trillion of valuation gains on investment assets. Pre-tax income was KRW 14.8 trillion, net income was KRW 12.6 trillion, and net profit margin stood at 52%, again reaching a record-high level. At the end of Q3, cash and cash equivalents stood at KRW 27.9 trillion, up KRW 10.9 trillion from the previous quarter. Interest-bearing debt increased by KRW 2.2 trillion to KRW 24.1 trillion, resulting in a net cash position of KRW 3.8 trillion. Accordingly, the debt-to-equity ratio improved by one percentage point QoQ to 24%. Now, let me share our market outlook.
In 2025, despite ongoing geopolitical and macroeconomic uncertainties such as tariffs, the memory market saw mixed expectations: optimism regarding explosive AI growth alongside concerns about monetization. Recently, however, global investments in AI infrastructure have become the top priority for AI market expansion, driving significant demand growth not only for HBM but for broader memory demand such as DRAM for general-purpose servers and enterprise SSDs. The AI market is now shifting rapidly from the training phase of large models to the inference phase, where users actively utilize AI services. As AI models evolve into multimodal forms and inference-based AI services spread across industries, the number of concurrent users and the need for faster, more accurate responses are rising dramatically, causing an exponential increase in the number of tokens processed.
While the training stage primarily involved computation on large-scale AI servers, the evolution toward inference requires handling vast numbers of tokens with minimal latency. This is driving efforts to distribute computing workloads across not only AI servers but a variety of infrastructures such as general-purpose servers and edge devices. The KV cache, or key-value cache, which holds intermediate computation results generated during inference, grows proportionally with context length. When HBM alone cannot store all of this data, it is offloaded sequentially to conventional DRAM and SSDs. With AI systems processing multiple user requests in parallel, the combination of growing output tokens and longer context windows is exponentially increasing memory usage during inference. As a result, expansion of the AI inference market is driving demand not only for HBM and high-performance DDR5, but also for enterprise SSDs, signaling a structural shift in both DRAM and NAND demand.
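As a rough illustration of why context length and concurrency drive inference memory usage, the KV-cache size can be estimated with simple arithmetic. The model dimensions below (layers, KV heads, head size) are hypothetical, chosen only to make the numbers concrete:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    """Estimate KV-cache size: one key and one value tensor per layer."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical 70B-class model: 80 layers, 8 KV heads (grouped-query
# attention), head dim 128, fp16 (2 bytes per element).
size = kv_cache_bytes(80, 8, 128, seq_len=32_768, batch=16)
print(f"{size / 2**30:.0f} GiB")  # 160 GiB for this configuration
```

Because the total grows linearly in both context length and concurrent batch size, serving many long-context users quickly exceeds on-package HBM, which is why the overflow is tiered out to DRAM and then to SSD.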
Leading AI companies are now accelerating investments backed by monetization and forming strategic partnerships to support this growth. This trend will lead to further expansion of AI data centers, creating robust demand across a wide range of memory products, from HBM to conventional DRAM and NAND. Meanwhile, the smartphone and PC markets are expected to show moderate growth, reflecting ongoing inflationary and macroeconomic uncertainty. However, as users increasingly experience on-device AI, AI functionality is spreading even to low- and mid-end smartphones, while AI PCs are expected to account for over half of the total PC market. Therefore, we expect content growth will continue to drive memory demand for consumer applications. Reflecting such a demand environment, DRAM demand growth is expected to rise from a high 10% this year to over 20% next year, while NAND demand growth is projected to improve from a mid 10% this year to a high 10% in 2026.
Next, I will discuss the company's plans. In the fourth quarter, we plan to continue to expand sales of HBM, server DRAM, and enterprise SSD. However, considering our normalized inventory levels, we expect both DRAM and NAND bit shipments to increase by a low single digit QoQ. For HBM, we have completed discussions with key customers for next year's HBM supply. Our HBM4, for which we completed development and mass-production preparation in September, not only fully meets customer performance requirements but also supports the highest speed in the industry. We will start HBM4 shipments in Q4 this year, with further expansion planned for 2026, reinforcing our leadership position in the HBM market. For conventional DRAM, we plan to meet increasing customer demand by securing a full lineup of the most advanced 1c-nanometer-based products across the server, mobile, and graphics segments.
Mass production of 1c-nanometer is already proceeding smoothly, and we plan to accelerate migration in 2026 to maintain our technology and cost leadership. For NAND, where demand recovery has been slower, we are deploying the world's highest-layer-count 321-layer technology across various solution products to be ready when market conditions improve. We will also focus on supporting the growing enterprise SSD demand, all the while continuing to operate with a profitability-focused approach. Furthermore, in line with growing demand opportunities in AI servers, we are investing in tech migration to expand supply of both TLC and QLC products based on the 321-layer platform next year. Meanwhile, we have secured customer demand across all DRAM and NAND products, including HBM, through next year.
While we are currently doing our utmost to meet customer demand, AI memory demand is significantly exceeding expectations, and this trend is expected to continue for the foreseeable future. To respond swiftly, we have recently opened the cleanroom ahead of schedule at M15X and begun equipment installation to rapidly secure new capacity. For conventional DRAM and NAND, we will accelerate the transition of existing capacity to advanced nodes to ensure robust responsiveness to rising demand. As a result, our CapEx in 2026 is expected to increase from this year's level. While continuing to maintain CapEx discipline, we will plan our investments in an optimal manner to support market demand. AI technological innovation is heralding fundamental changes across industries and society as a whole.
The memory market is transitioning into a new paradigm with the emergence of HBM, and AI driven demand is now beginning to expand across all product lines. We have led the market from the inflection point of this AI driven industrial transformation, leveraging our HBM competitiveness to deliver differentiated performance. Moving forward, we will further strengthen our technology and product competitiveness and solidify our leadership in the AI memory market by delivering products with the highest quality and performance. With that, we are now ready to take your questions.
The Q&A session will now begin. Please press star and one if you have any questions. Questions will be taken in the order in which you pressed star one. To cancel, please press star and two on your phone. The first question will be provided by Sunwoo Kim from Meritz Securities. Please go ahead with your question.
Thank you very much for taking my question. This is Kim Sunwoo. It was mentioned that the HBM supply negotiations for 2026 have been completed. Could you share more details about the contract? Thank you.
Thank you very much for the question. We understand that there has been broad and deep interest in the HBM contract for next year. This year in particular, it has been challenging to fix not only the supply volume but also the product mix due to various external factors. In addition, there have been changes in the performance requirements for HBM products, which necessitated a longer time than expected to discuss the supply contract. That said, our discussions over the major issues with our clients have been completed, and the HBM supply plan for next year for major customers has been finalized.
Given the explosive growth in demand for HBM to keep building AI infrastructure and the company's product competitiveness, the company's HBM has been selling out since 2023, and the pricing, which I'm sure is the point of interest for many, has also been formed at a level that can sustain the current profitability. And as HBM demand continues to accelerate, driven by longer-term growth trends in the AI market, the company believes that it will be unlikely for supply to catch up with demand in a short period of time.
This means that the pace of HBM's growth will be determined by supply capacity, and the company's HBM is positioned for much higher growth than conventional DRAM products. The company's HBM supply will remain tight compared to demand into 2027, but we will continue to do our best to supply products that meet customers' needs in a timely and secure manner.
The following question will be presented by Jong-wook Lee from Samsung Securities. Please go ahead with your question.
Thank you very much for taking my question, and congratulations on the performance. This is Kim Ki-tae from Samsung Securities. My question is on HBM. It was mentioned that there have been higher performance requirements for HBM4, and my understanding is that they were to be higher than the JEDEC specifications. Have there been any difficulties for the company in meeting such higher performance requirements? And for HBM4E, does the company expect the performance requirements to be similarly high, higher than the JEDEC specifications?
Thank you very much for the question. With the AI inference market growing, memory bandwidth is increasingly seen as the key factor that can upgrade AI performance. As for HBM4, the number of I/Os is already fixed at 2048, double that of HBM3E, so customers are now looking at higher speed as a way to increase HBM bandwidth.
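The relationship between I/O count, per-pin speed, and stack bandwidth mentioned here is simple arithmetic. The per-pin data rates below are assumed round figures for illustration, not numbers from the call:

```python
def stack_bandwidth_gb_s(io_count, gbps_per_pin):
    """Per-stack bandwidth: I/O width x per-pin data rate, bits -> bytes."""
    return io_count * gbps_per_pin / 8

# HBM3E-class stack: 1024 I/Os, ~9.6 Gbps/pin assumed.
# HBM4-class stack:  2048 I/Os, 10 Gbps/pin assumed.
print(stack_bandwidth_gb_s(1024, 9.6))   # 1228.8 GB/s
print(stack_bandwidth_gb_s(2048, 10.0))  # 2560.0 GB/s
```

Doubling the I/O count alone roughly doubles bandwidth at the same pin speed, which is why, with the width fixed at 2048, customers then push per-pin speed for further gains.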
Based on our number-one technological competitiveness in HBM, the company is already fulfilling the top-level specifications required by our customers. Moreover, we have sampled products that meet customers' upgraded requirements faster than anyone in the industry and have already started production for mass supply. And as explained earlier, with intensifying competition for AI chip performance, the memory wall phenomenon, where memory performance becomes the bottleneck for technological development, grows more pronounced. As such, performance requirements for next-generation memory products, including HBM, will continue to be upgraded.
With industry-leading design capability and the know-how of a primary supplier, the company will respond to customers' requirements in a timely manner for our next-generation product line as well and maintain our number-one supplier position. Thank you.
The following question will be presented by Simon Woo from Bank of America. Please go ahead with your question.
Thank you very much. This is Woo Dong-Jae from Bank of America, and congratulations on the recent performance, KRW 10 trillion in operating profit in a quarter. My question is about the memory cycle. What we have seen in the past is that a boom in the memory cycle is usually followed by a downturn, so I wonder whether the company sees any similarities between the recent memory boom and the historic cycles of the past. It appears as if the recent memory boom is also driving up demand for conventional memory. And in terms of the inventory level, there was also a cloud-driven boom some time ago. How does the company compare the inventory level from then and today?
Thank you very much for the question. It is true that the memory market this year has entered what can be called a super boom cycle, with a surge in demand across all products, unlike earlier expectations. Such changes have appeared only recently, but the company sees this cycle as a bit different from the super cycle we witnessed in 2017 and 2018.
The biggest difference is that the current demand is driven by a much broader range of applications coming from the shift to the AI paradigm. AI creates upside to overall demand as it is added on top of existing applications. At the same time, for the longer term, it is also creating new applications like autonomous driving and robotics AI. So what we are seeing is a fundamental shift in memory demand driven by AI. In particular, computing has recently expanded to inference, promoting demand for not just AI servers but general-purpose servers as well.
The company believes that total server set shipments next year will grow at a high-10% level, with server DRAM driving the overall demand for conventional DRAM. At the same time, on the supply side, production can only grow so much, even with more cleanroom space and capacity, because of the growing share of HBM. These circumstances create a structural constraint against supply increase in the DRAM industry and are likely to serve as the driver for a long-drawn-out memory super cycle.
The following question will be presented by Young Ho Ryu from NH Investment & Securities. Please go ahead with your question.
First of all, congratulations on the performance, and thank you for taking my question. My question is on NAND. The recently strong demand for eSSD was said to be a structural change following the advent of the AI era. Could you elaborate on the rationale behind that assessment?
Allow me to explain in more detail the background to the higher NAND demand we are seeing recently. First, there is stronger build demand for both AI servers and general-purpose servers, with our server customers expanding their investment in AI, which in turn is driving demand for TLC products. At the same time, demand for storage is also accelerating as a result of growth in AI-generated data like images and videos, leading to an HDD supply shortage. For hyperscaler customers with high dependence on HDD, the recent developments have prompted them to turn instead to eSSDs based on high-capacity TLC.
Having said that, the company sees the recent change in demand as something that goes beyond short-term supply and demand issues; we see it as a change that can structurally increase eSSD demand. First, with ever-advancing AI inference, the importance of RAG, or the Retrieval-Augmented Generation structure, is becoming even greater as a way to overcome the limitations of existing LLMs.
RAG moves beyond the current LLM approach, which generates responses based solely on the data the model was trained on. It searches related documents in external databases and generates the final response based on that search, which allows it to refer to the latest data as well as user-specific data, resulting in responses with much greater accuracy. To apply RAG to an LLM, we additionally need to build external databases that express and store data as vectors; in other words, we need vector databases.
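The retrieval step just described can be sketched minimally as below; the documents, embeddings, and three-dimensional vectors are toy stand-ins for a real vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, store, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Toy vector store; at scale such an index lives on high-capacity eSSD.
store = [
    ("Q3 earnings summary", (0.9, 0.1, 0.0)),
    ("cafeteria menu",      (0.0, 1.0, 0.0)),
    ("HBM supply update",   (0.8, 0.0, 0.2)),
]
print(retrieve((1.0, 0.0, 0.0), store))  # the two finance-related docs rank first
```

The retrieved documents are then passed to the LLM as additional context, which is what lets the model answer from data it was never trained on.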
And this is where eSSD becomes a must to enable speedy search of data. To support the scaling-up of vector databases and performance upgrades in RAG, demand for storage based on high-performance TLC and high-density QLC eSSD is expected to rise. In addition, there has been a spike in the data processing needed for inference, which has led to the need to offload part of the key-value cache generated at the GPU level to lower-layer memory.
This is because data processing and power consumption surge during the inference process, which calls for more efficient operation of the AI system. By offloading the key-value cache generated at the GPU all the way down to the SSD, depending on the frequency of data usage, AI systems can increase throughput per unit of power when providing inference to many users and reduce the response time per user. This is one of the reasons why the use of high-performance TLC eSSD is expected to grow.
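The tiering just described, hot cache entries in fast memory and colder ones spilled downward, can be sketched with a toy LRU hierarchy. The tier names and slot counts are illustrative only; a real serving stack would track bytes rather than entry counts:

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy three-tier cache: fast tier (standing in for HBM), middle
    tier (DRAM), and an overflow tier (SSD) treated as unbounded."""

    def __init__(self, hbm_slots, dram_slots):
        self.hbm = OrderedDict()   # fastest tier, LRU-ordered
        self.dram = OrderedDict()  # middle tier, LRU-ordered
        self.ssd = {}              # capacity treated as unbounded here
        self.hbm_slots = hbm_slots
        self.dram_slots = dram_slots

    def put(self, key, value):
        self.hbm[key] = value
        self.hbm.move_to_end(key)  # mark as most recently used
        self._spill()

    def get(self, key):
        for tier in (self.hbm, self.dram, self.ssd):
            if key in tier:
                value = tier.pop(key)
                self.put(key, value)  # promote back to the fast tier
                return value
        return None

    def _spill(self):
        # Evict least-recently-used entries downward, tier by tier.
        while len(self.hbm) > self.hbm_slots:
            k, v = self.hbm.popitem(last=False)
            self.dram[k] = v
        while len(self.dram) > self.dram_slots:
            k, v = self.dram.popitem(last=False)
            self.ssd[k] = v
```

Entries touched by `get` are promoted back to the fast tier, mirroring how frequently reused KV segments stay in HBM while cold context spills toward SSD.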
As AI utilization keeps spreading, so will the role of the eSSD, resulting in higher content growth. So essentially, what we are seeing now is the benefit of AI infrastructure spreading from DRAM to NAND as well.
The following question will be presented by Min-sook Chae from Korea Investment & Securities. Please go ahead with your question.
Thank you for taking my question. The memory market appears to be changing into a specialty market with an order-first, sell-later approach thanks to AI. In the field, then, does the company see any differences from the past in your discussions or interactions with customers?
Yes, it is true that in the memory market, some businesses have shifted to an order-first, produce-later approach with the emergence of HBM, marked by massive investment and long lead times. In addition, with strong HBM demand coming from our customers, the company was able to secure visibility into customers' demand from the contracting stage with long-term agreements and respond with consistency. For both the memory industry and the company, this has led to greater market predictability and business stability than in the past, when the market was much more volatile. And custom HBM will gradually increase starting with HBM4E.
These products will be developed in close collaboration with customers from the early design stage of the customer's GPU or ASIC products, unlike existing standardized HBM. This will lead to much longer-term and more strategic transactions between customers and a small number of suppliers, contributing even more to business stability and profitability improvement on the part of memory suppliers.
If I may add, memory companies are allocating capacity to ramp up HBM supply, which has constrained conventional memory supply, resulting in a shortage of conventional memory even as its demand grows.
As a result, we are seeing an increase in customers who want to sign long-term agreements for conventional memory products as well. Some customers are actively responding to the current supply shortage by issuing pre-purchase POs for 2026. Given customers' demand and the company's capacity, not only HBM but also DRAM and NAND capacity for next year has essentially been sold out.
The company will try to respond to customers' demand with an optimal production and sales strategy. And as mentioned earlier, we will keep discussing the implications of the HBM-driven changes with our customers.
The following question will be presented by Ricky Seo from HSBC. Please go ahead with your question.
Congratulations on the performance. My question is on CapEx. Recently, the investments by global AI companies point to a very high level of investment needed over the next few years to fulfill memory demand. It was mentioned that the company's CapEx in 2026 will increase over this year; what will be the extent of the increase? Also, for the longer term, it appears that the investment, or CapEx, in new campuses like Yongin and others will have to be far higher than it appeared a year ago.
As global AI companies competitively expand investment with conviction in the growth and monetization of the AI market, there has been accelerated growth in demand for a wide range of memory products, including HBM, DDR5, and enterprise SSD. To respond to such surging demand, CapEx growth across the memory industry appears to be inevitable, and for the company, CapEx next year will far outpace this year's level. For M15X, equipment installation has begun in earnest so the fab can be used to ramp up HBM supply.
For conventional DRAM and NAND, we will accelerate tech migration in the existing capacity as a way of responding to demand. At the same time, considering the Fab 1 construction in Yongin and preparation for construction of an advanced packaging plant in Indiana, U.S., investment in infrastructure is set to keep growing next year. But even with growing CapEx, the company will stick to its CapEx discipline and maintain a stable financial structure.
The following question will be presented by Bo-young Choi from Kyobo Securities. Please go ahead with your question.
Thank you for taking my question. My questions are on product and technology. First, it was mentioned that there will be conversion to 1c-nanometer next year, for which there is a lineup across all products, as well as an increase in the portion of 321-layer NAND products. What is the timeline for the ramp-up of each product? And what is the expected portion of the respective products by the end of next year?
Thank you for the questions. Under the principle that we respond with priority to demand with high visibility and profitability, our new capacity next year will center on HBM, for which the supply contract has already been completed. For DRAM and NAND, we plan to respond to demand through tech migration in the existing capacity. For DRAM, 1c-nanometer development was completed last year, and mass production began this year.
The ramp-up will begin in full swing next year, and 1c-nanometer is planned to take up over half of our conventional DRAM capacity in Korea by the end of next year. Based on the 1c-nanometer process, with the best performance and cost competitiveness, we will build up lineups for all products, including DDR5, LPDDR5, and graphics DRAM, to respond to customers' demand in time and ensure profitability. In the case of NAND, our focus remains on improving profitability, and the plan is to keep doing so through tech migration rather than ramping up capacity.
That has been the case this year, with tech migration from 176-layer to 238-layer, then to 321-layer. Next year, we will grow our supply not only in TLC but also in QLC, which will require a ramp-up of 321-layer products. This means we are preparing for 321-layer products to take up more than half of our NAND bit production by the end of next year.
The following question will be presented by Dong-Hee Han from SK Securities. Please go ahead with your question.
Thank you for taking my question. My question is on the inventory level. There was considerable inventory sell-down in the second quarter, and inventory appears to be much lower again in the third quarter due to the very strong demand. What is the company's inventory level now, and what about inventory among customers?
With customers' demand outpacing expectations in the previous quarter, there had been concerns over excessive inventory buildup in the memory supply chain and a consequent demand slowdown. However, customers' inventory levels have become lower overall with accelerated set builds, and on top of that, investment in AI infrastructure has continued to grow, resulting in noticeably lower memory inventory among server customers. For the company, the inventory level has also fallen QoQ in both DRAM and NAND as a result of the recent strength in memory demand.
This is particularly true for DRAM inventory, which remains extremely low, so much so that in the case of DDR5, products must be shipped to customers straight out of production to ensure a timely response. The company will continue to try to maintain a healthy inventory level for both DRAM and NAND to seamlessly respond to customers' demand.
The following question will be presented by Nicolas Gaudois from UBS. Please go ahead with your question.
Good morning. Thanks for taking my questions. Regarding M15X, which you mentioned you opened early, could you first address the faster ramp-up, that is, the pulling-in of your equipment delivery schedule? And in that regard, could you more or less complete full equipment installation for the total capacity of M15X by the end of 2026? Then, is it possible to pull in the schedule as well for Yongin Fab 1 cleanroom readiness, which I think you had initially pinned down for May 2027? Thank you.
Thank you very much for the questions, and allow me to respond to the question about the company's fab plans. The company decided at the end of 2023 to make a new investment in M15X to preempt the fast-rising demand for HBM, which requires relatively larger wafer capacity. After around two years of construction, M15X finally opened early a while ago, and equipment installation has started. We are now preparing for M15X to contribute to the HBM production ramp-up starting next year.
And as memory demand growth continues to accelerate much faster than expected, we are also speedily moving ahead with the capacity ramp-up at M15X. As for Fab 1 in Yongin, which just started construction this year, we are working to pull up the schedule in light of the pace of demand growth and the earlier ramp-up at M15X. The company will keep trying to preempt capacity and fab space by building state-of-the-art production infrastructure, from M15X to the Yongin fab, to enable a flexible response to the ever-growing AI memory demand.
The following question will be presented by SK Kim from Daiwa Securities. Please go ahead with your question.
Thank you very much for taking my question. It is on demand. There have been a series of announcements of GPU and ASIC supply cooperation between Big Techs and AI companies, fueling expectations of further AI market growth. Against this backdrop, what is the company's outlook on HBM demand growth, as well as on a broadening of the customer base?
Thank you for the question. With upward adjustments in Big Techs' CapEx and increased investment by AI companies, the HBM market, even by a conservative estimate, will keep growing at an average of over 30% annually for the next five years.
I would point to our recent LOI with OpenAI for large-scale DRAM supply as an example of the very strong market demand for AI, as well as of the need to secure HBM-based AI memory above all else when developing AI technology.
As the primary supplier for many customers across not only GPU but also ASIC products, we are working with customers on the development of next-generation products and contributing to the development of the AI industry based on our differentiated product competitiveness and mutual trust. Thus, we are positioned to maintain a high share of newly arising HBM demand across a broad range of customers and to keep increasing our supply.
The following question will be presented by Jay Kwon from JP Morgan. Please go ahead with your question.
[Foreign language]. Now, my question is on DRAM. The DRAM spot price is rising almost daily, and with the DDR4 and DDR5, almost in supply shortage as seen through the price premium. Of course, I'm sure that the company's contract price with the customers would be different from the spot price, but if the DRAM price maintains the current trend, it appears likely to reach the kind of profit margin similar to HBM. So what is the company's view on the DRAM profitability?
And if conventional DRAM profitability temporarily surpasses that of HBM, is it foreseeable that the capacity mix would shift somewhat away from HBM toward conventional DRAM?
It is true that the recent rise in DRAM prices has narrowed the profitability gap between HBM and conventional DRAM, but for the company, HBM's profitability remains high. If supply remains tight next year, DRAM margins could rise closer to HBM's, but the company does not plan to immediately adjust the capacity mix based on what could be a short-lived change in profitability.
Given the nature of HBM products, it is important to agree on long-term volumes with customers to ensure seamless supply. And when we discuss long-term volumes with customers, we also consider various factors such as customer relations, long-term growth potential, and profitability.
Having said that, there have also been discussions about stronger binding contracts for conventional memory products as well, with customers issuing pre-purchase POs or asking for multi-year LTAs, and our decisions on capacity mix will be made in a way that ensures optimal productivity. The company also sees the current trend as potentially prompting changes in the nature of the memory business going forward. As the leading supplier of AI memory, the company has been able to improve its fundamentals in the memory business based on the high and stable profitability of HBM and has achieved differentiated performance.
Looking ahead, we will keep responding to customer demand with a long-term view and achieve sustained growth along with the AI market.
The last question will be presented by Peter Lee from Citigroup. Please go ahead with your question.
As was explained several times, on the back of favorable market conditions and growth in the AI market, the company has delivered strong performance, which is expected to continue for some time. As a result, the company turned to a net cash position this quarter, and its free cash flow is expected to keep improving on the back of much stronger performance. I realize this may be a bit early, but can we expect any changes to the shareholder return policy that was announced earlier this year?
Yes, it is true that the company's financial soundness is improving rapidly thanks to stronger-than-expected performance in 2025, and we achieved a net cash position in Q3 following the higher recovery of receivables from sales growth in Q2. As explained in the current shareholder return policy, the company's aim regarding financial soundness is to maintain an appropriate level of cash that allows us to manage the business stably through industry cycles and to execute the CapEx necessary to maintain our competitiveness.
The recent upturn in the memory market has fueled demand growth, which in turn is driving up the CapEx needed to meet that demand. This means that what we see as an appropriate level of cash also has to reflect this change. Furthermore, considering the huge growth potential of the AI memory market and the company's high return on investment, I believe shareholders will agree that, for now, the best use of cash is to reinvest it in our business while maintaining CapEx discipline.
We apologize, but we are experiencing a brief technical issue. Please stand by.
As such, being in the first year of the new shareholder return policy, which is announced at three-year intervals, we are not looking into additional shareholder returns at this time. However, we will keep exploring how to maximize shareholder return by taking a comprehensive look at changes in the environment, both internal and external, such as the market outlook and investment needs.
Thank you very much, and that concludes the SK hynix 2025 third quarter earnings release conference call.