SK Hynix to ramp up 1c DRAM production 8-fold in 2026

SK Hynix’s DRAM plant in Icheon

SK Hynix Inc., a crucial supplier of high-bandwidth memory to Nvidia Corp.’s artificial-intelligence processors, is set to boost output of its sixth-generation 10-nanometer DRAM, known as 1c DRAM, by roughly eightfold next year, as the AI industry’s center of gravity moves beyond training and toward large-scale inferencing.

According to industry sources on Thursday, the South Korean memory giant is planning a major capacity overhaul at its Icheon campus south of Seoul, converting existing lines to add at least 140,000 wafers a month dedicated to 1c DRAM production.

“SK Hynix is even mulling expanding capacity to between 160,000 and 170,000 wafers a month,” said a person familiar with the matter.

With its total DRAM output averaging around 500,000 wafers a month, roughly 28% to 34% of its capacity could soon be allocated to 1c DRAM, depending on the final scale of the conversion. Initial output of about 20,000 wafers is expected by year-end.
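As a rough sanity check on the figures cited in this article (the roughly 500,000-wafer monthly base, the planned 140,000-wafer conversion, and the mulled 160,000–170,000-wafer upper range), the capacity share works out as follows:

```python
# Capacity-share arithmetic using the wafer counts reported above.
total_dram_wafers = 500_000  # approximate total monthly DRAM output

# Planned conversion and the upper range under consideration.
for wafers in (140_000, 160_000, 170_000):
    share = wafers / total_dram_wafers
    print(f"{wafers:,} wafers/month -> {share:.0%} of capacity")

# Prints:
# 140,000 wafers/month -> 28% of capacity
# 160,000 wafers/month -> 32% of capacity
# 170,000 wafers/month -> 34% of capacity
```

The often-quoted "roughly 32%" figure corresponds to the 160,000-wafer scenario; the committed 140,000-wafer plan alone would put the share at about 28%.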

At the same time, the company plans to ramp up production of 1b DRAM, the previous generation used in its HBM4 chips set to be delivered to Nvidia, at its new M15X fab in Cheongju.

Industry analysts forecast that SK Hynix’s capital spending could surpass 30 trillion won ($20.4 billion) next year, outstripping this year’s estimated 25 trillion won, as the chipmaker races to reinforce its lead in both premium HBM and advanced commodity DRAM.

A NEW BALANCE IN AI MEMORY DEMAND

SK Hynix’s renewed emphasis on cutting-edge general-purpose DRAM reflects a broader shift in the AI hardware market.

High-bandwidth memory has dominated headlines as the key enabler of large-model training, but inferencing (running those models in real time at massive scale) leans more on conventional DRAM with high power efficiency and favorable cost profiles.

That dynamic is drawing fresh attention to double data rate 5 (DDR5), low-power double data rate (LPDDR) and graphics double data rate (GDDR)-class chips, which SK Hynix will build on its 1c node.

1c DRAM has become a linchpin in the AI memory stack, serving both as the core die for sixth-generation HBM, or HBM4, and as the high-density, low-power standard shaping advanced server and mobile memory.

Samsung Electronics Co. also said its own 1c DRAM has passed internal qualification for HBM4, signaling a parallel bet on the same node.

The shift is already visible in new product designs.

Nvidia’s upcoming Rubin CPX accelerators will use GDDR rather than HBM, underscoring that top-tier AI systems are no longer built around a single memory architecture.

Major cloud providers, including Google, OpenAI and Amazon Web Services, are also designing custom AI chips that rely on large pools of conventional DRAM in proprietary configurations.

SK Hynix is positioning 1c DRAM at the center of this transition.

The chip underpins the next generation of graphics memory, including GDDR7, and will feature in Nvidia’s small outline compression attached memory module 2 (SOCAMM2), an emerging standard for AI servers and PCs that trades some bandwidth for notably better power efficiency.

Analysts expect SK Hynix to supply 1c DRAM for SOCAMM2 once Nvidia’s Grace-based systems ramp.

The Korean chip giant has already improved yields on 1c DRAM to above 80%, giving it a cost advantage as demand accelerates.

Executives believe the combination of high-end HBM for training and advanced DRAM for inferencing could usher in a multi-year “AI memory supercycle,” with both products contributing to volume growth.

Overnight, Nvidia, its largest customer and the world’s most valuable AI chip company, reported stronger-than-expected third-quarter earnings, easing concerns over an AI bubble and reinforcing expectations of a prolonged upcycle for memory manufacturers.
