
Samsung Electronics Co. and SK Hynix Inc. are said to be developing prototypes of the sixth-generation high-bandwidth memory (HBM4) chip for Tesla Inc., which has joined its US big tech peers in a race to develop its own artificial intelligence chips, according to sources in the semiconductor industry on Tuesday.
A slew of next-generation HBM chip orders received by the South Korean cross-town memory rivals suggests that the AI-driven HBM boom will continue through next year.
Industry sources said the US EV giant has asked the Korean chip duo to supply HBM4 chips for general use, and Tesla is expected to choose one of the two companies as its HBM4 supplier after testing their samples.
The Korean chipmakers have been developing customized HBM4 chips for US big tech companies such as Google LLC, Meta Platforms Inc. and Microsoft Corp., which have been seeking to lower their reliance on Nvidia Corp.’s AI chips.
Joining those big tech companies, Tesla is expected to use the next-generation HBM chip to enhance its AI chip capabilities.

Tesla operates Dojo, its custom-built supercomputer designed to train its “Full Self-Driving” neural networks. Dojo is also expected to be the cornerstone of Tesla’s AI ambitions beyond self-driving.
HBM chips are among the key components for running the supercomputer, which trains AI models on massive datasets, and Tesla is expected to use the sixth-generation HBM chip in Dojo, which is also powered by its own D1 AI chip.
The HBM4 chip could also be used in Tesla’s AI data centers under development and in its self-driving cars, which are currently fitted with HBM2E chips for pilot programs.
WHY HBM4?
More advanced HBM chips can further improve efficiency in processing massive datasets and training AI models.
The sixth-generation HBM chip is expected to perform significantly better than its predecessors, which were built with a base die method that connects the bottom layer of an HBM stack to the graphics processing unit (GPU).

HBM4 instead uses a logic die, which sits at the base of the stack of dies and serves as the core component of the chip.
According to SK Hynix, the HBM4 chip delivers 1.4 times the bandwidth of its fifth-generation predecessor, the HBM3E, while consuming about 30% less power.
Given that the HBM3E delivers a bandwidth of 1.18 terabytes per second (TB/s), the HBM4’s bandwidth is expected to top 1.65 TB/s. The newer model’s supply voltage (VDD) is also expected to drop to 0.8 V from 1.1 V.
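As a rough check of those figures: 1.18 TB/s × 1.4 ≈ 1.65 TB/s, in line with the projected bandwidth.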
FIERCE HBM4 BATTLE
The world’s two biggest memory chipmakers, Samsung and SK Hynix, are going all-out to take the lead in the HBM4 market, which is expected to take off later next year.
The HBM market is forecast to grow to $33 billion in 2027 from $4 billion in 2023, according to Morgan Stanley.
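That forecast implies a compound annual growth rate of roughly 70% over the four years, since 4 × 1.7^4 ≈ 33.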
The HBM market is currently led by SK Hynix, a major HBM chip supplier to Nvidia, the AI chip giant that controls more than 90% of the global AI chip market.

To catch up to SK Hynix, its bigger memory rival Samsung Electronics has even partnered with its foundry archrival Taiwan Semiconductor Manufacturing Company Ltd. (TSMC); under the agreement, TSMC will manufacture base dies for Samsung’s HBM4 chips at the request of Samsung’s customers.
Samsung Electronics currently promotes turnkey orders for HBM chips, covering everything from memory architecture design to production and foundry services.
If Samsung bags an HBM4 order from Tesla after quality tests, it would gain a chance to turn the tide in the global HBM market.
But SK Hynix is also expected to accelerate its HBM4 development to win orders from Tesla, whose AI ambitions run high, a move that would cement its market leadership.
SK Hynix has been actively seeking to develop automotive HBM chips, which are considered one of the next-generation memory products.
By Chae-Yeon Kim
why29@hankyung.com
Sookyung Seo edited this article.