
LG AI Research, the artificial intelligence (AI) research and development unit of South Korea’s LG Group, has unveiled Exaone Deep, an open-source AI model designed to rival industry leaders in reasoning.
The company on Tuesday announced that its AI reasoning model outperformed global rivals such as OpenAI’s GPT models, Google DeepMind’s Gemini and China’s DeepSeek in scientific comprehension and mathematical logic.
The Korean AI model, which builds on LG’s large language model Exaone 3.5, demonstrates outstanding performance in complex problem-solving, with a strong focus on long-context understanding and instruction-following accuracy, LG said.
Only a handful of companies with foundation models – OpenAI, Google, DeepSeek and Alibaba – have developed AI inference models.
Machine learning (ML) is the process of using training data and algorithms to enable an AI model to imitate the way humans learn.
AI inference is a model’s ability to apply what it has learned through ML to decide, predict or draw conclusions from information it has never seen before.

LG pins high hopes on Exaone Deep, Korea’s first AI inference model that can compete neck and neck with those of its global big tech rivals, the company said.
With the new AI inferencing model, the Korean tech giant is gearing up for the era of Agentic AI, where AI independently formulates hypotheses, verifies them and autonomously makes decisions without human instructions.
PERFORMANCE PROWESS
All three Exaone Deep models – Exaone Deep 32B with 32 billion parameters, Exaone Deep 7.8B with 7.8 billion parameters and Exaone Deep 2.4B with 2.4 billion parameters – beat their key rivals in mathematical reasoning.
The Exaone Deep 32B demonstrated performance equivalent to that of its competing model, DeepSeek-R1, at about 5% of its size – DeepSeek-R1 has 671 billion parameters.
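The size comparison cited above can be checked with quick arithmetic using the parameter counts reported in the article:

```python
# Size comparison reported in the article:
# Exaone Deep 32B (32 billion parameters) vs. DeepSeek-R1 (671 billion parameters).
exaone_params = 32_000_000_000
deepseek_params = 671_000_000_000

ratio = exaone_params / deepseek_params
print(f"Exaone Deep 32B is {ratio:.1%} the size of DeepSeek-R1")
# Works out to about 4.8%, in line with the roughly 5% figure cited.
```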
It scored a sector-high 94.5 points on the mathematics section of Korea’s college entrance exam, the CSAT, and 90.0 points on the 2024 American Invitational Mathematics Examination (AIME).
On AIME 2025, it scored on par with the DeepSeek-R1 model.

The Exaone Deep 7.8B and 2.4B models also ranked first in all major benchmarks within the lightweight model and on-device model categories, respectively.
All three models also scored well in tests evaluating PhD-level problem-solving abilities in physics, chemistry and biology. In particular, the Exaone Deep 7.8B and 2.4B models achieved the highest performance in the lightweight and on-device categories, outpacing global peers such as OpenAI’s o1-mini.
Exaone Deep also proved its strength in coding and problem-solving, underscoring its potential for application in software development, automation and other technical fields that require high levels of computational accuracy.
It also excelled in general language understanding, recording the highest massive multitask language understanding (MMLU) score among Korean models.
Exaone Deep’s evaluation results are posted on Hugging Face, a global open-source AI platform.
RACE TO DOMINATE NEXT-GEN AI INFERENCE

The unveiling of Exaone Deep marks LG’s deepening commitment to AI-driven innovation, as competition intensifies in the global race for highly capable, domain-specialized models.
Cost-effective models in particular are garnering attention following the rise of China’s DeepSeek and its reasoning abilities.
LG AI Research has trained Exaone Deep with quality datasets while scaling down its parameters.
It uses domain-specific data from LG’s affiliates and only select public data to improve accuracy in reasoning, the company said.
LG will present its three new reasoning AI models – Exaone Deep 32B, Exaone Deep 7.8B and Exaone Deep 2.4B – at GTC 2025, Nvidia Corp.’s AI conference for developers, held March 17-21 in San Jose, California.
Last summer, LG AI Research, an institute of the group’s holding company LG Corp., unveiled Exaone 3.0, an improved version of its predecessor showcased in July 2023.
The group revealed the first version of Exaone in December 2021.
By Eui-Myung Park
uimyung@hankyung.com
Sookyung Seo edited this article.