
Samsung’s Tiny Recursive Model (TRM): The 7M-Parameter AI Outsmarting Giants Like Gemini and o3-mini

Samsung’s Tiny Recursive Model (TRM), a 7M-parameter AI from its Montreal lab, challenges giants like Gemini and o3-mini by proving that recursion can beat scale.



In the relentless pursuit of smarter AI, where bigger often means better, Samsung’s AI lab in Montreal has thrown a curveball with the Tiny Recursive Model, or TRM.
This featherweight seven-million-parameter network crushes complex reasoning benchmarks that stump behemoths like Google’s Gemini 2.5 Pro, OpenAI’s o3-mini, and DeepSeek R1.

Unveiled in a paper titled “Less is More: Recursive Reasoning with Tiny Networks” on October 8, 2025, TRM is not a typical large language model. It is a lean, two-layer recursive reasoner built for puzzles that demand logic and adaptability rather than memorized data.
At roughly 0.01 percent of the size of its rivals, TRM achieves state-of-the-art scores on the Abstraction and Reasoning Corpus (ARC-AGI), reaching 44.6 percent on ARC-AGI-1 and 7.8 percent on ARC-AGI-2, outperforming models 10,000 times larger.

Trained on only one thousand examples, TRM defies the “scale is king” mantra dominating AI, proving that recursion and minimalism can unlock human-like reasoning without billion-dollar GPU farms. Created by Senior AI Researcher Alexia Jolicoeur-Martineau at Samsung SAIL Montréal, TRM is fully open source on GitHub, embodying the “less is more” philosophy.


Origins: Born from a Rebellion Against the “Bigger Is Better” Dogma

The story of TRM starts in the pressure cooker of AI’s scale obsession. Since GPT-3’s 2020 debut, the field has equated parameter count with intelligence.
But as Jolicoeur-Martineau noted in her October 8 X post, “The idea that one must rely on massive foundational models trained for millions of dollars by big corporations to solve hard tasks is a trap.”

Frustrated by LLM brittleness on abstract reasoning, the Samsung SAIL Montréal team set out to prove that small, specialized networks could excel where giants stumble.
TRM builds on the 2024 Hierarchical Reasoning Model (HRM), a 27-million-parameter predecessor. By cutting it down to two layers and seven million parameters, and training on just a thousand reasoning tasks, the team produced a model that is both simpler and more accurate than its ancestor.

Released on GitHub with full code, weights, and data, TRM reflects Samsung’s broader vision: efficient, sustainable, on-device AI for the Galaxy S26 and beyond.
Its manifesto is simple yet radical: “Depth of reasoning trumps depth of parameters.”


How TRM Works: Rethinking Intelligence Through Recursion

TRM does not try to be clever in one shot like a traditional language model. Instead, it learns by trying, evaluating, and improving, a process the paper calls recursive reasoning.

Imagine solving a Sudoku puzzle.
Most people do not find the solution instantly. They start with a guess, check whether it works, spot the mistakes, and make corrections. The cycle might repeat five or ten times until the grid is correct. TRM works the same way.

When faced with a reasoning task, TRM makes an initial attempt. Then, using a built-in evaluator, it checks how well its output matches the rules or expected outcome. If the result is not good enough, TRM revises the answer and tries again. Each step improves upon the last until it either succeeds or hits a limit on how many tries it can make.

This recursive loop gives TRM a powerful advantage. Where traditional models often fail after a single wrong step, TRM can pause, reflect, and retry, and that ability is what makes it both powerful and efficient.
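
To make the loop concrete, here is a minimal PyTorch sketch of that draft, evaluate, revise cycle. The layer sizes and names (core, revise, halt) are illustrative assumptions, not Samsung’s actual architecture, which operates on tokenized puzzle grids; the point is the structure: one tiny network applied over and over, with a learned signal for when to stop.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of TRM-style recursive refinement (names and shapes
# are illustrative, not taken from Samsung's released code).
class TinyRecursiveReasoner(nn.Module):
    def __init__(self, dim=64, inner_steps=6, outer_steps=3):
        super().__init__()
        self.inner_steps = inner_steps   # latent "thinking" updates per revision
        self.outer_steps = outer_steps   # maximum number of answer revisions
        # One tiny two-layer network reused at every step; depth comes from the loop.
        self.core = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.revise = nn.Linear(2 * dim, dim)  # new draft from (draft, state)
        self.halt = nn.Linear(dim, 1)          # scores whether the draft is good enough

    def forward(self, x):
        y = torch.zeros_like(x)  # current draft answer
        z = torch.zeros_like(x)  # latent scratchpad that accumulates "reasoning"
        for _ in range(self.outer_steps):
            # Think: refine the latent state from the question, draft, and prior state.
            for _ in range(self.inner_steps):
                z = self.core(torch.cat([x, y, z], dim=-1))
            # Revise: produce an improved draft from the updated state.
            y = self.revise(torch.cat([y, z], dim=-1))
            # Halt early once the learned evaluator is confident in the draft.
            if torch.sigmoid(self.halt(z)).mean() > 0.9:
                break
        return y

model = TinyRecursiveReasoner()
answer = model(torch.randn(1, 64))  # one call runs several internal tries
```

The key design point is that depth comes from the loop, not the stack of layers: a two-layer core applied eighteen times acts like a much deeper network at a fraction of the parameter cost.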

This strategy allowed TRM to score 87.4 percent on the Sudoku Extreme benchmark, outperforming older models that relied on one-pass logic.
As Alexia Jolicoeur-Martineau explains, “Recursion lets tiny networks think deeper than giants can chain.”


Benchmark Triumphs: Crushing Giants on Reasoning’s Toughest Turf

TRM’s benchmark results are startling.

  • ARC-AGI-1: 44.6 percent accuracy, beating HRM’s 40.3 percent and surpassing Gemini 2.5 Pro’s 37.0 percent.
  • ARC-AGI-2: 7.8 percent accuracy, topping o3-mini’s 3.0 percent and DeepSeek R1’s 1.3 percent.
  • Sudoku Extreme: 87.4 percent, with maze solving at 78 percent.

Its efficiency is even more striking. Training cost around ten thousand dollars, using a single A100 GPU for a few days.
That’s a fraction of GPT-5’s reported one-hundred-million-dollar training budget.

VentureBeat called TRM “a wake up call for scale obsessed labs,” while Forbes praised it as “a sustainability milestone.”
Skeptics like Yann LeCun dismissed ARC as “toy puzzles,” yet TRM’s open source code makes verification easy, and forks already adapt it for vision tasks.


Why TRM Matters: Challenging the Scale Myth and Opening Doors

TRM emerges during AI’s “scale crisis,” where compute and energy costs rise faster than innovation.
Samsung’s “less is more” approach turns that narrative on its head, showing that recursion can rival brute scale.
Running TRM on a smartphone NPU hints at a future where reasoning happens privately and locally.

For enterprises, it means cheaper AI that solves logic tasks without clusters of GPUs.
As Wccftech put it, “TRM beats Gemini 2.5 Pro on ARC AGI, pointing to efficient AGI paths.”
Its democratizing potential is immense: trainable on a laptop, transparent by design, and extendable by the community.

In an era of trillion parameter races, TRM whispers a counter truth: intelligence is not size, it is structure.


Final Thoughts

Samsung’s Tiny Recursive Model, unveiled on October 8, 2025, is a seven-million-parameter marvel that outsmarts models ten thousand times its size on reasoning benchmarks.
Its recursive logic proves that small can be mighty, sparking a rethink in AI’s race for scale.
Open source, efficient, and built for sustainability, TRM signals the dawn of intelligent minimalism.

Curious about how we even measure AGI progress? You might enjoy our deep dive in AGI Demystified, where we break down what intelligence really means in machines.

If you're fascinated by lightweight models running locally, our Ollama Web Search feature shows how live search can make even 3B models surprisingly capable.

Explore the TRM paper here.
