A Country of AI Savants in a Data Center

March 22, 2026 · lukasz

My high-level view on Deep Learning models, including transformer-based LLMs, has always been that they are sophisticated statistical pattern-matching machines, implementing a superhuman analogue to the heuristic System 1 of human cognition (as per Kahneman’s “Thinking, Fast and Slow”). Like most of Machine Learning, LLMs sit squarely on level 1 of Judea Pearl’s ladder of causation – reasoning by association. Can they climb the ladder to intervention (level 2) and counterfactuals (level 3)? Can a robust analytical System 2 emerge, given enough data and the right training techniques?
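To make the rungs concrete, here is a minimal toy sketch (my own illustration, not from Pearl's book): a hidden confounder makes the observational association between X and Y (level 1) systematically overstate the causal effect of X on Y, which only an intervention, do(X), recovers (level 2).

```python
# Toy illustration of Pearl's level 1 (association) vs level 2 (intervention).
# A hidden confounder Z drives both X and Y, so the observed association
# between X and Y overstates the true causal effect of X on Y.

import random

random.seed(0)
N = 100_000

def observe():
    """Level 1: sample from the world as it is; Z confounds X and Y."""
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 1)                   # Z pushes X around
    y = 0.5 * x + 2.0 * z + random.gauss(0, 1)   # true causal effect of X is 0.5
    return x, y

def intervene(x_val):
    """Level 2: do(X = x_val) severs the Z -> X arrow; Z still affects Y."""
    z = random.gauss(0, 1)
    return 0.5 * x_val + 2.0 * z + random.gauss(0, 1)

def slope(pairs):
    """Least-squares slope of y on x."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x, _ in pairs)
    return cov / var

obs = [observe() for _ in range(N)]
print(f"associational slope, level 1: {slope(obs):.2f}")   # ~1.5, inflated by Z

y1 = sum(intervene(1.0) for _ in range(N)) / N
y0 = sum(intervene(0.0) for _ in range(N)) / N
print(f"interventional effect, level 2: {y1 - y0:.2f}")    # ~0.5, the true effect
```

A pattern matcher trained only on observational (x, y) pairs would learn the 1.5 slope; recovering 0.5 requires a causal model of where the data came from – exactly the level-2 step in question.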

Until late 2025, I saw little concrete evidence to support such claims. I am still skeptical, but with higher uncertainty than before. The maturing of reasoning models and agentic frameworks provides some form of System 2 for LLMs. The question is how robust, scalable, and generalizable current System 2 training methods are. Are they makeshift patches on top of System 1, or can they scale to models capable of genuine universal System 2 reasoning? Can reinforcement learning lift LLMs into artificial general intelligence? There is no shortage of prominent AI researchers (Yann LeCun, Ilya Sutskever, Andrew Ng, Fei-Fei Li, Richard Sutton) skeptical of LLMs as a path to AGI, in contrast to the more hype-friendly attitudes of tech CEOs. If fundamental conceptual breakthroughs are still necessary for general artificial System 2 cognition, how soon will they arrive? Fundamental breakthroughs are fundamentally unpredictable.

One thing is certain though: we will get a country of AI agents in a data center. Will these AI agents be Nobel-calibre geniuses or glorified idiot savants? From the perspective of an average person’s job security, it might not matter much. Agentic AI savants in a data center could perform a large majority of economically valuable tasks just as well. Except for a minuscule number of roles, the modern economy doesn’t require extreme levels of intelligence. Superhuman System 1 augmented with an improvised okay-ish System 2 may be good enough.

Even ignoring any future progress, currently available artificial System 2 cognition is already not too bad. Top-tier reasoning models have started to show, at least occasionally, what I would call glimpses of genuine intellect: Erdős problems solved with AI assistance, progress on FrontierMath, Donald Knuth stumped by Claude Opus 4.6 solving a research problem. Earlier, in mid-2025, DeepMind and OpenAI achieved IMO gold; while impressive, this was less of an advance than it was hyped to be. There are of course areas of science where specialized AI systems already have a substantial impact, most famously protein folding research. I am concentrating on mathematics because it represents the pinnacle of pure abstraction and complex logical reasoning – a field where statistical pattern recognition alone seems least likely to yield significant advances.

That said, I wouldn’t read too much into the recent success stories. The solutions proposed by AI models often seem to rely on a superhuman ability to search, combine, and apply known results and reasoning techniques – not on the invention of new abstract concepts, which is central to creative problem solving. The generalization capacity of reasoning models may still be distribution-anchored rather than concept-creating. Large-scale reinforcement learning might just be surfacing reasoning patterns latent in the training data. So far, there is little compelling evidence that today’s language models can originate genuinely new ideas – let alone build coherent theories, devise novel workflows, or craft broadly useful conceptual frameworks. Whether future models can navigate truly unfamiliar territory, and, more importantly, whether they can invent useful novel abstractions, remains very much an open question. But how many jobs really require creating new abstractions?

Beyond the limits of abstraction, other significant shortcomings remain. Even the best models sometimes blunder, hallucinate, or display a surprising lack of deeper understanding. This unevenness, however, has been diminishing, to the point that it no longer poses insurmountable problems for practical applications.

In terms of real-world economic impact, much progress has happened over the past several months. The releases of Opus 4.5 at the end of November 2025 and GPT-5.2-Codex in mid-December 2025, combined with improvements in agent harnesses, have made AI coding assistants graduate from blundering, half-insane junior devs to huge productivity multipliers. They still need technically competent supervision, but this need will keep shrinking (though not nearly to zero if we get “only” AI savants).

When programming, I no longer need to write much code. AI does it for me. But it’s not just about coding. The underlying technology is general. Coding is only the first use case it has been adapted to and optimized for.

There will be economic disruption, soonish. Not necessarily massive unemployment, but definitely disruption. Economically valuable tasks will be automated. Depending on how AI progress plays out, there may still be plenty of other tasks left for humans to do. Even so, whole job categories will disappear or be transformed beyond recognition.