The chess program Stockfish can crush Norwegian grandmaster Magnus Carlsen, widely considered the greatest player in history. Yet it cannot replace him. Super-strong engines have reshaped elite play and preparation, and top humans now use a combination of engines, surprise, and psychology to keep classical chess alive. The result is not the end of chess but a redefinition of what human skill looks like when AI is in the loop.

Modern Stockfish plays at around a 3653 rating, nearly 800 points above Carlsen's peak. Yet this asymmetry did not kill elite competition; instead, it reshaped how professionals prepare. Everyone, from casual users on Chess.com to world champions, now treats engines as non-negotiable. Although banned during play, engines are central to pregame work, serving as always-on sparring partners, opening laboratories, and postmortem analysts.

The deepest impact has landed in opening theory. Classical lines like the Ruy Lopez, Italian Game, and Sicilian Najdorf were once judged mainly by human experience and taste. Today, anyone can feed these openings into the same software and get identical numerical verdicts.

Masters often play the first 10 to 20 moves from memory, following engine-checked sequences that both sides know lead to equality if handled precisely. Draw rates among top players have risen, and some events resemble formal demonstrations of mutual preparation rather than contests.

The 2018 World Championship between Carlsen and Fabiano Caruana crystallized those fears. Over 12 classical games and more than 50 hours, neither player scored a decisive result. Every game was drawn, and the title was decided only in rapid tiebreaks. For many, it seemed proof that classical chess, under perfect preparation, had reached "draw death."

Carlsen's reaction was to change the parameters. After another draining title defense in 2021, which included an eight-hour game and seven draws, he declined to defend his crown again, citing a lack of motivation. He did not abandon slow chess – he won Norway Chess in 2025 and still sits atop the rating list – but he shifted his focus to rapid and blitz, where shorter time controls raise the error rate and preparation counts for less.

He also became an entrepreneur and advocate for freestyle chess, a format that randomizes starting positions and renders memorized engine trees largely useless. Carlsen now holds titles in all three main formats and has one of the lowest draw rates at the top, partly because he plays to win in positions where a safe half-point would satisfy most opponents.

Younger grandmasters have taken a different approach: instead of minimizing AI's influence, they exploit its blind spots. Engines optimize moves, not opponents: they assume perfect play from both sides and ignore whether a decision is psychologically difficult for a fallible human. A new generation, raised on engines, deliberately chooses objectively "second-best" or slightly inferior lines when they are less familiar and harder to handle over the board.

One game at the 2024 Candidates Tournament highlighted this shift. Facing the Ruy Lopez, 18-year-old Indian grandmaster Rameshbabu Praggnanandhaa played a move engines have long flagged as flawed compared with standard options. Former world championship challenger Peter Leko, commenting live, said he had not seen that response in 25 years and described himself as "speechless." Praggnanandhaa's opponent, pushed "out of book," was forced to think for himself instead of relying on deep preparation, and Praggnanandhaa went on to win.

Peter Doggers, author of The Chess Revolution, sees this as intentional.

"Five years ago, players gave up on getting clear advantages out of the opening because it's just not really possible anymore," he told Bloomberg. "Now they're going for surprises. The computer says it's equality, but ... it's a strange move."

He points to another 2024 Candidates game in which Hikaru Nakamura steered into a theoretically weaker but much less studied branch of a mainstream opening. Nakamura only drew, but the game showed how valuable it can be to present "equal" positions that engines like but humans don't yet know.

This behavior also highlights a gap between specialized engines and general-purpose language models. Neural-network-enhanced engines like AlphaZero and modern Stockfish learn by playing millions of games against themselves, converging on moves that maximize win probability. They do not explain those choices in human terms.

Large language models, by contrast, are built to explain. They can narrate plans and dress moves in persuasive language, but in actual play, they perform poorly. Sometimes they even cheat. When asked to justify a sequence of moves, they often produce confident but incorrect or invented commentary because they are matching patterns in text, not running a deep search.
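The contrast can be made concrete with a toy. The sketch below (my own construction, with tic-tac-toe standing in for chess) shows what "running a deep search" actually involves: exhaustively walking the game tree and backing up exact values, rather than predicting plausible-sounding text.

```python
# Toy sketch of a game-tree search (negamax) on tic-tac-toe. Unlike a
# language model matching patterns in text, a search engine enumerates
# continuations and backs up exact values from the terminal positions.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Return (value, best_move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, sq in enumerate(board) if sq is None]
    if not moves:
        return 0, None  # board full: draw
    opponent = "O" if player == "X" else "X"
    best_value, best_move = -2, None
    for m in moves:
        board[m] = player
        value, _ = negamax(board, opponent)  # value from opponent's view
        board[m] = None
        if -value > best_value:
            best_value, best_move = -value, m
    return best_value, best_move

value, move = negamax([None] * 9, "X")
print(value)  # 0: with perfect play by both sides, tic-tac-toe is a draw
```

Chess engines layer pruning, heuristics, and learned evaluations on top of this skeleton, but the backbone is still search, which is why their verdicts arrive with no human-readable rationale attached.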

German grandmaster and trainer Jan Gustafsson calls uncritical reliance on engine suggestions "hitting spacebar," referencing the shortcut for forcing the engine's top move. It is a useful but risky habit. If your opponent follows the same line your file covers, you look strong, but as soon as they deviate, you can lose without understanding why.

Computer scientist Cal Newport has argued that we will eventually see chatbots as a kind of archaic AI, the Usenet phase before a web of domain-specific tools. What's happening in chess supports that view.