It’s interesting to observe the effects of AI on chess as a sport. You might expect that since AI defeated Garry Kasparov in 1997, and subsequently became far better than any human player, interest in chess would diminish. But interestingly, there has been a surge in interest over the past five years, with more and more people playing and watching the game (COVID and The Queen’s Gambit get some of the credit).
Watching experts analyze AI chess games can also be fascinating. It’s like a human trying to explain the behavior of a super-intelligent alien. The experts are sometimes baffled by a move the AI makes, only to see it pay off a dozen moves later.
What we see regarding cheating is also interesting. Any chess novice can now beat the human world champion if they can sneakily access a chess engine during the game. This means that in-person games are a more trustworthy measure of skill than online games – and even then, there have been cheating accusations (such as the famous but never confirmed “vibrating anal beads” accusation).
Some claim that at the very highest levels of the game, AI has made it less interesting for experienced observers – that grandmasters spend more of the opening playing moves that are not novel (due, in part, to AI’s use in training). Others have argued that AI has helped democratize skill development by making it easier to learn faster through immediate AI feedback.
Many aspects of the world will change dramatically with AI. Chess is a special example because it was impacted particularly early.
It happened so long ago that some people don’t want to call Deep Blue (the system that beat Kasparov) an “AI” since it works so differently than modern ones. But, by the reasonably standard definition of AI: “a computer system able to perform tasks that normally require human intelligence,” it was AI, just an early form of it. If we want to say that Deep Blue is just a glorified tree search algorithm, then we might have to say that ChatGPT is mainly just a bunch of linear algebra. For all we know, human thought could turn out to be decomposable into a sequence of simple algorithms (I, for one, hope this isn’t the case).
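For readers who haven’t seen what “tree search” actually means, here is a minimal, hypothetical sketch of minimax – the core idea behind engines in the Deep Blue lineage – over a toy game tree. This is not Deep Blue’s code; the real system added alpha-beta pruning, handcrafted evaluation functions, and custom hardware. The point is just how simple the underlying recipe is: look ahead, assume both sides play optimally, and pick the move with the best guaranteed outcome.

```python
def minimax(node, maximizing):
    """Return the best achievable score from `node`, assuming optimal play by both sides."""
    if isinstance(node, (int, float)):  # leaf: a position we can score directly
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A made-up 2-ply game tree: each inner list is a choice point,
# each number is the evaluation of a final position.
game_tree = [[3, 12], [2, 4], [14, 1]]

print(minimax(game_tree, maximizing=True))  # -> 3, the best outcome we can guarantee
```

Whether that recipe counts as “intelligence” is exactly the definitional question above – and the same reductive move can be made against any modern system.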
This piece was first written on January 5, 2025, and first appeared on my website on January 26, 2025.