Thanks for weighing in. I can’t agree it’s bullshit, because it is doing something radically different (something that isn’t possible unless you have the Google Cloud at your disposal).
Here’s a good summary.
" The DeepMind team achieved a remarkable success with the Alpha Zero project. It showed that it is possible to use a Monte Carlo method to reach an enormous playing strength after only a short training period — if you use the Google Cloud with 5000 TPUs for training, of course!
Unfortunately, the comparison with Stockfish is misleading. The Stockfish program ran on parallel hardware which, if one understands Tord Romstad correctly, is of only limited use to the program. It is not clear precisely how the hardware employed ought to be compared. The match was played without an opening book and without endgame tablebases, both of which are integral components of a program like Stockfish. The chosen time control is completely unusual in chess, particularly in computer chess; frankly, it is nonsense.
Of the 100 games of the match, DeepMind published only ten of AlphaZero's wins, unfortunately without information about search depths and evaluations."
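For anyone curious about the "Monte Carlo method" the summary alludes to: the core idea is Monte Carlo tree search (MCTS), which estimates move quality by running random playouts and steering later playouts toward promising branches. Here is a minimal toy sketch on a made-up two-move game (the reward table, move names, and constants are all hypothetical, and this deliberately omits the neural-network guidance that makes AlphaZero special):

```python
import math
import random

# Hypothetical toy game: pick two moves in a row, then look up a reward.
REWARDS = {("l", "l"): 0.2, ("l", "r"): 0.9, ("r", "l"): 0.4, ("r", "r"): 0.5}
MOVES = ["l", "r"]
DEPTH = 2  # the game lasts exactly two plies

class Node:
    def __init__(self, path=()):
        self.path = path       # moves taken to reach this node
        self.children = {}     # move -> Node
        self.visits = 0
        self.total = 0.0       # sum of rewards seen through this node

def rollout(path):
    """Finish the game with random moves and return the reward."""
    path = list(path)
    while len(path) < DEPTH:
        path.append(random.choice(MOVES))
    return REWARDS[tuple(path)]

def uct(parent, child, c=1.4):
    """Upper-confidence score: average reward plus an exploration bonus."""
    if child.visits == 0:
        return float("inf")  # always try unvisited children first
    return (child.total / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def search(root, iters=2000):
    for _ in range(iters):
        node, trail = root, [root]
        # Selection/expansion: walk down, preferring high UCT scores.
        while len(node.path) < DEPTH:
            for m in MOVES:
                node.children.setdefault(m, Node(node.path + (m,)))
            m = max(MOVES, key=lambda m: uct(node, node.children[m]))
            node = node.children[m]
            trail.append(node)
        reward = rollout(node.path)
        # Backpropagation: update statistics along the visited path.
        for n in trail:
            n.visits += 1
            n.total += reward

random.seed(0)
root = Node()
search(root)
# The most-visited first move is the engine's choice; "l" leads to the 0.9 line.
best = max(root.children, key=lambda m: root.children[m].visits)
print(best)
```

The key point, and why the 5000-TPU caveat matters, is that plain MCTS with random rollouts is weak at chess; AlphaZero's strength comes from replacing the random rollout with a trained network's evaluation, and training that network is what consumed the enormous hardware.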
I also noted somewhere that AlphaZero scores nearly all of its wins with White and very few with Black, whereas computers playing each other usually split their wins at roughly a 55-45 ratio, just like human players. That's pretty weird and maybe exploitable.
So it comes down to this: it's a big advance in AI, but it has no immediate or obvious benefits for chess players.
GMs still say that Komodo and Houdini are the programs of choice for real-world players.