
AlphaZero (Computer)
Number of games in database: 10
Years covered: 2017
Overall record: +10 -0 =0 (100.0%)*
   * Overall winning percentage = (wins+draws/2) / total games.
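The footnote's formula, as a quick Python sketch (the function name is mine):

```python
def winning_percentage(wins, draws, losses):
    """Overall winning percentage = (wins + draws/2) / total games."""
    total = wins + draws + losses
    return 100.0 * (wins + draws / 2) / total

# AlphaZero's record on this page: +10 -0 =0
print(winning_percentage(10, 0, 0))  # 100.0
```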

Repertoire Explorer
Most played openings
E17 Queen's Indian (3 games)
C11 French (2 games)
C65 Ruy Lopez, Berlin Defense (2 games)
E16 Queen's Indian (2 games)


AlphaZero is an application of the Google DeepMind AI project to chess and shogi. In experiments in late 2017, it quickly demonstrated itself superior to any engine technology that would otherwise be considered leading-edge.

(1) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm


Last updated: 2018-02-08 00:54:30

 page 1 of 1; 10 games
Game                         Result  Moves  Year  Event/Locale           Opening
 1. AlphaZero vs Stockfish   1-0        95  2017  AlphaZero - Stockfish  C11 French
 2. AlphaZero vs Stockfish   1-0        52  2017  AlphaZero - Stockfish  C11 French
 3. AlphaZero vs Stockfish   1-0        60  2017  AlphaZero - Stockfish  E15 Queen's Indian
 4. AlphaZero vs Stockfish   1-0        68  2017  AlphaZero - Stockfish  E16 Queen's Indian
 5. Stockfish vs AlphaZero   0-1        87  2017  AlphaZero - Stockfish  C65 Ruy Lopez, Berlin Defense
 6. AlphaZero vs Stockfish   1-0       100  2017  AlphaZero - Stockfish  E16 Queen's Indian
 7. Stockfish vs AlphaZero   0-1        67  2017  AlphaZero - Stockfish  C65 Ruy Lopez, Berlin Defense
 8. AlphaZero vs Stockfish   1-0        70  2017  AlphaZero - Stockfish  E17 Queen's Indian
 9. AlphaZero vs Stockfish   1-0       117  2017  AlphaZero - Stockfish  E17 Queen's Indian
10. AlphaZero vs Stockfish   1-0        56  2017  AlphaZero - Stockfish  E17 Queen's Indian

Kibitzer's Corner
< Earlier Kibitzing  · PAGE 10 OF 10 ·  Later Kibitzing>
Jun-15-18  WorstPlayerEver: Leela ID 411-WPC

Teasing Leela. A curiosity.

1. c4 e5 2. g3 Nf6 3. Bg2 Bd6 4. Nc3 Bb4 5. e4 Nc6 6. Nge2 Bc5 7. h3 d6 8. d3 a5 9. O-O O-O 10. Kh2 Bd7 11. f4 Nd4 12. Nxd4 Bxd4 13. f5 h6 14. h4 c6 15. g4 Nh7 16. Kg3 Kh8 17. Ne2 Bc5 18. Nc3 Rg8 19. Qe2 a4 20. Rb1 Bd4 21. Bd2 a3 22. b3 g6 23. Kh3 h5 24. g5 f6 25. Kh2 fxg5 26. hxg5 gxf5 27. exf5 Bxc3 28. Bxc3 Qxg5 29. Be1 Bxf5 30. Rf2 Qg3+ 31. Kh1 Bxd3 32. Qxh5 Bxb1 33. Re2 Qg4 34. Qxg4 Rxg4

Premium Chessgames Member
  AylerKupp: I haven't visited this page in a while and it seems that it has turned into a Leela Chess Zero (LCZero) page rather than an Alpha Zero page. Oh well, I suppose it was inevitable.

I've been meaning to post on an interesting development. In AlphaZero - Stockfish (2017) (kibitz #144) I said that "I also think that it would be relatively straightforward for any engine to replace their hand crafted evaluation function with a simulation game approach as used in AlphaZero," but I thought that the most likely candidate would be Stockfish since its code is public and there might be quite a few programmers interested in teaming up and making that modification. I also said that "It would be <VERY> interesting to see how a version of Stockfish with such a probabilistic approach to its evaluation function would do against the same version of Stockfish with its "classical" evaluation function and with both versions running on the same hardware configuration."

Well, it has come to pass, but with Komodo instead of Stockfish. Since early June Komodo 12 has been shipping with an option to use MCTS instead of alpha-beta pruning to determine a position's evaluation. And it didn't take all that much time: about 6 months between the publication of the initial AlphaZero paper in Dec-2017 and the release of Komodo 12 MCTS in June-2018. So I still think that it would be <VERY> interesting to see how Komodo 12 "classical" performs in a match with Komodo 12 MCTS when running on the same hardware. Judging by their rating lists, Komodo 12 MCTS does not appear to be playing in either the CCRL 40/40 or CEGT 40/20 engine tournaments.
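For readers unfamiliar with the term, the MCTS technique mentioned above can be sketched in a few dozen lines. This is a toy illustration only (UCB1 selection plus random playouts, applied to the game of Nim), NOT Komodo's or AlphaZero's actual implementation; real engines replace the random playouts with a learned or hand-tuned evaluation and differ in many details.

```python
import math
import random

# Toy MCTS for Nim: take 1-3 stones; whoever takes the last stone wins.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining at this position
        self.parent = parent
        self.move = move          # the move that led to this node
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins for the player who moved INTO this node

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in range(1, min(3, self.stones) + 1) if m not in tried]

def ucb1(child, parent_visits, c=1.4):
    # Exploit average win rate, explore rarely visited children.
    return child.wins / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def simulate(stones):
    """Random playout; returns 1 if the player to move from `stones` wins."""
    player = 0
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return 1 if player == 0 else 0
        player ^= 1

def best_move(stones, iters=3000):
    root = Node(stones)
    for _ in range(iters):
        node = root
        # 1. Selection: walk down fully expanded, non-terminal nodes.
        while node.stones > 0 and not node.untried_moves() and node.children:
            node = max(node.children, key=lambda c: ucb1(c, node.visits))
        # 2. Expansion: add one untried child, if any.
        moves = node.untried_moves()
        if node.stones > 0 and moves:
            m = random.choice(moves)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation, scored for the player who just moved into `node`.
        reward = 1 if node.stones == 0 else 1 - simulate(node.stones)
        # 4. Backpropagation, flipping the perspective at each level.
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1 - reward
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

print(best_move(9))  # from 9 stones, taking 1 (leaving a multiple of 4) is optimal
```

The same selection/expansion/simulation/backpropagation loop is what an MCTS chess engine runs; the engineering difference is almost entirely in step 3.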

Maybe I'll try to do that once I purchase Komodo 12.

Jul-10-18  nok: <Apr-19-18 nok: Open a page for Leela, cg.>

AlphaZero (Computer) (kibitz #178)

Being an open learning project anyone can contribute to, Leela is more important than Alpha. But cg's management is a bit slow on the draw.

Premium Chessgames Member
  AylerKupp: <<nok> Leela is more important than Alpha. >

I agree. Now that Google/DeepMind has gotten the publicity they wanted from AlphaZero, I doubt that you will hear from it again, although its technology might find its way into other products. Similar to what happened with Deep Blue in 1997, when IBM decided that it had achieved what it wanted publicity-wise and dismantled it.

Sep-13-18  WorstPlayerEver: Leela still tries to break my Caro-Kann defense.

14. O-O-O

[diagram]

Lately Leela began to have a more creative approach. She seems to 'get more into it.'

Not to overstate it: she's not much better than 4 months ago, but I have the feeling that at some point she might have a rapid increase in Elo.

Her approach is methodical. While Stockfish may sometimes seem to make weird moves, Leela instead blunders when her plan does not work out.

Usually there's a better move, which Stockfish will calculate, given a time limit, but usually Leela switches to another variation earlier in the game.

Which always leads to new danger for her opponent. Of course, usually a move SF does not prefer.

It's interesting to observe her blunders, though: they become relatively less decisive over time. A missed tactic here and there.

In other words: her approach is creepy. As the diagram shows: the position is complicated.

Leela gives herself advantage and so does Stockfish.

14. O-O-O Qd6
15. Rxd3 Be6
16. Qh6 Rg8
17. g7

[diagram]

All Leela wants to do is go for the kill. She's intelligent, extremely fit for the job, does not hesitate; plays with tempo.

17. g7 Be7
18. Re3 Na6
19. Rh8 O-O-O
20. Nce2 Qd7
21. Nf4 Bf7
22. Nge2 c5
23. c3

[diagram]

SF thinks it's -1, Leela equal.

23. c3 Bd6
24. Qxf6 Nc7
25. dxc5 Bxc5
26. Nd4 Ne8
27. Qh6

[diagram]

25. dxc5 was a blunder, 25. Qh6 was better, but White's position was difficult anyhow. Now Black will grab the pawn while e6 is covered.


The mysterious 1. e4 c6 2. d4 d5 3. Nc3 Nf6 Leela/WPE variation, listed as an 'unorthodox reply' in the Caro-Kann opening. The nerve! It's also unmentioned in NIC's Keybook.

A Soondarsingh vs K Al Khelaifi, 2014

I admit it; it's a hopeless opening, but it's no worse than the Grob. To name one. However, it's also undiscovered territory.

PS I did not give the opening moves. Because chess might get exciting. We don't want that. We want happy chess. With Max&Co.

Sep-14-18  WorstPlayerEver: Anyway, Leela has now sunk her teeth into the Leela0/WPE Caro-Kann; she can't get enough of it.

Especially the Space Invader Attack:

1. e4 c6
2. d4 d5
3. Nc3 Nf6
4. e5 Nfd7
5. e6

[diagram]

5... fxe6
6. Nf3 g6
7. h4 Bg7
8. h5

Another one:

[diagram]

8... e5
9. Ng5 Nf8
10. Qf3 Bf5
11. g4

[diagram]

Black player must be mad or drunk. Maybe insane is a better description.

11... Be6
12. hxg6

[diagram]

Mkay, seems Leela is some kind of gang leader, dunno.

12... e4 meet my little friend, Leela 13. Qxf8+, d'uh not that one! SF junior did not see that coming...

[diagram]

13... Bxf8 what else!? 14. Ne6+ not bad, I must say, not bad at all..

[diagram]

14... Qd7 15. g7 Bxg7 16. Nxg7+

[diagram]

16... Kf7
17. Nf5 e6
18. Ng3

[diagram]

(According to SF9, the last move is a disappointment. Leela outplayed SF9 in the opening, no doubt about it. 18. Nh6+ seems promising. 18... Kf7 19. g5 ironically Leela did not see this 19... Qd7 20. Be3). Leela thinks she's doing well:

18... Qe7
19. Bd2 Nd7
20. O-O-O Raf8
21. Ncxe4 dxe4
22. Nxe4 Rfg8
23. g5

[diagram]

A crazy position. SF a bit confused now, between 23... Ke8 and 23... a5

23... Ke8
24. Re1 Kd8
25. Bc4 b5
26. Bb3 Rg6
27. f4 h6
28. c4 bxc4
29. Bc2 and the position is unclear, but drawish

[diagram]

Sep-14-18  WorstPlayerEver: PS 18. Nh6+ Ke8 19. g5 Qe7 20. Be3 a5 21. O-O-O SF thinks it's winning for White. SF isn't scared of anything.

[diagram]

Of course, this game must be analyzed further, but it shows how well Leela can handle the first moves (in this case 17) by now. What's more, she keeps on improving. Who knows what's still in the can, when you have seen this game. How she goes all-in.. impressive.

This truly defensive variation, 1. e4 c6 2. d4 d5 3. Nc3 Nf6, took Leela between 25 and 50 games before reaching a breakthrough. Every game she tried something different. The outcome is this most stunning line. The Queen sac makes it extra spectacular. IMHO really chess food for thought, if you weren't convinced yet... about The Future...

Sep-14-18  WorstPlayerEver: PPS this was all posted live during the game. I was expecting the usual draw. Also, a little bit, because I expected something exciting to happen.

I only did not know when. Certainly not all of a sudden. NB 13. Qxf8+ is a long term positional sac, in a difficult position to evaluate.

The first 17 moves must've taken 15 seconds for Leela.

Something to wonder about, I'd say.

Scary.. huh?

Sep-14-18  WorstPlayerEver: This is an improvement on the 13-Sept game.

1. e4 c6
2. d4 d5
3. Nc3 Nf6
4. e5 Nfd7
5. e6 fxe6
6. Bd3 g6
7. h4 e5
8. h5 e4
9. hxg6 exd3
10. Qh5

[diagram]

Crazy Bishop sac from Leela, but unsound. She keeps on playing this variation, as if she thinks it has some interesting points. You never know what she's up to next. Great fun.

10... Bg7
11. gxh7+ Kf8
12. Bh6 Rxh7
13. Bxg7+ Rxg7
14. Qh8+ Rg8
15. Qh6+ Kf7
16. Nf3 Nf6
17. Ne5+ Ke8
18. O-O-O Qd6
19. Rxd3 Be6
20. Re3 Nbd7
21. Rhe1 Rxg2
22. Ne2 Rg8
23. Nf4 Bf5
24. Nh5 Ne4
25. Ng7+ Rxg7
26. Qxg7 Qf6
27. Qxf6 exf6
28. Nf3 Nf8
29. Nh4 Be6
30. f3 Ng5
31. c4 Kf7
32. c5 b6
33. b4 b5
34. Kb2 a5
35. Ra3 a4
36. Rae3 Nfh7
37. f4 Ne4
38. Rh1 Rg8
39. Rh2 Rg4
40. Rf3 Nf8
41. f5 Bd7
42. Rf1 Ng5
43. Kc3 Rg3+
44. Kc2 Ra3
45. Kb2 and White can't hold on to their b4-c5-d4 pawn chain. A real fight, though.

[diagram]

Sep-14-18  WorstPlayerEver: PS Playing the same variation again, but now Leela clearly evaluates her position as worse after 14. Qh8+

Interesting. So far Leela tried 5 different 5th moves after 1. e4 c6 2. d4 d5 3. Nc3 Nf6 4. e5 Nfd7


5. e6 fxe6 6. h4 g6 7. h5 Bg7 is the most defensive variation, which Leela has put down today with an early Queen sac.

However, 7... e5 is the next move I am going to try in this variation. Which is much tougher.

Sep-14-18  scholes: Results of Leela strength testing

Nearly as strong as sf

Sep-15-18  WorstPlayerEver: <scholes>

Thanks for the info!

1. e4 c6
2. d4 d5
3. Nc3 Nf6
4. e5 Nfd7
5. Nf3 e6
6. Ng5 Be7
7. Qh5 g6
8. Qh6 Bf8
9. Qh3 Be7
10. Qh6 Bf8
11. Qh3

[diagram]

This is a forced draw after 6 moves. I think I have proven that this variation is completely sound for Black! Leela sees no way to improve her position. However, she avoids repetition by playing 11... Be7; 12. f4, but that's a compromise for White.

5. e6, 5. f4, 5. h4, 5. Bd3, 5. Ne2, and now 5. Nf3. The latter is the natural move, I think.

Sep-15-18  WorstPlayerEver: ID 20799

1. e4 c6
2. d4 d5
3. Nc3 Nf6
4. e5 Nfd7
5. Nf3 e6
6. Ng5 Na6
7. Bd3 Be7
8. Qh5 g6
9. Qh6 Bf8
10. Qh3 Be7
11. Nxe6 Qb6
12. Ng7+ Kd8
13. e6

[diagram]

13... Qxd4
14. exd7 Bxd7
15. Qh6 Rg8
16. Bxa6 bxa6
17. O-O Qxg7
18. Qd2 Qf6
19. Re1 Bd6
20. Qd3 Re8
21. Be3 Kc7
22. Rad1 Qe5
23. f4 Qf5
24. Qxa6 Rxe3
25. Qa5+ Kb7
26. Rxe3 Qxf4
27. Nxd5 Qxh2+
28. Kf2 Qh4+
29. Kg1=

[diagram]

Again, an illustration of the soundness of this variation. 10... Be7 is not the best, but it shows Black's defense is completely safe; it holds the draw. Black can stand the wildest storms.

Premium Chessgames Member
  phenstyle: Anyone know where I can find a database of leela's serious games against other 3000+ engines? I want to be able to play through the moves quickly through a few hundred games or so.
Sep-16-18  WorstPlayerEver: <phenstyle>

On the page of the link <scholes> gave, there's an archive link above the first comment FYI

Premium Chessgames Member
  AylerKupp: <<scholes> Nearly as strong as sf>

No, not even close. It's the Alpha Zero / Stockfish 8 match story all over again. As I indicated in AlphaZero vs Stockfish, 2017 (kibitz #37), in the Alpha Zero vs. Stockfish "demonstration" (Deep Mind apparently never called it a "match") Alpha Zero ran on a system containing 4 custom-made and proprietary Tensor Processing Units (TPUs) with a total performance capacity of about 180 TFlops, while Stockfish 8 ran on (presumably) a Linux system composed of 16 i5 4-core processors with a total performance capacity of about 1.68 TFlops. So Alpha Zero had a > 100X processing-power advantage over Stockfish 8 and yet it only managed to win 28 out of 100 games! So I would say that the results of the Alpha Zero vs. Stockfish "demonstration" are "inconclusive", and I think that's being very charitable to Alpha Zero. Let's see how it does against Stockfish running on a computer system of performance comparable to its 4-TPU system before we draw any definitive conclusions.
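The arithmetic behind the >100X claim can be checked directly, using only the figures quoted in the post (nominal peak TFlops, not measured throughput):

```python
# Back-of-the-envelope check of the hardware disparity described above.
alphazero_tflops = 180.0   # 4 TPUs, as stated in the post
stockfish_tflops = 1.68    # the CPU system, as stated in the post

ratio = alphazero_tflops / stockfish_tflops
print(f"AlphaZero's nominal compute advantage: ~{ratio:.0f}x")

# Match score from the post: 28 wins, 0 losses, 72 draws out of 100 games.
score = 28 + 72 / 2
print(f"AlphaZero match score: {score}/100")
```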

The LC0 + 11 engines tournament indicated in your link had LC0 running on a GeForce 980M GPU rated at 3,189 GFlops, while the other engines ran on an Intel i7 6820 system using only 1 thread, with an average performance of 3.61 GFlops. And with an over 800X processor performance advantage it could only draw the 4 games that it played against Stockfish!

I think that LC0 has a long way to go before it can come even close to beating or outscoring even the lowest performing engines in this tournament if they both ran on hardware of comparable performance capability. But that's just my opinion; I have no data to substantiate it.

A much fairer comparison of hardware performance configurations is used in the TCEC Season 13 tournament. LC0 and Deus X (another neural-network-based engine) were running on a system composed of 2 GeForce 1080 Ti GPUs with a total performance capacity of 21.2 TFlops, and the non-neural-net engines were running on an Intel Xeon E5-2609 v4 system consisting of two 22-core processors with a total performance capacity of 23.9 TFlops. A much more equal and fairer configuration with which to make meaningful comparisons!

Unfortunately, because of the TCEC tournament setup, the NN engines and Stockfish were playing in different divisions and therefore they didn't face each other. Not only that, but there was a malfunction in the GEForce system so both LC0 and Deus X played with a diminished performance capability so, even if they had both played against Stockfish, no meaningful comparisons between them could have been made. With the current TCEC format we'll just have to wait until LC0, Deus X, and other NN-based engines make their way up the ladder to the Premier Division (4 years!).

Or maybe the TCEC organizers could be persuaded to conduct a "demonstration" between LC0 and Stockfish using the current system configurations, once the problems with the GEForce system are corrected. Perhaps if enough chess enthusiasts petition the TCEC organizers (or, probably more important, contribute to the TCEC) such a match can be organized in the not too distant future. They might even be convinced to stage such a yearly event to evaluate the relative performance of "classic" vs. NN-based engines. I'm sure that most of us would be interested in that.

Sep-17-18  scholes: They have already conducted many bonus games between SF Dev and lc0. Around 24 games have been conducted with latest nets. In those games lc0 scored +1 -2 =21 against SF. With lc0 playing on one or two v100 and SF playing on 43 cores

A recent match between lc0 playing on 2 v100 and SF dev playing on 92 cores led to 3 draws.

When you compare GFLOPS of a GPU with a CPU, it is one way of thinking. But I am sure that, you being a programmer, I don't have to explain to you why a GPU will inherently always be faster than a CPU.

If it were possible to make a NN as strong as Stockfish while both ran on a CPU, then it would absolutely blow away Stockfish while running on a GPU. But since DeepMind never made such a NN, it is likely that there won't be one.

Ultimately it is one new way of making chess engine. With most of excitement on what it could discover as new theory. is running a new chess tournament. Lc0 is also. With all games being played without book. In lc0-houdini game the two engines played 20 moves of theory of botvinnik semi Slav opening on their own.

I think best nets of lc0 are around 2900 elo level on cpu.

Sep-17-18  scholes: games here

Premium Chessgames Member
  AylerKupp: <<scholes> They have already conducted many bonus games between SF Dev and lc0.> (part 1 of 2)

<But I am sure that, you being a programmer, I don't have to explain to you why a GPU will inherently always be faster than a CPU.>

No you don't, and they are not inherently faster. It all depends on the application; if the application can be greatly parallelized then GPUs will most likely be much faster. But if it cannot (or at least not greatly), then GPUs will most likely be slower. For chess engine applications, where node evaluations can be highly parallelized and pipelined along with searching, the GPU will likely be faster. And for neural network applications, where the input propagation can be highly parallelized and the data propagation through the various NN layers can be highly pipelined, GPUs will be much faster than CPUs.

Which is exactly the point I was trying to make. If the object of the comparison is to determine the merits of a <software> chess engine, then attempting to compare two engines with different approaches when one of them is executing on much faster <hardware> is not very useful. It's like trying to determine the better of two runners when one of them is running uphill and the other is running downhill. Heck, I might be able to beat Usain Bolt if he were running up a sufficiently steep incline and I were running down an equally steep descent.

<When you compare GFLOPS of a GPU with a CPU, it is one way of thinking.>

Yes, and not a very good way, but I don't think that there is another alternative. Much better IMO is to compare them using synthetic benchmarks that simulate the application for which you are trying to determine which is a better fit, a CPU or a GPU. But the practical problem is that CPU benchmarks typically try to simulate the types of applications that are usually run on CPUs and GPU benchmarks typically try to simulate the types of applications that are usually run on GPUs. You are not likely to find benchmarks for rendering algorithms that run on CPUs since no one would attempt to do heavy rendering on a CPU.

So I think we're stuck with Flops, whether Giga- or Tera-, when attempting to compare architectures as dissimilar as CPUs and GPUs. And there are so many other computer architectural factors that come into play that the comparison is not likely to be very exact or accurate. But, when comparing flops, if one of the two computing units is 100X faster than the other, then I don't think that a valid comparison of <software> computational effectiveness can be made. As we used to say, if the hardware is fast enough, there are no software bottlenecks.

<A recent match between lc0 playing on 2 v100 and SF dev playing on 92 cores led to 3 draws.>

If SF was running on a system similar to the 44-core system used in TCEC Season 13 (about 24 TFlops) but with a little more than twice the number of cores, then that system had a theoretical performance of approximately 50 TFlops (see my post above). In comparison, an NVIDIA Tesla V100 has a theoretical performance between 14 TFlops (single-precision calculations) and 112 TFlops (deep learning applications). So, if LC0 ran on two of these, its theoretical performance capability was somewhere in the 28 to 224 TFlops range. A pretty wide range, but I suspect that the LC0 game-playing implementation was a closer fit to the architecture of deep learning applications than to applications using mostly single-precision calculations. So perhaps the system running the LC0 chess engine in that match had a computational capability in the range of 100-200 TFlops, between 2X and 4X more than the system running the SF application.

Given that the results of the match were 3 draws, does that imply comparable playing strength between LC0 and SF? If the two chess engines were of approximately equal playing strength then, with that computational advantage, should LC0 have won all the games or even just one of them? I don't know but given the 2X to 4X computational advantage that LC0 should have enjoyed over SF I wouldn't want to arrive at any definitive conclusions. Particularly since 3 games is hardly a valid statistical sample.

Premium Chessgames Member
  AylerKupp: <<scholes> They have already conducted many bonus games between SF Dev and lc0.> (part 2 of 2)

<I think best nets of lc0 are around 2900 elo level on cpu.>

The most recent (9-16-18) CCRL 40/40 list shows a Stockfish 9 rating of 3370 when running on a single core of a 2.4 GHz AMD system, and the most recent (7-07-18) CEGT 40/120 list shows a Stockfish 9 rating of 3338 when running on a single core of a 3.4 GHz Intel system. So, if your recollection is accurate, that would seem to indicate that when both LC0 and Stockfish are running on single-core computers, Stockfish 9 holds at least a 400-point rating advantage over LC0. Using the FIDE probability scoring tables, that would indicate that the most likely score of a 30-game match between Stockfish 9 and LC0 would be 27.5-2.5. So, making these assumptions, I would not consider the current LC0 to be anywhere near Stockfish 9 in playing strength <if the match were held today>. Of course, LC0 is likely to improve faster than Stockfish 9's successors, so if the match were held in the future the results could very well be different.
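That kind of match-score estimate can be approximated with the standard logistic Elo expectation formula. This is a generic sketch of the formula, not FIDE's exact lookup table:

```python
def expected_score(rating_diff):
    """Logistic Elo expectation (points per game) for the higher-rated player."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

games = 30
e = expected_score(400)  # a 400-point edge gives ~0.909 points per game
print(f"Expected 30-game score: {games * e:.1f} - {games * (1 - e):.1f}")
# A 400-point gap over 30 games lands close to the 27.5-2.5 estimate.
```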

<Ultimately it is one new way of making chess engine. With most of excitement on what it could discover as new theory.>

I agree, and it is an exciting new development. But in order to discover a new theory I don't think that LC0's learning approach is likely to be as successful as Alpha Zero's approach. Alpha Zero learns by playing games against itself and is therefore not susceptible to human biases. LC0 and NN-based chess engines prior to AlphaZero learned by either playing games against humans or being input previously played master games between humans. So I think it will be more difficult for LC0 to discover new chess theories than it will be for AlphaZero, although that does not mean that it will prevent LC0 from becoming a very strong engine.

The new tournament you mention seems interesting. Do you have a link to it? It wasn't obvious to me after looking at the site.

Sep-18-18  scholes: tournament is here.

Currently stage 2 has started. Game that I mentioned was in stage 1. Stage 1 was bookless league of 24 engines. Top 8 are playing in stage 2.

Sep-29-18  WorstPlayerEver: WPC-Leela ID 21226

1. d4 d5
2. c4 e6
3. Nc3 Nf6
4. Nf3 dxc4
5. e4 Bb4
6. Bg5 b5
7. a4 c6
8. axb5 cxb5
9. e5 h6
10. Bh4 g5
11. Nxg5 hxg5
12. Bxg5 Nbd7
13. Qf3 Rb8
14. exf6 Qb6
15. Qe3 Bb7
16. h4 a5
17. f3 Kd8
18. Be2 a4
19. Kf1 Ra8
20. Ne4 Bxe4
21. fxe4 Kc7
22. e5 Ra5

[diagram]

It shows how unpredictable Leela is; in this case she got quickly overwhelmed by White.

Oct-03-18  scholes: Leela playing on 4v100 defeated Komodo playing on 46 cores in match for 3rd place playoff. Leela won 16-14.

In the current tournament the top 6 engines are playing an odds tournament. In the first 30 games the engines give f2-pawn odds to each other; the Black player starts with an extra pawn.

Leela defeated stockfish dev while giving pawn odds.

[Event "CCCC 1: Rapid Rumble (15|5) Bonus Games"]
[Site ""]
[Date "2018.10.03"]
[Round "?"]
[White "Lc0 17.11089"]
[Black "Stockfish 220818"]
[Result "1-0"]
[BlackElo "2400"]
[ECO "C16"]
[Opening "French"]
[Time "09:33:10"]
[Variation "Winawer, Advance Variation, 1.e4 e6 2.d4 d5 3.Nc3 Bb4 4.e5"]
[WhiteElo "2400"]
[TimeControl "900+5"]
[Termination "normal"]
[PlyCount "235"]
[WhiteType "human"]
[BlackType "human"]

1. f4 Nh6 2. f5 Nxf5 3. Nf3 Nh6 4. Ng1 Ng8 5. e4 e6 6. Nc3 d5 7. d4 Bb4 8. e5 Qh4+ 9. g3 Qd8 10. a3 Bf8 11. Be3 Ne7 12. Nf3 Nf5 13. Bg5 Be7 14. Bxe7 Nxe7 15. Bd3 h6 16. b4 b6 17. O-O O-O 18. Ne2 c6 19. a4 c5 20. c3 Nbc6 21. Qd2 Bd7 22. Rae1 Rc8 23. b5 Na5 24. Nf4 cxd4 25. cxd4 Qc7 26. Rc1 Qd8 27. Rxc8 Qxc8 28. Nh5 Nc4 29. Qf4 Ng6 30. Nf6+ Kh8 31. Qc1 Na5 32. Qd2 Nc4 33. Qc1 Na5 34. Bxg6 Qxc1 35. Rxc1 gxf6 36. Bh5 Nc4 37. exf6 Kg8 38. Ne5 Nxe5 39. dxe5 Rc8 40. Rxc8+ Bxc8 41. Be2 d4 42. Bf3 h5 43. Bxh5 Bb7 44. Kf2 Be4 45. Ke1 d3 46. Kd2 Kf8 47. Ke3 Bh7 48. Bd1 Ke8 49. g4 Kd8 50. Kd2 Ke8 51. Ke3 Kf8 52. Kf3 Ke8 53. Kf2 Kd7 54. Ke3 Kc7 55. h4 Kc8 56. h5 Kd7 57. Kf3 Ke8 58. Kf2 Kf8 59. Ke3 Kg8 60. Kf4 Kh8 61. Kf3 Bg8 62. Ke4 Bh7+ 63. Kf4 Kg8 64. Ke3 Kf8 65. Kf2 Be4 66. Ke3 Bh7 67. Bb3 Kg8 68. Bc4 d2 69. Kxd2 Be4 70. Be2 Bb7 71. g5 Be4 72. Bd1 Kh8 73. Kc3 Kg8 74. Be2 Bf5 75. Kd2 Bb1 76. Bd1 Kf8 77. Kc3 Ke8 78. Kb2 Bh7 79. Kc3 Be4 80. Kd2 Bf5 81. Ke3 Bb1 82. Kf3 Kf8 83. Ke3 Ke8 84. Kd4 Kf8 85. Kc3 Bh7 86. Be2 Kg8 87. Bg4 Be4 88. Bd1 Kh8 89. Kd2 Bh7 90. Bg4 Kg8 91. Bd1 Kf8 92. Bb3 Ke8 93. Bc4 Kd7 94. Be2 Ke8 95. Kc3 Kf8 96. Bd1 Bb1 3-fold repetition 97. Bg4 Be4 98. Kd2 Kg8 99. g6 fxg6 100. Bxe6+ Kf8 101. h6 g5 102. Ke3 Bg6 103. Bd5 g4 104. Kf4 g3 105. Kxg3 Ke8 106. e6 Kd8 107. Kf4 a6 108. bxa6 Be8 109. a7 Kc7 110. a8=Q Kd6 111. h7 Kc5 112. h8=R Bd7 113. Ke5 Kb4 114. Rh3 Bxa4 115. Kd4 Bd1 116. Bc6 Bb3 117.f7 b5 118. f8=Q# 1-0
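The header above claims PlyCount 235 (118 White moves plus 117 Black moves). A ply count can be sanity-checked with a rough stdlib-only sketch; the regexes are my own approximation of SAN, not a full PGN parser, and they would not handle stray annotations like the bare "3-fold repetition" note in the game above:

```python
import re

def count_plies(movetext):
    """Count half-moves in a PGN movetext, skipping move numbers and results."""
    plies = 0
    for token in movetext.split():
        if re.fullmatch(r"\d+\.(\.\.)?", token):     # move numbers: "23." / "23..."
            continue
        if token in ("1-0", "0-1", "1/2-1/2", "*"):  # game results
            continue
        if re.match(r"[a-hKQRNBO]", token):          # looks like a SAN move
            plies += 1
    return plies

print(count_plies("1. f4 Nh6 2. f5 Nxf5 3. Nf3 Nh6 1-0"))  # 6
```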

Oct-03-18  nok: <Leela playing on 4v100 defeated Komodo playing on 46 cores in match for 3rd place playoff. Leela won 16-14.>

Unless they're graphical hybrids themselves, 46 cores should yield about 1 TFlops single-precision. A V100 is already 14 TF.

Premium Chessgames Member
  louispaulsen88888888: An interesting comment:

Hikaru Nakamura: "I think the research is certainly very interesting; the concept of trying to learn from the start without any prior knowledge so certainly it's a new approach and it worked quite well obviously with go. It's definitely interesting. That being said, having looked at the games and understand[ing] what the playing strength was I don't necessarily put a lot of credibility in the results simply because my understanding is that AlphaZero is basically using the Google super computer and Stockfish doesn't run on that hardware; Stockfish was basically running on what would be my laptop. If you wanna have a match that's comparable you have to have Stockfish running on a super computer as well."


Copyright 2001-2018, Chessgames Services LLC