chessgames.com

🏆
MATCH STANDINGS
AlphaZero - Stockfish Match

AlphaZero (Computer)   10/10   (+10 -0 =0)
Stockfish (Computer)    0/10   (+0 -10 =0)

Chessgames.com Chess Event Description
AlphaZero - Stockfish (2017)

On December 4th, 2017, Google's DeepMind team in London applied their AI research to the game of chess. The event was more an experiment than a chess exhibition, and the results are groundbreaking in both the fields of computing and chess.

Rather than relying on the classic alpha-beta search algorithm common to conventional chess software, AlphaZero uses a deep neural network and is trained solely by reinforcement learning from games of self-play. It examines only about 80,000 positions per second, compared to Stockfish's 70 million. AlphaZero played a 100-game match against Stockfish, winning 28 games and drawing the remaining 72.(1) A subset of the match, 10 games that AlphaZero won, was released to the public.

(1) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm - https://arxiv.org/pdf/1712.01815.pdf
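The speed disparity described above can be made concrete with a little arithmetic. This is a sketch under an assumed time control; the per-move time budget is not stated in the text, only the positions-per-second figures are:

```python
# Putting the quoted search speeds side by side. The one-minute-per-move
# budget is an assumption for illustration; only the positions-per-second
# figures come from the description above.
alphazero_nps = 80_000        # positions per second (from the text)
stockfish_nps = 70_000_000    # positions per second (from the text)
seconds_per_move = 60         # assumed time budget per move

az_per_move = alphazero_nps * seconds_per_move
sf_per_move = stockfish_nps * seconds_per_move
ratio = stockfish_nps // alphazero_nps

print(f"AlphaZero: ~{az_per_move:,} positions per move")
print(f"Stockfish: ~{sf_per_move:,} positions per move")
print(f"Stockfish searches {ratio}x as many positions")
# Stockfish searches 875x as many positions
```

Despite searching 875 times fewer positions, AlphaZero won the match, which is what makes the result so striking: its neural-network evaluation compensates for the smaller search.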

Game                         Result  Moves  Year  Event                  Opening
 1. Stockfish vs AlphaZero   0-1      67    2017  AlphaZero - Stockfish  C65 Ruy Lopez, Berlin Defense
 2. AlphaZero vs Stockfish   1-0      52    2017  AlphaZero - Stockfish  C11 French
 3. AlphaZero vs Stockfish   1-0      68    2017  AlphaZero - Stockfish  E16 Queen's Indian
 4. AlphaZero vs Stockfish   1-0     100    2017  AlphaZero - Stockfish  E16 Queen's Indian
 5. AlphaZero vs Stockfish   1-0      70    2017  AlphaZero - Stockfish  E17 Queen's Indian
 6. AlphaZero vs Stockfish   1-0     117    2017  AlphaZero - Stockfish  E17 Queen's Indian
 7. AlphaZero vs Stockfish   1-0      95    2017  AlphaZero - Stockfish  C11 French
 8. AlphaZero vs Stockfish   1-0      60    2017  AlphaZero - Stockfish  E15 Queen's Indian
 9. Stockfish vs AlphaZero   0-1      87    2017  AlphaZero - Stockfish  C65 Ruy Lopez, Berlin Defense
10. AlphaZero vs Stockfish   1-0      56    2017  AlphaZero - Stockfish  E17 Queen's Indian



Kibitzer's Corner
< Earlier Kibitzing  · PAGE 12 OF 12 ·  Later Kibitzing>
Dec-26-17
Premium Chessgames Member
  ChessHigherCat: <OhioChessFan: Looking at your picture, I can't tell if you think <AK> is prejudiced against your black or your yellow skin.>

Most people find all my skin colors a'peeling.

Dec-26-17
Premium Chessgames Member
  WorstPlayerEver: Yeah, I wrote this song back in 1952 with CSIRAC

https://m.youtube.com/watch?v=FH2Eg...

https://en.m.wikipedia.org/wiki/CSI...

Dec-27-17  LameJokes55: <AylerKupp: Given that AlphaZero uses specialized hardware (Tensor Processing Units or TPUs), and a proprietary one at that, I doubt that there will be PCs available with these anytime in the near future.>

1. Fair Match: If that's the case, then how do we create a level playing field for both opponents (AZ and SF)? By loading the original (not truncated) version of SF on a supercomputer. Then let AZ evolve with months of self-teaching. Both giants then square up for a 100-game rematch. A repeat performance by AZ in the second match would lend legitimacy to the current result. Of course, this is wishful thinking on my part. There is no guarantee of a rematch in the near future.

2. Applications of Self-teaching: Google does not plan to enter the chess or Go market. They have conducted their media communication in a hush-hush manner. The mystery and suspense could give Harlan Coben a run for his money!

Where could they try this (self-teaching) program, next?

Self-teaching is applied to human sports, where the rules are few and the evolution manageable. I don't think it could be applied to something like language or writing, which are too indefinite.

As one kibitzer pointed out, music might represent such an opportunity. The human ear hears sound frequencies in the range of 20 to 20,000 Hz. In addition, there are just 12 musical notes. Google's program could use this information to work out various permutations and combinations, thus creating a number of musical compositions.

How melodious that ends up sounding to the human ear is another matter altogether!

Dec-27-17
Premium Chessgames Member
  ChessHigherCat: <LameJokes55: Where could they try this (self-teaching) program, next?

I don't think it could be applied to something like language or writing, which are too indefinite.

As one kibitzer pointed out, music might represent such an opportunity. Google's program could use this information to work out various permutations and combinations, thus creating a number of musical compositions.

How melodious that ends up sounding to the human ear is another matter altogether!>

Right, that's why human coaching would be necessary for anything beyond binary decisions (like winning move vs. losing move). Computers can generate beautiful fractal images based on mathematical formulas, but that's a "no-brainer", pure calculation. There seems to be an insurmountable barrier between quantitative thought and qualitative thought requiring judgement and imagination, but who knows: maybe if the computing power of a "neural network" reaches a certain level, computers will really be able to think like humans (for what it's worth :-)

All the science fiction writers thought there would be "thinking androids" by this time. Maybe that's completely unrealistic, though; just as no computer program will probably ever be able to predict cloud formation, there are just too many variables. Some people were so optimistic that they thought it would be possible to hear original performances by Mozart by retracing the vibrations of the air molecules produced by the instruments back to the 18th century :-) Quantum theory has pretty much destroyed that kind of absolute determinism.

Dec-27-17  markz: <LameJokes55:
1. Fair Match: If that's the case, then how do we create a level playing field for both opponents (AZ and SF)? By loading the original (not truncated) version of SF on a supercomputer. Then let AZ evolve with months of self-teaching. Both giants then square up for a 100-game rematch. A repeat performance by AZ in the second match would lend legitimacy to the current result. Of course, this is wishful thinking on my part. There is no guarantee of a rematch in the near future.>

Even if I think SF on a supercomputer can beat AZ, it still isn't a fair match, because AZ has home-court advantage on the supercomputer and SF has home-court advantage on the PC. A really fair match should be something like 100 games on the supercomputer and 100 games on the PC.

I am very sure SF can beat AZ easily in any really fair match. Also, I believe most people would rather use chess engines on their PC than on a supercomputer. Developing and running chess engines on a supercomputer isn't very useful.

Increasing the training time may not improve the neural network. As shown in Figure 1 of the paper, after 300K training steps further training may not be very helpful, and is sometimes (quite often, in fact) even harmful.

Dec-28-17
Premium Chessgames Member
  refutor: well at least it beat the French twice, makes me happy
Dec-28-17  50movesaheadofyou: What will AZ tell us in the future if it learns and masters other disciplines? Human sexuality, the true story of Jesus, its opinion on abortion, the origins of the universe, etc. When asked about religion, what if it insults Christianity or other faiths? Many authorities will want it banned, even destroyed. It could be a dangerous device. I say continue the research; let it show us its maximum potential. They said the same of the internet, that it could destroy the world. It changed the world, for sure, and it occupies a dominant place in modern society, but it hasn't destroyed it. Maybe in the future some highly advanced self-taught version of it will, though.
Dec-28-17
Premium Chessgames Member
  ChessHigherCat: It's dubious whether anything self-taught based on self-consistent principles could ever become controversial because it can never encounter any outside influences, but if it contains contradictory principles to begin with then it could become "heretical". For example in medieval Scholasticism, the professors at the University of Paris tried to reconcile Aristotle with the Gospels and the brightest professors who picked up on all the contradictions, like Pierre Abelard, were criticized as heretical for "trying to look God in the face".
Jan-10-18
Premium Chessgames Member
  zborris8: I've read comments that members aren't able to replicate the mistakes by Stockfish on their own systems.
Jan-10-18
Premium Chessgames Member
  WorstPlayerEver: A theory of conditions. Let's elaborate.

There could be an application that plays perfect chess. Theoretically.

Let's imagine there is such a program. There are 20 first moves, so there are 20 best variations. Without question.

Another theory: since we do not have such a program, there could be one at some point, but how would we know it found the best moves?

It could be that at some point the best variations are played and the engine will recognize them as such. Which in practice means it eventually 'stumbles upon' the best variations.

However, it could also be the case that a program eventually gets an update and will predict the best variation from the starting position and recognize it as such.

Jan-11-18  coxscorner: I wonder why they didn't release any of the draws? Surely some of those were interesting.
Jan-11-18
Premium Chessgames Member
  AylerKupp: <<zborris8> I've read comments that members aren't able to replicate the mistakes by Stockfish on their own systems.>

That wouldn't surprise me. Multi-core chess engines are notoriously non-deterministic. If you run successive analyses of the same position using the same chess engine on the same computer system to the same search depth, you will get different results. Not MAY, WILL. Guaranteed. The reason seems to be that since practically every engine (except AlphaZero) uses alpha-beta pruning to increase its efficiency in traversing the search tree, and since the efficiency of alpha-beta pruning is highly dependent on move ordering, different move orders will give different results. The different move orderings are in turn the consequence of the various chess engine threads or processes being interrupted in a non-deterministic way by various operating system processes.
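The ordering effect described above is easy to demonstrate. The sketch below runs a plain alpha-beta search over a small hypothetical game tree (nothing here is from Stockfish; the tree and leaf values are made up): the returned evaluation is identical either way, but the number of positions visited depends on whether the best move is tried first or last, which is why thread-timing differences that perturb move order change what a time-limited engine gets done.

```python
# Toy alpha-beta search: same tree, same score, different node counts
# depending on move ordering. Tree and values are hypothetical.

def alphabeta(tree, alpha, beta, maximizing, counter):
    """Plain alpha-beta over a nested-list tree; leaves are evaluations."""
    counter[0] += 1                      # count every node visited
    if isinstance(tree, (int, float)):   # leaf node: static evaluation
        return tree
    if maximizing:
        value = float("-inf")
        for child in tree:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:            # cutoff: prune the remaining moves
                break
        return value
    value = float("inf")
    for child in tree:
        value = min(value, alphabeta(child, alpha, beta, True, counter))
        beta = min(beta, value)
        if alpha >= beta:                # cutoff: prune the remaining moves
            break
    return value

# The same position with the best move searched first vs. last.
good_order = [[6, 5], [4, 3], [2, 1]]   # best reply tried first
bad_order = [[2, 1], [4, 3], [6, 5]]    # best reply tried last

for name, tree in (("good order", good_order), ("bad order", bad_order)):
    counter = [0]
    score = alphabeta(tree, float("-inf"), float("inf"), True, counter)
    print(f"{name}: score={score}, nodes visited={counter[0]}")
# good order: score=5, nodes visited=8
# bad order: score=5, nodes visited=10
```

On a real engine searching millions of nodes, that per-node difference compounds, so two runs of the "same" search rarely reach the same depth in the same time.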

So goes the theory. I tested it almost a year ago by analyzing the following position from W So vs Onischuk, 2017 after 13...Bxd4:


[Diagram: position after 13...Bxd4]

You can see the results starting at US Championship (2017). In summary, I ran several analyses of the same position with Houdini 4, Komodo 10, and Stockfish using 1, 2, 3, and 4 threads. The results were consistent: using only 1 thread, all 3 engines were deterministic, giving the same result per engine for each of the analyses. But with 2, 3, or 4 threads the results were non-deterministic; each analysis resulted in different evaluations and, on occasion, different move rankings.

Granted, it was only one position in one game, with a limited number (3) of analyses by each engine using 1, 2, 3, and 4 threads. But these types of results have been found in many other team games here at <chessgames.com> with different positions and different engines.

So, if it's not possible to get the same results using 2, 3, or 4 threads, imagine the small probability of getting the same results using 64 threads, as Stockfish did in the AlphaZero exhibition.

Jan-11-18  todicav23: I think it is pointless to keep discussing this "private" match. A fair match needs to take place in order to conclude that AlphaZero is clearly stronger than Stockfish. Both engines have to run on equivalent hardware.

It is true that AlphaZero played some amazing games. But if you look at TCEC you can also find many amazing games.

Jan-11-18
Premium Chessgames Member
  AylerKupp: <<WorstPlayerEver> Another theory; since we do not have such program, it could be there is a program, at some point, but how do we know it found the best moves?>

Aaaah, there is the crux of the problem, even if we could agree on a definition of what constitutes the "best move". That's why I laugh at attempts to determine who the best chess players were by comparing the moves they made in their games against the "best move" that <a> chess engine suggests. Given that different top engines give different evaluations and move rankings, and that any multi-core engine will give different evaluations and move rankings of the same position when it conducts analyses at different times, how does anyone know what the "best move" really is?

Jan-11-18
Premium Chessgames Member
  AylerKupp: <<coxscorner> I wonder why they didn't release any of the draws? Surely some of those were interesting.>

You have to remember what Google's motivation for the exhibition was: to gain publicity for the chess-playing approach used by AlphaZero. So clearly they were most interested in showing AlphaZero in the best possible light. It's not that different from a grandmaster writing a book about his best games; you are not likely to find many, if any, of his losses there. And, for that matter, only 10 of AlphaZero's games were released, even though AlphaZero had 28 wins. Presumably the other 18 wins did not show AlphaZero at its best. So, given that more than half of AlphaZero's wins did not show it to its best advantage, it's probably safe to assume that the games in which it failed to win did not do so either.

Jan-11-18
Premium Chessgames Member
  ChessHigherCat: < todicav23: I think it is pointless to keep discussing about this "private" match. A fair match needs to take place in order to conclude that AlphaZero is clearly stronger than Stockfish. Both engines have to run on equivalent hardware.>

It's all a "faux débat" (a phony debate). The games were really intended to show the advantages of "tabula rasa" reinforcement learning in lots of different games and other areas, and chess was just one of them:

<The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, ***the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains***. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case> https://arxiv.org/pdf/1712.01815.pdf

If anybody's interested in doing benchmark testing on their own terms, it should be simple enough; there's no use bitching about how the testing was done for this research paper. But if you think enabling the tablebases and using more powerful software will greatly improve SF's performance, you also have to consider that a commercial version of AlphaZero would be based on tens of thousands of hours of self-teaching, not just 24!

Jan-11-18  nok: <clearly they were most interested in showing AlphaZero in the best possible light. It's not that different than a grandmaster writing a book about his best games>

Still, concealing the information is hardly scientific.

Jan-11-18
Premium Chessgames Member
  AylerKupp: <<nok> Still, concealing the information is hardly scientific.>

True, but this was never intended by Google to be a scientific test. And, to be fair, Google never claimed that it was intended to be such. Had it been, then the testing conditions would have been much more rigorously defined and all the results published. As it was, it left many questions unanswered. Which is unfortunate, but that's the way it is.

Jan-11-18
Premium Chessgames Member
  john barleycorn: <nok: ...

Still, concealing the information is hardly scientific.>

"Concealing information" has nothing to do with science but with the ego of the scientists or their sponsors.

Jan-11-18
Premium Chessgames Member
  WorstPlayerEver: I would not say it's not a scientific project; although it's kind of limited, any contribution is welcome :)
Jan-11-18
Premium Chessgames Member
  zborris8: <AylerKupp: <Here is a summary of the move/line evaluations, in descending order of average evaluation, for each of the 3 analyses:

White's Move     A1        A2        A3       <Avg>      <StdDev>
------------   -------   -------   -------   ---------   --------
14.f4          [+0.38]   [+0.33]   [+0.34]   <[+0.35]>   <0.022>
14.d3          [+0.22]   [+0.42]   [+0.37]   <[+0.34]>   <0.085>
14.Qd5         [+0.17]   [+0.41]   [+0.23]   <[+0.27]>   <0.102>
14.Qa6         [+0.24]   [+0.31]   [+0.22]   <[+0.26]>   <0.039>
14.Qc6         [+0.25]   [+0.22]   [+0.16]   <[+0.21]>   <0.037>>

<So, if it's not possible to get the same results using 2, 3, or 4 threads, imagine the small probability of getting the same results using 64 threads, as Stockfish did in the AlphaZero exhibition.>>

Interesting. Thanks for the clear explanation!

Jan-29-18  dannygjk: SF: 70,000,000 nps; A0: 80,000 nps. BTW, A0 did not play its games in the match on the same hardware it trained on.
Mar-07-18  Pirandellus: What has become of AlphaZero? Any news?
Mar-07-18
Premium Chessgames Member
  ChessHigherCat: <Pirandellus: What has become of AlphaZero? Any news?>

No, it has gone underground as secret agent Zero-Zero-Alpha.

Mar-18-18
Premium Chessgames Member
  WorstPlayerEver: P.S. Although the games do make a somewhat contrived impression on me.



