AlphaZero - Stockfish (2017) |
On December 4th, 2017, Google's DeepMind team in London applied its AI research to the game of chess. The event was more of an experiment than a chess exhibition, and the results are groundbreaking in the fields of both computing and chess. Rather than relying on the classic "alpha-beta algorithm" common to conventional chess software, AlphaZero uses a deep neural network and is trained solely by reinforcement learning from games of self-play. It searches only 80,000 positions per second, compared to Stockfish's 70 million. AlphaZero played Stockfish 100 games, winning 28 and drawing the rest.(1) A subset of the match, 10 games that AlphaZero won, was released to the public.

(1) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, https://arxiv.org/pdf/1712.01815.pdf
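To make the contrast concrete, here is a minimal sketch of the alpha-beta pruning that conventional engines rely on. It is an illustrative toy (the tree, the scores, and the two-ply structure are invented for the example), not Stockfish's actual implementation:

    def alpha_beta(node, alpha, beta, maximizing):
        """Return the minimax value of a nested-list game tree,
        skipping branches that cannot change the final decision."""
        if not isinstance(node, list):       # leaf: its own static score
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alpha_beta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:            # beta cutoff: the opponent avoids this line
                    break
            return value
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:                # alpha cutoff
                break
        return value

    # A two-ply tree as nested lists of leaf scores. The last leaf (2) is
    # never examined: once the third reply is seen to be worth at most 1,
    # that line cannot beat the 6 the first player has already guaranteed.
    tree = [[3, 5], [6, 9], [1, 2]]
    print(alpha_beta(tree, float("-inf"), float("inf"), True))   # -> 6

AlphaZero replaces this kind of exhaustive pruned search with a neural network that proposes promising moves and evaluates positions, which is how it can afford to examine roughly a thousandth as many positions per second.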
Dec-27-17 | | LameJokes55: <AylerKupp: Given that AlphaZero uses specialized hardware (Tensor Processing Units or TPUs), and a proprietary one at that, I doubt that there will be PCs available with these anytime in the near future.>

1. Fair Match: If that's the case, then how do we create a level playing field for both opponents (AZ and SF)? By loading the original (and not truncated) version of SF onto a supercomputer. Then let AZ evolve itself with months of self-teaching. Both giants then square up for a 100-game rematch. A repeat performance by AZ in the second match would lend legitimacy to the current result. Of course, this is wishful thinking on my part. There is no guarantee of a rematch in the near future.

2. Application of Self-teaching: Google does not plan to enter the chess or Go market. They have conducted their media communication in a hush-hush manner. For mystery and suspense, they could give Harlan Coben a run for his money! Where could they try this self-teaching program next? Self-teaching suits games whose rules are few and whose evolution is manageable. I don't think it could be applied to something like language or writing, which are too open-ended. As one kibitzer pointed out, music might represent such an opportunity. The human ear hears sound frequencies in the range of 20 to 20,000 Hz, and there are just 12 musical notes. Google's program could use this information to work out various permutations and combinations, creating any number of musical compositions. How melodious they end up sounding to the human ear is another matter altogether! |
|
Dec-27-17 | | ChessHigherCat: <LameJokes55: Where could they try this self-teaching program next? I don't think it could be applied to something like language or writing, which are too open-ended. As one kibitzer pointed out, music might represent such an opportunity. Google's program could use this information to work out various permutations and combinations, creating any number of musical compositions. How melodious they end up sounding to the human ear is another matter altogether!>

Right, that's why human coaching would be necessary for anything beyond binary decisions (like winning move vs. losing move). Computers can generate beautiful fractal images based on mathematical formulas, but that's a "no-brainer": pure calculation. There seems to be an insurmountable barrier between quantitative thought and the kind of qualitative thought that requires judgement and imagination. But who knows: maybe if the computing power of a "neural network" reaches a certain level, computers will really be able to think like humans (for what that's worth :-) All the science fiction writers thought there would be "thinking androids" by this time. Maybe that's completely unrealistic, though, just as no computer program will probably ever be able to predict cloud formation; there are just too many variables. Some people were so optimistic that they thought it would be possible to hear original performances by Mozart by retracing the vibrations of the air molecules produced by the instruments back to the 18th century :-) Quantum theory has pretty much destroyed that kind of absolute determinism. |
|
Dec-27-17 | | markz: <LameJokes55: 1. Fair Match: If that's the case, then how do we create a level playing field for both opponents (AZ and SF)? By loading the original (and not truncated) version of SF onto a supercomputer. Then let AZ evolve itself with months of self-teaching. Both giants then square up for a 100-game rematch. A repeat performance by AZ in the second match would lend legitimacy to the current result. Of course, this is wishful thinking on my part. There is no guarantee of a rematch in the near future.>

Even though I think SF on a supercomputer can beat AZ, it still wouldn't be a fair match, because AZ has home-court advantage on a supercomputer and SF has home-court advantage on a PC. A really fair match would be something like 100 games on a supercomputer plus 100 games on a PC. I am very sure SF would beat AZ easily in any truly fair match. Also, I believe most people would rather run chess engines on their PC than on a supercomputer; developing and running chess engines on a supercomputer isn't very useful.

Increasing the training time may not improve the neural network, either. As shown in Figure 1 of the paper, after 300K training steps further training may not be very helpful, and sometimes (quite often) is even harmful. |
|
Dec-28-17 | | refutor: well at least it beat the French twice, makes me happy |
|
Dec-28-17 | | 50movesaheadofyou: What will AZ tell us in the future if it learns and masters other disciplines? Human sexuality, the true story of Jesus, its opinion on abortion, the origins of the universe, etc., etc. When asked about religion, what if it insults Christianity or other faiths? Many authorities will want it banned, even destroyed. It could be a dangerous device.

I say continue the research; let it show us its maximum potential. They said the same of the internet, that it could destroy the world. It certainly changed the world, and it occupies a dominant place in modern society, but it hasn't destroyed it. Maybe in the future some highly advanced self-taught version of it will, though. |
|
Dec-28-17 | | ChessHigherCat: It's dubious whether anything self-taught on the basis of self-consistent principles could ever become controversial, because it never encounters any outside influences; but if it contains contradictory principles to begin with, then it could become "heretical". For example, in medieval Scholasticism the professors at the University of Paris tried to reconcile Aristotle with the Gospels, and the brightest ones who picked up on all the contradictions, like Pierre Abelard, were condemned as heretical for "trying to look God in the face". |
|
Jan-10-18 | | zborris8: I've read comments that members aren't able to replicate the mistakes by Stockfish on their own systems. |
|
Jan-10-18 | | WorstPlayerEver: A theory of conditions. Let's elaborate.

Theoretically, there could be an application which plays perfect chess. Let's imagine such a program exists. There are 20 first moves, so there are 20 best variations. Without question. Another thought: since we do not have such a program, one might exist at some point, but how would we know it had found the best moves? It could be that at some point the best variations get played and the engine recognizes them as such, which in practice means it eventually 'stumbles upon' the best variations. But it could also be that a program eventually gets an update and predicts the best variation from the starting position, recognizing it as such. |
|
Jan-11-18 | | coxscorner: I wonder why they didn't release any of the draws? Surely some of those were interesting. |
|
Jan-11-18
 | | AylerKupp: <<zborris8> I've read comments that members aren't able to replicate the mistakes by Stockfish on their own systems.>

That wouldn't surprise me. Multi-core chess engines are notoriously non-deterministic. If you run successive analyses of the same position using the same chess engine on the same computer system to the same search depth, you will get different results. Not MAY, WILL. Guaranteed. The reason seems to be that practically every engine (except AlphaZero) uses alpha-beta pruning to increase its efficiency in traversing the search tree, and since the efficiency of alpha-beta pruning is highly dependent on move ordering, different move orders will give different results. The different move orderings are in turn the consequence of the various chess engine threads or processes being interrupted in a non-deterministic way by various operating system processes. So goes the theory. I tested it almost a year ago by analyzing the following position from W So vs Onischuk, 2017 after 13...Bxd4:

[diagram: position after 13...Bxd4]

You can see the results starting at US Championship (2017). In summary, I ran several analyses of the same position with Houdini 4, Komodo 10, and Stockfish using 1, 2, 3, and 4 threads. The results were consistent: using only 1 thread, all 3 engines were deterministic, giving the same result per engine for each of the analyses. But with 2, 3, or 4 threads the results were non-deterministic; each analysis resulted in different evaluations and, on occasion, different move rankings. Granted, it was only one position in one game, with a limited number (3) of analyses by each engine at 1, 2, 3, and 4 threads. But these types of results have been found in many other team games here at <chessgames.com>, with different positions and different engines. So, if it's not possible to get the same results using 2, 3, or 4 threads, imagine the small probability of getting the same results using 64 threads, as Stockfish did in the AlphaZero exhibition. |
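To illustrate the move-ordering effect AylerKupp describes, here is a rough sketch (mine, not Stockfish's code; a random game tree stands in for a chess position). Alpha-beta returns the same value however the moves are ordered, but the amount of work it does changes; an engine searching against a clock converts that difference in work into a different depth reached, and so potentially a different evaluation or move choice. In a real SMP engine the ordering is perturbed by thread scheduling rather than by an explicit "reverse" flag:

    import random

    def alpha_beta(node, alpha, beta, maximizing, counter, reverse):
        """Search a nested-list game tree, counting leaf evaluations."""
        if not isinstance(node, list):
            counter[0] += 1                  # one "position" evaluated
            return node
        kids = list(reversed(node)) if reverse else node
        if maximizing:
            value = float("-inf")
            for child in kids:
                value = max(value, alpha_beta(child, alpha, beta, False, counter, reverse))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break
            return value
        value = float("inf")
        for child in kids:
            value = min(value, alpha_beta(child, alpha, beta, True, counter, reverse))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

    def random_tree(depth, branching=4):
        if depth == 0:
            return random.randint(-100, 100)
        return [random_tree(depth - 1, branching) for _ in range(branching)]

    random.seed(1)
    tree = random_tree(6)
    for reverse in (False, True):
        counter = [0]
        value = alpha_beta(tree, float("-inf"), float("inf"), True, counter, reverse)
        print(f"value={value}  leaves evaluated={counter[0]}")
    # Same value both times, but different node counts. Under a fixed time
    # budget instead of a fixed tree, the two searches would not reach the
    # same depth, which is one way the non-determinism creeps in.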
|
Jan-11-18 | | todicav23: I think it is pointless to keep discussing this "private" match. A fair match needs to take place before we can conclude that AlphaZero is clearly stronger than Stockfish. Both engines have to run on equivalent hardware. It is true that AlphaZero played some amazing games, but if you look at TCEC you can also find many amazing games. |
|
Jan-11-18
 | | AylerKupp: <<WorstPlayerEver> Another theory; since we do not have such program, it could be there is a program, at some point, but how do we know it found the best moves?> Aaaah, there's the crux of the problem, even if we could agree on a definition of what constitutes the "best move". That's why I laugh at attempts to determine who the best chess players were by comparing the moves they made in their games against the "best move" that <a> chess engine suggests. Given that different top engines give different evaluations and move rankings, and that any multi-core engine will give different evaluations and move rankings of the same position when it conducts analyses at different times, how does anyone know what the "best move" really is? |
|
Jan-11-18
 | | AylerKupp: <<coxscorner> I wonder why they didn't release any of the draws? Surely some of those were interesting.> You have to remember what Google's motivation for the exhibition was: to gain publicity for the chess-playing approach used by AlphaZero. So clearly they were most interested in showing AlphaZero in the best possible light. It's not that different from a grandmaster writing a book about his best games; you are not likely to find many, if any, of his losses there. And, for that matter, only 10 of AlphaZero's games were released, even though AlphaZero had 28 wins. Presumably the other 18 wins did not show AlphaZero at its best. So, given that more than half of AlphaZero's wins did not show it to its best advantage, it's probably safe to assume that the games in which it failed to win did not do so either. |
|
Jan-11-18 | | ChessHigherCat: <todicav23: I think it is pointless to keep discussing this "private" match. A fair match needs to take place before we can conclude that AlphaZero is clearly stronger than Stockfish. Both engines have to run on equivalent hardware.>

It's all a "faux débat" (a phony debate). The games were really intended to show the advantages of "tabula rasa" reinforcement learning in lots of different games and other areas, and chess was just one of them: <The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, ***the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains***. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case> https://arxiv.org/pdf/1712.01815.pdf

If anybody's interested in doing benchmark testing on their own terms, it should be simple enough; there's no use bitching about how the testing was done for this research paper. But if you think enabling the tablebases and using more powerful software would greatly improve SF's performance, you also have to consider that a commercial version of AlphaZero would be based on tens of thousands of hours of self-teaching, not just 24! |
|
Jan-11-18 | | nok: <clearly they were most interested in showing AlphaZero in the best possible light. It's not that different than a grandmaster writing a book about his best games> Still, concealing the information is hardly scientific. |
|
Jan-11-18
 | | AylerKupp: <<nok> Still, concealing the information is hardly scientific.> True, but this was never intended by Google to be a scientific test. And, to be fair, Google never claimed that it was intended to be such. Had it been, then the testing conditions would have been much more rigorously defined and all the results published. As it was, it left many questions unanswered. Which is unfortunate, but that's the way it is. |
|
Jan-11-18 | | john barleycorn: <nok: ...
Still, concealing the information is hardly scientific.> "Concealing information" has nothing to do with science but with the ego of the scientists or their sponsors. |
|
Jan-11-18 | | WorstPlayerEver: I would not say it's not a scientific project; although it's kind of limited, any contribution is welcome :) |
|
Jan-11-18 | | zborris8: <AylerKupp: <Here is a summary of the move/line evaluations, in descending order of average evaluation, for each of the 3 analyses:

White's
Move      A1        A2        A3        <Avg>       <StdDev>
--------  --------  --------  --------  ----------  --------
14.f4     [+0.38]   [+0.33]   [+0.34]   <[+0.35]>   <0.022>
14.d3     [+0.22]   [+0.42]   [+0.37]   <[+0.34]>   <0.085>
14.Qd5    [+0.17]   [+0.41]   [+0.23]   <[+0.27]>   <0.102>
14.Qa6    [+0.24]   [+0.31]   [+0.22]   <[+0.26]>   <0.039>
14.Qc6    [+0.25]   [+0.22]   [+0.16]   <[+0.21]>   <0.037>>

<So, if it's not possible to get the same results using 2, 3, or 4 threads, imagine the small probability of getting the same results using 64 threads, as Stockfish did in the AlphaZero exhibition.>>

Interesting. Thanks for the clear explanation! |
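As a quick sanity check (my own snippet, not part of the original posts), the quoted averages and spreads can be reproduced in a few lines of Python; they match exactly when the standard deviation is taken in its population form (dividing by N rather than N-1):

    # Recompute the <Avg> and <StdDev> columns from the three analyses.
    from statistics import mean, pstdev   # pstdev = population std deviation

    evals = {
        "14.f4":  [0.38, 0.33, 0.34],
        "14.d3":  [0.22, 0.42, 0.37],
        "14.Qd5": [0.17, 0.41, 0.23],
        "14.Qa6": [0.24, 0.31, 0.22],
        "14.Qc6": [0.25, 0.22, 0.16],
    }
    for move, vals in evals.items():
        print(f"{move:<7} avg={mean(vals):+.2f}  stdev={pstdev(vals):.3f}")
    # e.g. 14.f4  avg=+0.35  stdev=0.022 -- matching the table above.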
|
Jan-29-18 | | dannygjk: SF 70,000,000 nps, A0 80,000 nps. BTW, A0 did not play its match games on the same hardware it trained on. |
|
Mar-07-18 | | Pirandellus: What has become of Alpha-Zero? No news at all? |
|
Mar-07-18 | | ChessHigherCat: <Pirandellus: What has become of Alpha-Zero? No news at all?> No, it has gone underground as secret agent Zero-Zero-Alpha. |
|
Mar-18-18 | | WorstPlayerEver: PS: The games do make a somewhat contrived impression on me, though. |
|
Jul-01-18 | | PJs Studio: As chess players we greet this groundbreaking computer achievement "wanting more". Our rights to this AZ tech are nil. The efforts will be used by NASA, design, eco-friendly power production (if we're lucky), and possibly to solve questions that human beings are smart enough to ask but not to quantify. It's possible we will see advanced chess engines come out of this experiment, but more likely chess will not be high on the list for such high-level computing advancements. The damn thing has its own security team (as it should). This is big. Bigger than chess.

Darn, SF and Houdini are PLENTY to show us the failings of even the strongest GMs anyway. We hardly need more to understand the game further as mere mortals. What we already received from this match may have been enough, for now anyway. As a mediocre expert I agree with you guys! I want to see AZ play MORE AND MORE! But simply put: its amazing potential dwarfs the importance of any contribution it can make to chess. Damn, this thing spent four hours becoming stronger than 200 years of incredible human advancement in our game. FFS |
|
Jun-07-21 | | login: Chess, Artificial Intelligence, and Epistemic Opacity, by Paul Grünke (academic and bridge player), 2019 (updated): https://inftars.infonia.hu/pub/inft...

Abstract: <In this paper, I describe the technical differences between the two chess engines and based on that, I discuss the impact of the modeling choices on the respective epistemic opacities. I argue that the success of AlphaZero's approach with neural networks and reinforcement learning is counterbalanced by an increase in the epistemic opacity of the resulting model.> |
|