On December 4th, 2017, Google's London-based DeepMind team applied its AI to the game of chess. The event was more of an experiment than a chess exhibition, and the results are groundbreaking in the fields of both computing and chess.
Dec-26-17 | Marmot PFL: Unlike chess, bridge is an incomplete-information game where teamwork is also very important.
<The question whether bridge-playing programs will reach world-class level in the foreseeable future is not easy to answer. Computer bridge has not attracted an amount of interest anywhere near to that of computer chess. On the other hand, researchers working in the field have accomplished most of the current progress in the last decade.>
Dec-26-17 | ChessHigherCat: <Marmot PFL> computers are definitely gifted at remaining poker-faced, but you would need special peripherals so they could give their partners footsie signals under the table.
Every poker player I know claims that the psychology of gestures and bluffing is a big part of the game. Computers could be programmed to bluff based on the expectations raised by their past playing patterns, but they would be a bit limited when it comes to gestures. Anyway, it would be interesting to let a computer player "sit down" at the table with professionals to see how important the psychological aspects really are.
Dec-26-17 | OhioChessFan: <CHC: Why on earth would you take it for granted that I don't know anything about heuristics or probabilities, that's like some kind of stupid racial prejudice.>
Looking at your picture, I can't tell if you think <AK> is prejudiced against your black or your yellow skin.
Dec-26-17 | OhioChessFan: <Marmot: Why hire a composer or pay for the rights to use music when a computer can do the job for pennies on the dollar?>
Unemployed composers don't have the money to buy computers. Just saying. But high-tech professionals are hopelessly clueless in discussing such things. I have a brother, an electrical engineer, who is stunningly, perhaps purposely, oblivious to the employment ramifications of technological innovation.
Dec-26-17 | ChessHigherCat: <OhioChessFan: Looking at your picture, I can't tell if you think <AK> is prejudiced against your black or your yellow skin.>
Most people find all my skin colors a'peeling.
Dec-26-17 | WorstPlayerEver: Yeah, I wrote this song back in 1952 with CSIRAC
Dec-27-17 | LameJokes55: <AylerKupp: Given that AlphaZero uses specialized hardware (Tensor Processing Units, or TPUs), and a proprietary one at that, I doubt that there will be PCs available with these anytime in the near future.>
1. Fair Match: If that's the case, then how do we create a level playing field for both opponents (AZ and SF)? By loading the original (not truncated) version of SF on a supercomputer. Then let AZ evolve itself with months of self-teaching. Both giants then square up for a 100-game rematch. A repeat performance by AZ in the second match would lend legitimacy to the current result. Of course, this is wishful thinking on my part. There is no guarantee of a rematch in the near future.
2. Application of Self-teaching: Google does not plan to enter the chess or Go market. They have conducted their media communication in a hush-hush manner. The mystery and suspense could give Harlan Coben a run for his money!
Where could they try this (self-teaching) program next?
Self-teaching works for human games where the rules are few and the evolution is manageable. I don't think it could be applied to something like language or writing, which are too indefinite.
As one kibitzer pointed out, music might represent such an opportunity. The human ear hears sound frequencies in the range of 20 to 20,000 Hz. In addition, there are just 12 musical notes. Google's program could use this information to work out various permutations and combinations, thus creating a number of musical compositions.
How melodious that ends up sounding to the human ear is another matter altogether!
Dec-27-17 | ChessHigherCat: <LameJokes55: Where could they try this (self-teaching) program, next?
I don't think, it could be applied to something like language or writing, being too indefinite.
As one kibitzer pointed out, music might represent such an opportunity. Google program could use this information to work out various permutations and combinations. Thus, creating a number of musical compositions.
How melodious that ends up on human ear is altogether another matter!>
Right, that's why human coaching would be necessary for anything beyond binary decisions (like winning move vs. losing move). Computers can generate beautiful fractal images based on mathematical formulas, but that's a "no-brainer", pure calculation. There seems to be an insurmountable barrier between quantitative thought and the kind of qualitative thought requiring judgement and imagination; but who knows, maybe if the computing power reaches a certain level on a "neural network", computers will really be able to think like humans (for what it's worth :-) All the science fiction writers thought there would be "thinking androids" by this time. Maybe that's completely unrealistic, though; just as no computer program will probably ever be able to predict cloud formation, there are just too many variables. Some people were so optimistic that they thought it would be possible to hear original performances by Mozart by retracing the vibrations of the air molecules produced by the instruments back to the 18th century :-) Quantum theory has pretty much destroyed that kind of absolute determinism.
Dec-27-17 | markz: <LameJokes55:
1 Fair Match: If that's the case, then how do we create a level-playing field for both opponents (AZ and SF)? By loading original (and not truncated) version of SF on a super computer. Then, let AZ evolve itself with months of self-teaching. Both giants then square up for a 100-game rematch. A repeat performance by AZ in the second match would lend legitimacy to the current result. Of course, this is a wishful thinking on my part. There is no guarantee of the rematch in near future.>
Even if I think SF on a supercomputer can beat AZ, it still isn't a fair match, because AZ has home-court advantage on the supercomputer and SF has home-court advantage on the PC. A really fair match should be something like 100 games on the supercomputer and 100 games on a PC.
I am very sure SF can beat AZ easily in any really fair match. Also, I believe most people would like to use chess engines on their PCs instead of a supercomputer. Developing and running chess engines on a supercomputer isn't very useful.
Increasing the training time may not improve the neural network. As shown in Figure 1 of the paper, after 300K training steps further training may not be very helpful, and is sometimes (quite often) even harmful.
Dec-28-17 | refutor: well, at least it beat the French twice; makes me happy
Dec-28-17 | 50movesaheadofyou: What will AZ tell us in the future if it learns and masters other disciplines? Human sexuality, the true story of Jesus, its opinion on abortion, the origins of the universe, etc.
When asked about religion, what if it insults Christianity or other faiths?
Many authorities will want it banned, even destroyed. It could be a dangerous device.
I say continue the research; let it show us its maximum potential. They said the same of the internet, that it could destroy the world. It changed the world, for sure; it occupies a dominant place in modern society but hasn't destroyed it. Maybe in the future some highly advanced self-taught version of it will, though.
Dec-28-17 | ChessHigherCat: It's dubious whether anything self-taught based on self-consistent principles could ever become controversial, because it can never encounter any outside influences, but if it contains contradictory principles to begin with then it could become "heretical". For example, in medieval Scholasticism the professors at the University of Paris tried to reconcile Aristotle with the Gospels, and the brightest professors who picked up on all the contradictions, like Pierre Abelard, were criticized as heretical for "trying to look God in the face".
Jan-10-18 | zborris8: I've read comments that members aren't able to replicate the mistakes by Stockfish on their own systems.
Jan-10-18 | WorstPlayerEver: A theory of conditions. Let's elaborate.
Theoretically, there could be a program which plays perfect chess.
Let's imagine there is such a program. From the starting position there are 20 legal moves, so there are 20 best variations. Without question.
Another thought: since we do not have such a program, there could be one at some point, but how would we know it had found the best moves?
It could be that at some point the best variations are played and the engine recognizes them as such, which in practice means it eventually 'stumbles upon' the best variations.
However, it could also be the case that a program eventually gets an update and will predict the best variation from the starting position and recognize it as such.
Jan-11-18 | coxscorner: I wonder why they didn't release any of the draws? Surely some of those were interesting.
Jan-11-18 | AylerKupp: <<zborris8> I've read comments that members aren't able to replicate the mistakes by Stockfish on their own systems.>
That wouldn't surprise me. Multi-core chess engines are notoriously non-deterministic. If you run successive analyses of the same position using the same chess engine on the same computer system to the same search depth, you will get different results. Not MAY, WILL. Guaranteed. The reason seems to be that, since practically every engine (except AlphaZero) uses alpha-beta pruning to increase its efficiency in traversing the search tree, and since the efficiency of alpha-beta pruning is highly dependent on move ordering, different move orders will give different results. The different move orderings are in turn the consequence of the various chess engine threads or processes being interrupted in a non-deterministic way by various operating system processes.
So goes the theory. I tested it almost a year ago by analyzing the position from W So vs Onischuk, 2017 after 13...Bxd4.
You can see the results starting at US Championship (2017). In summary, I ran several analyses of the same position with Houdini 4, Komodo 10, and Stockfish using 1, 2, 3, and 4 threads. The results were consistent: using only 1 thread, all 3 engines were deterministic, giving the same result per engine for each of the analyses. But with 2, 3, or 4 threads the results were non-deterministic; each analysis resulted in different evaluations and, on occasion, different move rankings.
Granted, it was only one position from one game, with a limited number (3) of analyses by each engine using 1, 2, 3, and 4 threads. But these types of results have been found in many other team games here at <chessgames.com>, with different positions and using different engines.
So, if it's not possible to get the same results using 2, 3, or 4 threads, imagine the small probability of getting the same results using 64 threads, as Stockfish did in the AlphaZero exhibition.
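[Editor's note: the move-ordering sensitivity <AylerKupp> describes can be seen even in a toy sketch. This is illustrative code, not any real engine's search: the same game tree, searched with plain alpha-beta in two different child orders, returns the same minimax value but visits a different number of nodes. In a real multi-core engine the reordering comes from non-deterministic thread scheduling rather than a hand-reversed list.]

```python
# Toy illustration (not real engine code): alpha-beta prunes more of the
# tree when the strongest move is searched first.

def alphabeta(node, alpha, beta, maximizing, counter):
    """Alpha-beta over a nested-list tree; integer leaves are evaluations."""
    counter[0] += 1                      # count every node visited
    if isinstance(node, int):            # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: prune remaining children
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, counter))
        beta = min(beta, value)
        if alpha >= beta:                # alpha cutoff
            break
    return value

# The same position, with replies tried best-first vs. best-last.
good_order = [[5, 6, 7], [2, 8, 9], [1, 10, 11]]   # best line searched first
bad_order = [[11, 10, 1], [9, 8, 2], [7, 6, 5]]    # best line searched last

for tree in (good_order, bad_order):
    counter = [0]
    value = alphabeta(tree, float("-inf"), float("inf"), True, counter)
    print(f"value={value}, nodes visited={counter[0]}")
```

Both orders return the same value (5), but the well-ordered search visits 9 nodes while the badly ordered one visits 13. At tournament search depths the gap is enormous, and with multiple threads the effective ordering shifts from run to run, which is exactly why the evaluations differ.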
Jan-11-18 | todicav23: I think it is pointless to keep discussing this "private" match. A fair match needs to take place in order to conclude that AlphaZero is clearly stronger than Stockfish. Both engines have to run on equivalent hardware.
It is true that AlphaZero played some amazing games. But if you look at TCEC you can also find many amazing games.
Jan-11-18 | AylerKupp: <<WorstPlayerEver> Another theory; since we do not have such program, it could be there is a program, at some point, but how do we know it found the best moves?>
Aaaah, there's the crux of the problem, even if we could agree on a definition of what constitutes the "best move". That's why I laugh at attempts to determine who the best chess players were by comparing the moves they made in their games against the "best move" that <a> chess engine suggests. Given that different top engines give different evaluations and move rankings, and that any multi-core engine will give different evaluations and move rankings of the same position when it conducts analyses at different times, how does anyone know what the "best move" really is?
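[Editor's note: the "engine agreement" method being criticized can be sketched in a few lines. Everything below is invented for illustration (the moves, the engine names, the "top choices"); the point is only that two reference engines which disagree about the best move hand the same player two different scores.]

```python
# Hypothetical sketch of the "engine agreement" metric criticized above.
# All moves and engine outputs below are made up for illustration.

player_moves = ["e4", "Nf3", "Bb5", "O-O", "Re1"]

# Two imaginary engines disagree on which moves were "best":
engine_top_choices = {
    "EngineA": ["e4", "Nf3", "Bc4", "O-O", "Re1"],
    "EngineB": ["e4", "Nc3", "Bb5", "d4", "Re1"],
}

def agreement(played, best):
    """Fraction of moves where the player matched the engine's top choice."""
    matches = sum(p == b for p, b in zip(played, best))
    return matches / len(played)

for engine, top in engine_top_choices.items():
    print(f"{engine}: {agreement(player_moves, top):.0%} agreement")
```

The same five moves score 80% against one "oracle" and 60% against the other, so any ranking of players produced this way depends on which engine, and, per the thread, even which run of the same engine, supplies the reference moves.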
Jan-11-18 | AylerKupp: <<coxscorner> I wonder why they didn't release any of the draws? Surely some of those were interesting.>
You have to remember what Google's motivation for the exhibition was: to gain publicity for the chess-playing approach used by AlphaZero. So clearly they were most interested in showing AlphaZero in the best possible light. It's not that different from a grandmaster writing a book about his best games; you are not likely to find many, if any, of his losses there. And, for that matter, only 10 of AlphaZero's games were released, even though AlphaZero had 28 wins. Presumably the other 18 wins did not show AlphaZero at its best. So, given that more than half of AlphaZero's wins did not show it to its best advantage, it's probably safe to assume that the games in which it failed to win did not do so either.
Jan-11-18 | ChessHigherCat: <todicav23: I think it is pointless to keep discussing about this "private" match. A fair match needs to take place in order to conclude that AlphaZero is clearly stronger than Stockfish. Both engines have to run on equivalent hardware.>
It's all a "faux débat". The games were really intended to show the advantages of "tabula rasa" reinforcement learning in lots of different games and other areas, and chess was just one of them:
<The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, ***the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains***. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case>
If anybody's interested in doing benchmark testing on their own terms, it should be simple enough; there's no use bitching about how the testing was done for this research paper. But if you think enabling the tablebases and using more powerful software will greatly improve SF's performance, you also have to consider that a commercial version of AlphaZero would be based on tens of thousands of hours of self-teaching, not just 24!
Jan-11-18 | nok: <clearly they were most interested in showing AlphaZero in the best possible light. It's not that different than a grandmaster writing a book about his best games>
Still, concealing the information is hardly scientific.
Jan-11-18 | AylerKupp: <<nok> Still, concealing the information is hardly scientific.>
True, but this was never intended by Google to be a scientific test. And, to be fair, Google never claimed that it was intended to be such. Had it been, then the testing conditions would have been much more rigorously defined and all the results published. As it was, it left many questions unanswered. Which is unfortunate, but that's the way it is.
Jan-11-18 | john barleycorn: <nok: ...
Still, concealing the information is hardly scientific.>
"Concealing information" has nothing to do with science but with the ego of the scientists or their sponsors.
Jan-11-18 | WorstPlayerEver: I would not say it's not a scientific project; although it's kind of limited, any contribution is welcome :)
Jan-11-18 | zborris8: <AylerKupp: Here is a summary of the move/line evaluations, in descending order of average evaluation, for each of the 3 analyses:
Move A1 A2 A3 <Avg> <StdDev>
-------- -------- ----------- --------- -------- ----------
14.f4 [+0.38] [+0.33] [+0.34] <[+0.35]> <0.022>
14.d3 [+0.22] [+0.42] [+0.37] <[+0.34]> <0.085>
14.Qd5 [+0.17] [+0.41] [+0.23] <[+0.27]> <0.102>
14.Qa6 [+0.24] [+0.31] [+0.22] <[+0.26]> <0.039>
14.Qc6 [+0.25] [+0.22] [+0.16] <[+0.21]> <0.037>>
<So, if it's not possible to get the same results using 2, 3, or 4 threads, imagine the small probability of getting the same results using 64 threads as Stockfish used in the AlphaZero exhibition.>
Interesting. Thanks for the clear explanation!
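[Editor's note: for what it's worth, the averages and standard deviations in the quoted table check out; they can be reproduced with the population standard deviation (dividing by n rather than n-1), for example:]

```python
# Reproducing the summary statistics from the quoted evaluation table.
from statistics import mean, pstdev  # pstdev = population standard deviation

evals = {
    "14.f4":  [0.38, 0.33, 0.34],
    "14.d3":  [0.22, 0.42, 0.37],
    "14.Qd5": [0.17, 0.41, 0.23],
    "14.Qa6": [0.24, 0.31, 0.22],
    "14.Qc6": [0.25, 0.22, 0.16],
}

for move, e in evals.items():
    print(f"{move:7} avg={mean(e):+.2f} stddev={pstdev(e):.3f}")
```

The output matches the table line for line (e.g. 14.f4: average +0.35, standard deviation 0.022), which confirms the table used the divide-by-n convention.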
Copyright 2001-2018, Chessgames Services LLC