chessgames.com

AlphaZero (Computer)
  
Number of games in database: 220
Years covered: 2017 to 2018

Overall record: +62 -11 =147 (61.6%)*
   * Overall winning percentage = (wins+draws/2) / total games in the database.
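The formula above can be checked against the header numbers (a quick sketch using the +62 -11 =147 record over 220 games shown above):

```python
# Winning percentage as defined above: (wins + draws/2) / total games.
wins, losses, draws = 62, 11, 147        # from the overall record +62 -11 =147
total = wins + losses + draws            # 220 games
pct = (wins + draws / 2) / total
print(f"{pct:.1%}")                      # prints 61.6%, matching the header
```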

MOST PLAYED OPENINGS
With the White pieces:
 Queen's Indian (40) 
    E15 E17 E16 E18
 English (12) 
    A17 A15
 French Defense (12) 
    C11 C02 C14 C13 C18
 Nimzo Indian (12) 
    E21 E46 E53 E47 E41
 Queen's Pawn Game (9) 
    E00 D02 E10 A45
 Semi-Slav (9) 
    D43 D44 D45
With the Black pieces:
 Ruy Lopez (24) 
    C67 C65 C92 C95 C69
 Sicilian (7) 
    B94 B89 B67 B48 B90
 Giuoco Piano (6) 
    C50 C53
 King's Indian (5) 
    E60 E84 E87 E99 E81
 French Defense (4) 
    C11 C18 C13 C14
 Queen's Gambit Declined (4) 
    D31 D38 D39 D37
Repertoire Explorer

NOTABLE GAMES:
   AlphaZero vs Stockfish, 2017 1-0
   AlphaZero vs Stockfish, 2018 1-0
   AlphaZero vs Stockfish, 2017 1-0
   AlphaZero vs Stockfish, 2017 1-0
   AlphaZero vs Stockfish, 2018 1-0
   AlphaZero vs Stockfish, 2017 1-0
   Stockfish vs AlphaZero, 2017 0-1
   Stockfish vs AlphaZero, 2018 1/2-1/2
   Stockfish vs AlphaZero, 2018 0-1
   AlphaZero vs Stockfish, 2018 1/2-1/2

NOTABLE TOURNAMENTS:
   AlphaZero - Stockfish (2017)
   AlphaZero - Stockfish Match (2018)

GAME COLLECTIONS:
   Game Changer by keypusher
   Alphazero brilliancies by Elesius
   Stockfish - AlphaZero (2017) by hukes70
   AlphaZero by ThirdPawn

RECENT GAMES:
   🏆 AlphaZero - Stockfish Match
   Stockfish vs AlphaZero (Jan-18-18) 0-1
   Stockfish vs AlphaZero (Jan-18-18) 1/2-1/2
   AlphaZero vs Stockfish (Jan-18-18) 1/2-1/2
   AlphaZero vs Stockfish (Jan-18-18) 1/2-1/2
   Stockfish vs AlphaZero (Jan-18-18) 1-0



ALPHAZERO (COMPUTER)


AlphaZero is an application of Google DeepMind's AI research to chess and shogi. In late 2017 experiments, it quickly demonstrated itself superior to any technology that would otherwise be considered leading-edge.

(1) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm - https://arxiv.org/pdf/1712.01815.pdf

https://www.chessprogramming.org/Al...

Wikipedia article: AlphaZero

Last updated: 2018-12-02 14:34:00

 page 1 of 9; games 1-25 of 220
 Game                         Result  Moves  Year  Event/Locale                 Opening
  1. AlphaZero vs Stockfish    1-0      68   2017  AlphaZero - Stockfish        E16 Queen's Indian
  2. Stockfish vs AlphaZero    0-1      87   2017  AlphaZero - Stockfish        C65 Ruy Lopez, Berlin Defense
  3. AlphaZero vs Stockfish    1-0     100   2017  AlphaZero - Stockfish        E16 Queen's Indian
  4. Stockfish vs AlphaZero    0-1      67   2017  AlphaZero - Stockfish        C65 Ruy Lopez, Berlin Defense
  5. AlphaZero vs Stockfish    1-0      70   2017  AlphaZero - Stockfish        E17 Queen's Indian
  6. AlphaZero vs Stockfish    1-0     117   2017  AlphaZero - Stockfish        E17 Queen's Indian
  7. AlphaZero vs Stockfish    1-0      56   2017  AlphaZero - Stockfish        E17 Queen's Indian
  8. AlphaZero vs Stockfish    1-0      95   2017  AlphaZero - Stockfish        C11 French
  9. AlphaZero vs Stockfish    1-0      52   2017  AlphaZero - Stockfish        C11 French
 10. AlphaZero vs Stockfish    1-0      60   2017  AlphaZero - Stockfish        E15 Queen's Indian
 11. AlphaZero vs Stockfish    1-0     105   2018  AlphaZero - Stockfish Match  E16 Queen's Indian
 12. AlphaZero vs Stockfish    1-0      49   2018  AlphaZero - Stockfish Match  E00 Queen's Pawn Game
 13. Stockfish vs AlphaZero    ½-½      58   2018  AlphaZero - Stockfish Match  C50 Giuoco Piano
 14. AlphaZero vs Stockfish    ½-½      77   2018  AlphaZero - Stockfish Match  E15 Queen's Indian
 15. AlphaZero vs Stockfish    ½-½      91   2018  AlphaZero - Stockfish Match  D43 Queen's Gambit Declined Semi-Slav
 16. AlphaZero vs Stockfish    ½-½     135   2018  AlphaZero - Stockfish Match  E15 Queen's Indian
 17. AlphaZero vs Stockfish    ½-½      70   2018  AlphaZero - Stockfish Match  E15 Queen's Indian
 18. AlphaZero vs Stockfish    1-0      55   2018  AlphaZero - Stockfish Match  C57 Two Knights
 19. Stockfish vs AlphaZero    ½-½      81   2018  AlphaZero - Stockfish Match  B16 Caro-Kann, Bronstein-Larsen Variation
 20. Stockfish vs AlphaZero    ½-½      67   2018  AlphaZero - Stockfish Match  A45 Queen's Pawn Game
 21. Stockfish vs AlphaZero    ½-½     143   2018  AlphaZero - Stockfish Match  B06 Robatsch
 22. AlphaZero vs Stockfish    ½-½     101   2018  AlphaZero - Stockfish Match  B80 Sicilian, Scheveningen
 23. AlphaZero vs Stockfish    ½-½     112   2018  AlphaZero - Stockfish Match  A26 English
 24. AlphaZero vs Stockfish    1-0      49   2018  AlphaZero - Stockfish Match  E16 Queen's Indian
 25. AlphaZero vs Stockfish    1-0     112   2018  AlphaZero - Stockfish Match  E00 Queen's Pawn Game

Kibitzer's Corner
< Earlier Kibitzing  · PAGE 38 OF 38 ·  Later Kibitzing>
Jun-16-20  scholes: Another Stockfish vs Leela TCEC superfinal is going to start in two days.
Jun-30-20  scholes: Matthew Sadler's analysis of a Leela immortal from the ongoing TCEC superfinal

https://www.youtube.com/watch?v=jMl...

Game link here

https://tcec-chess.com/#div=sf&game...

Jun-30-20  scholes: Leela sacs a queen and three pawns for two knights
Jul-31-20
Premium Chessgames Member
  keypusher: <scholes: Leela sacs a queen and three pawns for two knights>

That is a mind-blowing game! Thanks for sharing.

Here are the moves for anyone who is curious:

1. d4 Nf6 2. c4 e6 3. Nf3 Bb4+ 4. Nbd2 O-O 5. a3 Be7 6. e4 d5 7. e5 Nfd7 8. Bd3 c5 9. h4 g6 10. O-O Nc6 11. Nb3 Bxh4 12. Bh6 Re8 13. Re1 cxd4 14. Qc2 dxc4 15. Bxc4 Nb6 16. Rad1 Bd7 17. Nc5 Rc8 18. b4 Nxc4 19. Qxc4 Be7 20. Ne4 Nxb4 21. Qxb4 Bxb4 22. axb4 f5 23. Nf6+ Kh8 24. Rxd4 Rc7 25. Red1 Re7 26. b5 b6 27. Kh2 Rb7 28. Ng5 Qc8 29. R1d2 Rc7 30. Rd6 Rb7 31. R2d4 Rc7 32. Rd1 Rb7 33. R6d4 Rc7 34. f4 Rb7 35. Nxe6 Rxe6 36. Nxd7 Kg8 37. Nf6+ Kf7 38. Rd8 Qc5 39. Nxh7 Re8 40. e6+ Rxe6 41. Ng5+ Kf6 42. Rf8+ Qxf8 43. Bxf8 Rc7 44. Rd4 Rb7 45. Kg3 Rc7 46. Rd3 Rb7 47. Kh4 Rc7 48. Kg3 Rc4 49. Rd7 Re3+ 50. Kf2 Rxf4+ 51. Kxe3 Ra4 52. Be7+ Ke5 53. Kf3 1-0

I would hate to be a pawn under LC0's command.

Jul-31-20  SChesshevsky: <scholes: Leela sacs a queen and three pawns for two knights>

I think this game is a nice example of how these learning-engines have a pretty good advantage over standard evaluation based engines. Seems basically these learning engines work backward. Starting with the preferable goal of winning or at least drawing, then just figuring out all the steps to get there from any given position thru trial and error. Of course, this takes millions of trial and error games and magnitudes more trial and error moves from any position. But once that learning is accomplished, the learning engine knows what gives it the best chance statistically for a win or at least a draw.

Appears the standard evaluation based engines work forward. From any given position the engine evaluates the programmed pluses and minuses and calculates how to proceed by looking at the position x number of moves ahead and seeing how that evaluation number compares with the current estimate. That evaluation makes no claim as to how the game will conclude. It just says who is deemed to be better and by how much.

This game seems to show the great benefit of knowing the odds of an outcome from any given position versus just estimating who's better.

Looks like at 21. Qxb4, LC0 shows something like a 65% winning percentage for the sac. Now that might not mean much if its statistical learning base is 10 games. But assuming it's a statistically significant number, that's meaningful. More striking is that at the same time Stockfish appears to show white has a negligible advantage. Pretty much an even game with assumed drawing chances.

Basically, Leela is saying "I know this sac works 65% of the time based on my millions of games experience. I don't even need to know why. All I know is that I've seen it plenty of times and I know it works."

Stockfish appears to be saying "The sac is interesting but I looked out X number of moves and calculated all the variations and I don't see you have anything."

Then just a few moves later it appears Stockfish admits "Looking a little farther now, that sac was really good." Leela rubs it in with "Told you so."

Seems these learning-engines are always going to have the house edge due to their statistically significant knowledge of what probably ends up happening in the future. Versus a standard evaluation with a seemingly limited horizon that has no clue as to a final outcome. Maybe it's a big advantage or maybe not so big. But it appears to be enough to tip the scales in learning's favor now, plus a brighter future as its statistical base grows.

Jul-31-20
Premium Chessgames Member
  keypusher: SChesshevsky: I finally looked at that video of the Dutch Defense game you posted, which was also mind blowing.
Aug-14-20  SChesshevsky: Not sure if I'd say these new machine learning engines, like AlphaZero and Leela, are that related to artificial intelligence.

Yes, they apparently do learn the game on their own. But it appears to take extraordinary computing power and massive trial and error. Seems the result is based more on brute force computing than anything else.

Think a case can be made that a standard evaluation based engine's basis and result is actually more "intelligent". At least in human terms. Plus they seemingly require noticeably less resources to work adequately.

Here's a somewhat chess related example of when artificial intelligence can apparently be quite dumb when not supported by brute force computing:

https://www.youtube.com/watch?v=KSj...

Oct-31-20  get Reti: Here is an interesting thought I had. We've found that although neural networks don't think as many moves deep as traditional engines, they perform better due to a more conceptual understanding of the position. Is there any two-player board game (maybe Go?) where this effect is even more true, where a computer that can only think maybe 2 or 3 moves ahead can outperform one that can think 20 moves ahead, due to a more "conceptual" understanding?
Nov-01-20
Premium Chessgames Member
  AylerKupp: <<SChesshevsky> Here's a somewhat chess related example of when artificial intelligence can apparently be quite dumb when not supported by brute force computing>

It's actually quite scary to me that AI-based algorithms are being used to take actions when no one can figure out how the algorithm arrived at the conclusions it did. In this case it did not result in any physical harm to anyone but wait until inadequately tested driverless cars are allowed on the roads without anyone actually knowing what they are going to do in situations that were not part of their training set.

But sometimes they do provide comic relief. Amazon uses AI-driven algorithms to determine what items you might be interested in buying besides the ones you have looked at. A couple of years ago I got an email from Amazon indicating that "Because of your browsing history you might benefit from the discounts available to you by joining Amazon Mom." I have no idea how my Amazon browsing history suggested that I would be interested in joining Amazon Mom but I then began getting emails suggesting breast pumps, diapers, etc. as possibly desirable purchases.

But now they have gone to the other extreme. Typically a day or two after I have looked at an item I get an email from Amazon indicating that "Because of your recent browsing history you might be interested in ..." and list the item that I had looked at previously. After all, unless I made a mistake, why wouldn't I be interested in an item I chose to look at? I wonder how many servers containing TPUs or GPUs were required to figure out that I might be interested in an item that I had previously looked at.

Nov-01-20
Premium Chessgames Member
  AylerKupp: <get Reti> I think that neural chess engines perform better than classic chess engines because of the overwhelming computational capabilities of the hardware they use, Tensor Processing Units (TPUs) as used by AlphaZero in its matches with Stockfish and Graphics Processing Units (GPUs) used by LeelaC0 in its recent TCEC matches with Stockfish. And if they have a better conceptual understanding of positions, that's because their much superior computational capabilities allow them to determine it.

I had earlier estimated that AlphaZero enjoyed about an 80X computational capability advantage over Stockfish in their 2018 matches. And Deep Mind in their second paper (https://science.sciencemag.org/cont..., Figure 2) shows what happens when AlphaZero's computational performance advantage over Stockfish is decreased by reducing its allowable time/game. Somewhere between reducing AlphaZero's time/game from 1/10 to 1/30 of Stockfish's time/game, Stockfish begins to win the matches. By interpolating between a 1/30 and a 1/100 AlphaZero time/game reduction, I estimated that if AlphaZero was given only 1/80 time/game compared to Stockfish (thus equalizing the number of operations that each hardware system could execute in the same amount of time), Stockfish would have defeated AlphaZero by a greater margin than AlphaZero defeated Stockfish in their 2018 matches.

A similar situation exists with LeelaC0 in its TCEC matches with Stockfish. I estimated that LeelaC0 by virtue of its GPU support enjoyed about a 5X computational advantage over Stockfish given the configuration of the TCEC's CPU and GPU servers. Yet in the last 6 TCEC Superfinals LeelaC0 has only been able to defeat Stockfish 2 out of 6 times in spite of its much superior computational capability.

And I don't know why you say that "we've" found that neural networks (NN) don't think as many moves deep as traditional engines. NN-based engines typically use a version of Monte Carlo Tree Search (MCTS) which, in its pure form, estimates the scoring probability for each candidate move from a position by conducting simulated playouts of many games, thus actually looking at the results of each combination of moves through the end of the simulated games. And you can't look any deeper than that! Classic engines seldom get a chance to look to the end of the game unless they find a forced mate or reach positions where they can use tablebases.
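The pure-MCTS idea described above (score each candidate move by random playouts simulated to the end of the game) can be sketched on a toy game. This is an illustration only, not how AlphaZero or LeelaC0 implement it; the game, function names, and playout count are invented for the example:

```python
import random

# Pure Monte Carlo move selection on the toy game Nim: players alternately
# take 1-3 stones, and whoever takes the last stone wins. Each candidate
# move is scored by the fraction of purely random playouts it wins, with
# every playout simulated to the very end of the game.

def random_playout(stones, my_turn):
    """Play random moves to the end; return True if 'my' side wins."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn        # the side that just moved took the last stone
        my_turn = not my_turn
    return not my_turn

def mc_best_move(stones, playouts=3000):
    """Return (best move, scores), scoring each move by its playout win rate."""
    scores = {}
    for take in range(1, min(3, stones) + 1):
        rest = stones - take
        if rest == 0:
            scores[take] = 1.0    # taking the last stone wins outright
        else:                     # the opponent moves next in the playouts
            wins = sum(random_playout(rest, my_turn=False) for _ in range(playouts))
            scores[take] = wins / playouts
    return max(scores, key=scores.get), scores

random.seed(0)                    # fixed seed so the sketch is reproducible
move, scores = mc_best_move(5)
print(move, scores)               # taking 1 (leaving 4 stones) scores best
```

As the comment notes, AlphaZero and LeelaC0 replace these uniform random playouts with neural-network estimates of the playout result, which is what makes the approach strong in practice.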

But the actual champion in getting superior results with shallow searches has to be Capablanca. When he was once asked how many moves deep he looked he supposedly replied "Only one. But it's always the best move." Of course, he was probably joking, figuring out that any interviewer that would ask him such a question would not realize that Capablanca was pulling his leg.

Nov-01-20
Premium Chessgames Member
  0ZeR0: Is AlphaZero still being developed or have they abandoned it? And if the latter has Google stated any further intentions with their DeepMind AI project?
Nov-21-20
Premium Chessgames Member
  fredthebear: Anna Rudolf explains AlphaZero's attack: https://www.youtube.com/watch?v=nPe...
Nov-23-20
Premium Chessgames Member
  AylerKupp: <0ZeR0> I suspect that all development of AlphaZero has stopped and AlphaZero has been abandoned. After all, the effort was made to gain publicity and sales for Google's Tensor Processing Units and DeepMind's neural network-based reinforcement training for use in other applications. So, like IBM with Deep Blue after beating Kasparov in 1997, Google/DeepMind had achieved all that they hoped to accomplish. And, after all, it's not like Google/DeepMind was going to market and sell AlphaZero as a product. What would they possibly gain by continuing the AlphaZero efforts? What could they have achieved that would have topped their 2018 results against Stockfish?
Nov-23-20
Premium Chessgames Member
  keypusher: <
But now they have gone to the other extreme. Typically a day or two after I have looked at an item I get an email from Amazon indicating that "Because of your recent browsing history you might be interested in ..." and list the item that I had looked at previously. After all, unless I made a mistake, why wouldn't I be interested in an item I chose to look at? I wonder how many servers containing TPUs or GPUs were required to figure out that I might be interested in an item that I had previously looked at.>

<AK>

The funniest example of this sort of algorithm at work is me getting targeted ads for a Broadway show I had seen the night before. Unless I REALLY like it, once is enough. And if a show is that good, I can probably remember it myself.

Of course this hasn’t been a problem lately.

Nov-23-20
Premium Chessgames Member
  AylerKupp: <fredthebear> I take Anna Rudolf's videos with a grain of salt. Yes, she's a very good player, but she seems to talk like an AlphaZero groupie, with a great deal of anthropomorphizing and a "gee whiz" attitude full of gushing praise. Besides, she doesn't really know how AlphaZero works; even those who developed AlphaZero claim that they don't know the specifics of how AlphaZero's neural network works, a claim that I find very dubious since the values stored in each of the neural network's nodes are recorded, as are the calculations performed to produce the results.

Sometimes she attributes very deep motivation to AlphaZero's moves and tries to rationalize them afterwards as I indicated in Hikaru Nakamura (kibitz #22678). The fact that AlphaZero simply made a mistake did not even occur to her.

Don't get me wrong, I like Anna Rudolf and her video analyses in general. And I have even written to her to compliment her. But, when it comes to AlphaZero, she becomes a cheerleader and loses all her objectivity, and I don't think that anyone should take her comments in these videos too seriously.

Nov-25-20
Premium Chessgames Member
  AylerKupp: <<SChesshevsky> I think this game is a nice example of how these learning-engines have a pretty good advantage over standard evaluation based engines. Seems basically these learning engines work backward.>

Sorry for the delay in commenting but I just saw this comment when I was posting my last comment. But, no, these learning engines don't work backward; they "work forward" just like "standard evaluation based engines". I prefer to refer to the latter as "classic" chess engines because they operate in basically the same way as described in Shannon's classic paper "Programming a Computer for Playing Chess". (https://vision.unipv.it/IA1/Program...). With many refinements, of course.

In this paper, published in 1950 but written in 1949, Shannon describes the overall structure of such a computer program, a strategy for choosing a move in any position using the minimax algorithm, the use of an evaluation function consisting of many factors to assess each position, the use of search tree pruning, and many other ideas that are commonly incorporated in today's engines and have been for some time.

So both classic and neural network-based engines work somewhat similarly. From any given position they identify the "best" candidate moves to be investigated further. From the positions arising from each of those candidate moves they generate another set of candidate moves and repeat the process. And both types of chess engines effectively calculate a principal variation; i.e. the sequence of moves consisting of the best moves at each search depth.

Here is where the two types of chess engines differ. The classic chess engines evaluate the position at each node of the search tree using (typically) a hand-crafted evaluation function and select the principal variation using the minimax algorithm. The neural network-based engines select as the best move the move that had the greatest scoring percentage based on the results of simulated games (rollouts). But at no time do they work backwards.
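The "classic" half of that description (a hand-crafted evaluation applied at the depth horizon, with the principal variation selected by minimax) can be sketched on a toy game. The game, the heuristic, and all names here are invented for illustration; real engines add alpha-beta pruning and many other refinements:

```python
# Minimal sketch of the classic scheme: a hand-crafted static evaluation
# plus minimax (written in the equivalent negamax form). The "game" is a
# stand-in: positions are stone counts in 1-3 Nim (take 1-3 stones,
# taking the last stone wins).

def evaluate(stones):
    # Hypothetical hand-crafted heuristic applied at the depth horizon:
    # in 1-3 Nim, multiples of 4 are bad for the side to move.
    return -1.0 if stones % 4 == 0 else 1.0

def negamax(stones, depth):
    if stones == 0:
        return -1.0              # the previous player took the last stone: loss
    if depth == 0:
        return evaluate(stones)  # horizon reached: fall back on the heuristic
    # Score each candidate move by the negated value of the resulting position.
    return max(-negamax(stones - take, depth - 1)
               for take in range(1, min(3, stones) + 1))

def best_move(stones, depth=4):
    moves = range(1, min(3, stones) + 1)
    return max(moves, key=lambda take: -negamax(stones - take, depth - 1))

print(best_move(10))             # prints 2: leaves 8 stones, lost for the opponent
```

The contrast with the MCTS scheme described above is exactly the one drawn in the comment: here the leaf values come from a static evaluation at a fixed horizon, not from playing the game out to the end.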

But more and more there are hybrid chess engines. Komodo MCTS uses a handcrafted evaluation function to evaluate each position, but then selects the move to be played using a variant of MCTS instead of minimax. And the reverse happens with Stockfish 12 and the new Komodo Dragon engines: they evaluate the position using a small neural network (NNUE) but then select the principal variation using minimax. And sometimes, like AlphaZero and LeelaC0, instead of rollouts they use the information in their neural nets to estimate the results that a rollout would produce.

The only situation I know of in which chess "engines" work backwards is endgame tablebase generation. Endgame tablebase generators start with the simplest terminal positions possible for the tablebase being generated, e.g. the KPvK, KPvKP, and KNvK endgames, and recursively add pawns and pieces until the starting position is reached. This is called retrograde analysis. The FinalGen tablebase generator works similarly, except that only those terminal positions consistent with the initial position being analyzed are considered; e.g. if the initial position contains only kings, knights, and pawns, then no positions containing bishops, rooks, or queens are considered.
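The retrograde idea, stripped to its core, can be sketched on a toy "endgame": label the terminal position first, then work backwards so every earlier position's result is known exactly. Real generators (e.g. for the Nalimov or Syzygy tablebases) do this over actual chess positions with un-moves, captures, and promotions; the subtraction game below is only an illustration:

```python
# Retrograde labeling of the 1-3 subtraction game (take 1-3 stones, taking
# the last stone wins): start from the terminal position and work backwards,
# so every position's result is exact and there is no horizon effect.

def build_tablebase(max_stones):
    """table[n] is True iff the side to move wins with n stones left."""
    table = {0: False}   # terminal: no stones left, the side to move has lost
    for n in range(1, max_stones + 1):
        # n is won iff some move reaches a position already labeled as lost.
        table[n] = any(not table[n - take] for take in range(1, min(3, n) + 1))
    return table

tb = build_tablebase(20)
print(sorted(n for n in tb if n and not tb[n]))   # lost positions: [4, 8, 12, 16, 20]
```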

In fact, I did have a thought about creating an engine based on Capablanca's comment (possibly said tongue-in-cheek) that when analyzing he tried to envision advantageous positions which could be reached from the position he was analyzing. Then, if he found one sufficiently advantageous, he would try to work backwards to see if that position could be reached. If he couldn't, then he would look at the next best advantageous position he could envision, etc.

This approach made me think that possibly, with enough computing power, a chess engine using this approach could be created. First a tablebase could be created of advantageous position <types> (e.g. a smothered mate) and, since neural networks can be very good at pattern recognition, determine the probability that a position with one of the cataloged patterns could be reached. Then, once the most advantageous reachable position could be found, use retrograde analysis to determine the moves needed to reach that advantageous position. And, as a side benefit, it would make the horizon effect a thing of the past since the engine would be working backwards, not forwards.

But, while interesting, I doubt that I will ever get around to creating such a chess engine. :-(

Nov-25-20
Premium Chessgames Member
  AylerKupp: If you are visiting this page you might be interested in the following amusing video: https://www.youtube.com/watch?v=wlj.... It graphically ranks the engines and their ratings progressively from 1985 to the end of 2019. It's fun to see the progression of engines such as Fritz, Shredder, Rybka, Houdini, Stockfish, Komodo, etc. and their rankings, and how they got stronger over time. Stockfish, Komodo, and Houdini became the Big 3 in 2013, taking turns on who was ranked #1 until 2017, when AlphaZero made its first appearance at #1.

But then it becomes puzzling. In early 2017 the video shows Stockfish ranked #5, behind AlphaZero, Houdini, Shredder, and Komodo, and it had Houdini ranked #2 at the end of 2017 although, admittedly, by only 1 rating point over Stockfish. Assuming that this video is correct (doubtful), it's then interesting that DeepMind chose to test AlphaZero against Stockfish instead of Houdini, unless the fact that Stockfish was Open Source and could be modified made it a much more desirable opponent.

Then in late 2018 the video rates Stockfish higher than AlphaZero! So I have no idea how the Elo "estimates" in this video are calculated.

And by the end of 2019 the video ranks Stockfish #1, LeelaC0 (presumably with GPU support) #2, Komodo #3, and AlphaZero #4. A very deluded team must have created this video.

Nov-30-20
Premium Chessgames Member
  0ZeR0: <AylerKupp> I'm always of the opinion that the more chess played, the merrier. Why play (and win in brilliant fashion) one match when you could do twenty? True, it may not be in Google's best interest to keep displaying their incredible technology, but it would be in chess players' best interest the world over.
Nov-30-20
Premium Chessgames Member
  AylerKupp: <0ZeR0> I would agree, and so would Google/DeepMind if they were in business to promote chess. But they are not, Google is in business to make money in many fields and DeepMind is in the business of advancing the state of the art in AI – and making money in the process. How much would additional games played by AlphaZero enhance their bottom line, particularly if the results were a foregone conclusion given the enormous computational performance advantage that AlphaZero would enjoy over its opponents assuming that each engine had the same amount of time on their clock? I doubt that it would be very much.

A similar situation existed with Deep Blue in 1997 after it (and its team) defeated Kasparov in their last match. Could the techniques used in Deep Blue be developed and commercialized for sale to a broad audience? Doubtful. Deep Blue was a specialized chess playing machine that could only play chess. Could it have been improved to play chess better? Probably, but what would have been the point? None, it had already beaten the world's best chess player. So Deep Blue never played another game, at least not in public, and was dismantled.

This was a different situation than with IBM's Watson. Its technology was sufficiently generalizable to other fields; e.g. health care, weather forecasting, tax preparation, and – surprise! – advertising. But if Watson's technology was only suitable to play Jeopardy! I doubt that Watson would have been further developed, although Watson did play a follow-on match against two members of the US House of Representatives. Not surprisingly, it won.

Dec-01-20
Premium Chessgames Member
  keypusher: What DeepMind is up to these days:

<Today, DeepMind announced that it has seemingly solved one of biology's outstanding problems: how the string of amino acids in a protein folds up into a three-dimensional shape that enables their complex functions. It's a computational challenge that has resisted the efforts of many very smart biologists for decades, despite the application of supercomputer-level hardware for these calculations. DeepMind instead trained its system using 128 specialized processors for a couple of weeks; it now returns potential structures within a couple of days.

The limitations of the system aren't yet clear—DeepMind says it's currently planning on a peer-reviewed paper and has only made a blog post and some press releases available. But the system clearly performs better than anything that's come before it, after having more than doubled the performance of the best system in just four years. Even if it's not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.>

https://arstechnica.com/science/202...

Dec-03-20
Premium Chessgames Member
  AylerKupp: <keypusher> Thanks for the link to the article. A friend had sent me a link to another article on the same subject (AlphaFold) a few days ago, and I was surprised to (erroneously) think that they had used "128 processors" which I thought were just CPUs. Instead they used "128 machine learning processors" according to the article my friend sent me and "128 special processors" according to the article in your link. Clearly these were Google TPUs as used in the AlphaZero vs. Stockfish matches.

And I would think DeepMind would have used v3 TPUs (available since 2017) rather than the v2 TPUs (used for neural network training) and v1 TPUs (used for playing the chess games) used for the AlphaZero vs. Stockfish matches. To put it in perspective, the v3 TPU is approximately 2X as fast as the v2 TPU, which is in turn about 2X as fast as the v1 TPU. So AlphaFold had about (128/4) * 4X ~ 128X more computational performance capability than the system used in the AlphaZero vs. Stockfish match games. Yet it apparently took AlphaFold "a couple of days" to calculate potential 3D protein structures instead of the 2 hours or so it took AlphaZero to play a chess game. So clearly calculating these 3D protein structures is much more computationally demanding than playing a chess game!
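The back-of-the-envelope estimate above can be spelled out, using the comment's own assumptions (4 first-generation TPUs for AlphaZero's game play, 128 v3 TPUs for AlphaFold, each TPU generation taken as roughly 2X the previous one):

```python
# The comment's assumptions, made explicit:
alphazero_chips, alphafold_chips = 4, 128    # v1 TPUs vs v3 TPUs
v3_over_v1 = 2 * 2                           # v3 ~ 2X v2, v2 ~ 2X v1
ratio = (alphafold_chips / alphazero_chips) * v3_over_v1
print(ratio)                                 # 128.0, the ~128X in the text
```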

Dec-03-20
Premium Chessgames Member
  keypusher: <AK> Thanks for linking the computational power of this latest machine? (what should I call it?) to what A0 used. It was clear that enormous resources were needed for this protein project, but you helped give me a sense of how much.
Feb-14-21  Tadeusz Nida: Yo: COMPUTER PROGRAMMER WANTED: IF ORIGINAL SOURCE CODE IS CHANGED THRU COMPILATION INTO BINARY CODE, CAN SOME PROGRAM RETRIEVE THE ORIGINAL SOURCE CODE AND MAKE CHANGES TO COMPUTER? I NEED TO ADD THE LUBEK CASTLE 2000 TO CHESSMASTER 2000; THERE ARE TWO FILES ONLY: CM.EXE WHERE ENGINE IS AND CM.DAT WHERE INFO ON PIECES AND RULES ARE; CAN ANYBODY RECOMMEND ME GOOD DOS FORUM AND PROGRAMMER; WILL PAY REASONABLY BUT REMEMBER THIS IS FOR THE GOOD OF CHESS, FOR ITS PROGRESS~!
Feb-14-21  Tadeusz Nida: If computers are to play human for real championship, no ending databases allowed!
Feb-20-21  Tadeusz Nida: <yo... PROGRAMMER WANTED!!!

WILL PAY REASONABLY, BUT THIS IS ONLY FOR THE GOOD OF CHESS, WE DONT MAKE MONEY ON CHESS, WE LOSE MONEY; NEED COMPUTER PROGRAMMER TO MAKE LUBEK CASTLE 2000/0000 PROGRAM IF POSSIBLE ADJUST CHESSMASTER 2000 TO PLAY IT... NOTE, PROGRAM HAS BEEN COMPILED INTO BINARY CODE, IF IT'S POSSIBLE TO RESTORE PROGRAM THAN ONE WOULD LOSE SOME INFO, GOOD THING ABOUT THE PROGRAM IS THAT IT HAS 2 FILES: CM.DAT WHERE PIECES INFO IS LOCATED AND CM.EXE CHESS ENGINE! TADEUSZNIDA@GMAIL.COM>



Copyright 2001-2021, Chessgames Services LLC