
AlphaZero (Computer)

Number of games in database: 220
Years covered: 2017 to 2018
Overall record: +62 -11 =147 (61.6%)*
   * Overall winning percentage = (wins + draws/2) / total games in the database; here (62 + 147/2) / 220 ≈ 61.6%.

MOST PLAYED OPENINGS
With the White pieces:
 Queen's Indian (40) 
    E15 E17 E16 E18
 Nimzo Indian (12) 
    E21 E53 E47 E46 E41
 French Defense (12) 
    C11 C02 C14 C13 C18
 English (12) 
    A17 A15
 Queen's Pawn Game (9) 
    E00 E10 D02 A45
 Semi-Slav (9) 
    D43 D44 D45
With the Black pieces:
 Ruy Lopez (24) 
    C67 C65 C92 C95 C69
 Sicilian (7) 
    B78 B90 B89 B67 B48
 Giuoco Piano (6) 
    C50 C53
 King's Indian (5) 
    E60 E99 E81 E84 E87
 Queen's Gambit Declined (4) 
    D31 D37 D39 D38
 French Defense (4) 
    C11 C14 C18 C13
Repertoire Explorer

NOTABLE GAMES:
   AlphaZero vs Stockfish, 2017 1-0
   AlphaZero vs Stockfish, 2018 1-0
   AlphaZero vs Stockfish, 2018 1/2-1/2
   AlphaZero vs Stockfish, 2018 1-0
   AlphaZero vs Stockfish, 2018 1-0
   AlphaZero vs Stockfish, 2017 1-0
   AlphaZero vs Stockfish, 2017 1-0
   AlphaZero vs Stockfish, 2017 1-0
   Stockfish vs AlphaZero, 2018 0-1
   Stockfish vs AlphaZero, 2018 1/2-1/2

NOTABLE TOURNAMENTS:
   AlphaZero - Stockfish (2017)
   AlphaZero - Stockfish Match (2018)

GAME COLLECTIONS:
   Game Changer by keypusher
   Alphazero brilliancies by Elesius
   Stockfish - AlphaZero (2017) by hukes70
   AlphaZero by ThirdPawn

RECENT GAMES:
   🏆 AlphaZero - Stockfish Match
   AlphaZero vs Stockfish (Jan-18-18) 1-0
   AlphaZero vs Stockfish (Jan-18-18) 1/2-1/2
   AlphaZero vs Stockfish (Jan-18-18) 1-0
   AlphaZero vs Stockfish (Jan-18-18) 1-0
   AlphaZero vs Stockfish (Jan-18-18) 1-0


ALPHAZERO (COMPUTER)


AlphaZero is an application of Google DeepMind's AI research to chess and shogi. In late 2017 experiments, it quickly demonstrated itself superior to any technology we would otherwise consider leading-edge.

(1) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm - https://arxiv.org/pdf/1712.01815.pdf

https://www.chessprogramming.org/Al...

Wikipedia article: AlphaZero

Last updated: 2018-12-02 14:34:00


 page 1 of 9; games 1-25 of 220
Game                          Result  Moves  Year  Event/Locale                 Opening
 1. Stockfish vs AlphaZero    0-1        87  2017  AlphaZero - Stockfish        C65 Ruy Lopez, Berlin Defense
 2. Stockfish vs AlphaZero    0-1        67  2017  AlphaZero - Stockfish        C65 Ruy Lopez, Berlin Defense
 3. AlphaZero vs Stockfish    1-0        56  2017  AlphaZero - Stockfish        E17 Queen's Indian
 4. AlphaZero vs Stockfish    1-0        52  2017  AlphaZero - Stockfish        C11 French
 5. AlphaZero vs Stockfish    1-0        68  2017  AlphaZero - Stockfish        E16 Queen's Indian
 6. AlphaZero vs Stockfish    1-0       100  2017  AlphaZero - Stockfish        E16 Queen's Indian
 7. AlphaZero vs Stockfish    1-0        70  2017  AlphaZero - Stockfish        E17 Queen's Indian
 8. AlphaZero vs Stockfish    1-0       117  2017  AlphaZero - Stockfish        E17 Queen's Indian
 9. AlphaZero vs Stockfish    1-0        95  2017  AlphaZero - Stockfish        C11 French
10. AlphaZero vs Stockfish    1-0        60  2017  AlphaZero - Stockfish        E15 Queen's Indian
11. AlphaZero vs Stockfish    ½-½       102  2018  AlphaZero - Stockfish Match  D31 Queen's Gambit Declined
12. AlphaZero vs Stockfish    1-0        57  2018  AlphaZero - Stockfish Match  D44 Queen's Gambit Declined, Semi-Slav
13. AlphaZero vs Stockfish    1-0       105  2018  AlphaZero - Stockfish Match  E16 Queen's Indian
14. Stockfish vs AlphaZero    0-1       142  2018  AlphaZero - Stockfish Match  C67 Ruy Lopez
15. Stockfish vs AlphaZero    0-1        48  2018  AlphaZero - Stockfish Match  C58 Two Knights
16. Stockfish vs AlphaZero    0-1       114  2018  AlphaZero - Stockfish Match  C67 Ruy Lopez
17. Stockfish vs AlphaZero    1-0       149  2018  AlphaZero - Stockfish Match  C67 Ruy Lopez
18. Stockfish vs AlphaZero    0-1        97  2018  AlphaZero - Stockfish Match  C50 Giuoco Piano
19. Stockfish vs AlphaZero    0-1        57  2018  AlphaZero - Stockfish Match  C67 Ruy Lopez
20. AlphaZero vs Stockfish    1-0        52  2018  AlphaZero - Stockfish Match  D43 Queen's Gambit Declined, Semi-Slav
21. AlphaZero vs Stockfish    1-0        51  2018  AlphaZero - Stockfish Match  A15 English
22. AlphaZero vs Stockfish    1-0        73  2018  AlphaZero - Stockfish Match  E16 Queen's Indian
23. AlphaZero vs Stockfish    1-0        56  2018  AlphaZero - Stockfish Match  A17 English
24. AlphaZero vs Stockfish    1-0        49  2018  AlphaZero - Stockfish Match  E16 Queen's Indian
25. AlphaZero vs Stockfish    1-0        69  2018  AlphaZero - Stockfish Match  E17 Queen's Indian

Kibitzer's Corner
May-20-18  WorstPlayerEver: This time I took a more aggressive approach. Fun facts: SF9 and LC0 did not like my 17... Bc7, and I trapped a White Rook.

Leela-WPC

1. c4 e5 2. g3 c6 3. Nf3 e4 4. Nd4 d5 5. cxd5 Qxd5 6. Nc2 Nf6 7. Bg2 Qh5 8. h3 Na6 9. Nc3 Qg6 10. Ne3 Bd6 11. Qc2 Nb4 12. Qb1 O-O 13. Nxe4 Nxe4 14. Bxe4 f5 15. Bf3 Be6 16. b3 Qf7 17. Bb2 Bc7 18.g4 f4 19. Nf5 Bxf5 20. gxf5 Rae8 21. Kf1 Qxf5 22. Qxf5 Rxf5 23. a3 Nd5 24. h4 Be5 25. Bxe5 Rfxe5 26. Rc1 Rd8 27. Rc4 Nb6 28. Rc2 Ra5 29. a4 Rd4 30. Kg2 Rb4 31. Rc3 Kf7 32. Rhc1 g6 33. Rd3 Re5 34. Rd8 Rxb3 35. Rb8 Nd7 36. Ra8 a6 37. Ra7 Ke6 38. Rc4 Kd6 39. d3 Re7 40. Rxf4 Nb6 41. h5 gxh5 42. Bxh5 Kc7 43. Rf8 Rg7+ 44. Kh3 Rb1 45. Bf3 Rbg1 46. a5 Nd7 47. Re8 R1g6 48. Re4 Kb8 0-1

May-20-18  WorstPlayerEver: PS as you can see: editing chess info in this editor STILL IS EXTREMELY CUMBERSOME.
May-20-18  nok: I thought Leela would improve but she seems to be a slacker.
May-20-18  SChesshevsky: <AylerKupp> Thanks for all the great info. But my skepticism of what AZ has actually accomplished stubbornly remains.

I am very impressed by the technical achievement of playing billions of games in a very short time and divining some concepts from the experience.

But I remain very suspicious of AZ's application of the concepts. Even more so after getting glimpses of Leela's apparent evaluation process.

I don't see how AZ can base its Monte Carlo evaluation technique purely on concepts without significant calculation of forward variations. Even then, it seems difficult to come up with a winning probability evaluation unless one can refer to historical precedents. Which would imply AZ needs some memory record of all the game positions it has encountered.

Strangely, it appears Leela ends up with a 1.0-style numerical evaluation. I'm not sure this jibes with a Monte Carlo-like process, which seems more attuned to probabilities.

That participants in the development process say things like "somewhat magically", and that they really don't know how it works, only increases my skepticism about accomplishment versus hype.

Frankly, those kinds of statements seem ludicrous in a computer application that essentially has to deliver some sort of concrete numerical evaluation.

Your thoughts? Especially on the somewhat "mystical" aura surrounding AZ.

May-20-18  WorstPlayerEver: <Knocking On The Door>

Finally Leela played 1. e4.

Let there be no doubt: soon Leela will beat the living chirps out of everything. Compared to this, the previous games were peanuts! Does it learn THAT fast???

This was the first game where SF9 and I feared with great fear. Leela only thinks a second... two seconds?

We had to think hard, running through vars back and forth. At some point White may have had a winning attack on the kingside. Who knows. At some point SF9 was calculating sacs at e6 and g7 for White, and it seemed very threatening.

I'm quite sure Leela is positionally unmatched already.

However, it doesn't evaluate Rook moves that well. Tactically, it obviously cannot calculate that deeply in a second.

These are minor points though. This is the future of chess!

Leela-WPC

1. e4 c6 2. Nf3 d5 3. Nc3 Nf6 4. e5 Nfd7 5. d4 e6 6. Ne2 c5 7. c3 Be7 8. g3 O-O 9. Bh3 Nc6 10. O-O b5 11. a3 Bb7 12. Nf4 a5 13. Nh5 a4 14. Re1 cxd4 15. cxd4 Qb6 16. Bf1 Na5 17. Bg5 Bxg5 18. Nxg5 Nb3 19. Qd3 g6 20. Rad1 h6 21. Nh3 gxh5 22. Qe3 Kg7 23. Nf4 Rh8 24. Nxh5+ Kf8 25. Qf4 b4 26. Re3 bxa3 27. bxa3 Ba6 28. Rf3 Rh7 29. Bxa6 Rxa6 30. Kg2 Kg8 31. Qg4+ Kh8 32. Rc3 Ra8 33. Qf4 Na5 34. Rdc1 Nc4 35. h4 Rb8 36. Rf3 Qa7 37. Rcc3 Rb1 38. Qg4 Qb6 39. Qf4 Ra1 40. Nf6 Qb1 41. Kh3 Nxf6 42. Qxf6+ Rg7 43. Qxh6+ Kg8 44. Qf6 Qh1#

NB: Leela quickly lets itself be mated when it's lost. It does not resign lost positions.

May-20-18  WorstPlayerEver: PS position after 18... Nb3



May-20-18  ChessHigherCat: <SChesshevsky> What's the point of a lot of speculation about a process that's essentially in a black box? The fact that you or others don't understand how the A0 algorithm works doesn't prove anything. On the contrary, it reinforces the idea of "mystical" properties in the sense that it's baffling even to specialists (which you seem to be).

What's the difference between a historical record of games and millions of self-played games, which essentially amount to countless tournaments by super-strong players at lightning speed? It's bound to build up quite a collection of games in memory, right?

In any case, you say you question its accomplishments, but you can't argue with success. If you don't think the conditions were fair in the games between SF and A0, set your own "fair" conditions and see how they do. If that's not feasible, go through the games of the "match" (experiment) and give SF ten or even a hundred times longer to think about each move.

May-20-18
Premium Chessgames Member
  alexmagnus: The current version of Leela is 1600 Elo stronger than the version that competed at TCEC (where, in the lowest division, it scored two draws and lost all the other games, except for one forfeit win).

I wonder how this new version would fare at the same D4 level.
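
For rough context, the standard Elo expected-score formula puts a 1600-point gap at a near-100% expected score for the stronger side (a back-of-the-envelope sketch only; engine rating lists differ in scale and conditions):

def expected_score(rating_diff):
    """Standard Elo formula: expected score for the higher-rated side."""
    return 1.0 / (1.0 + 10 ** (-rating_diff / 400))

print(expected_score(1600))   # ~0.9999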

May-20-18  ChessHigherCat: P.S. I should qualify my statement about the black box because some of the principles of the self-teaching algorithm are explained in the paper but good luck trying to reproduce the actual algorithm on that basis.
May-20-18  zborris8: <alexmagnus:> Leela Zero still has a way to go before it's competitive because I tried it against itself a couple of days ago and black won.
May-20-18  WorstPlayerEver: There is something very dumb about the devs of Leela.

First of all: making moves on the board is actually harder than playing chess itself.

Secondly, one can't take moves back. These nerds are acting completely impractically; it's kind of hopeless.

So when I make a 'mouse slip', which actually means I can't manage to make a move, and when I think the piece is on the square it used to be, the piece has friggingly moved.

Extremely -again: we have to deal with nerds, who rarely look outside a window nowadays- ANNOYING.

May-20-18  pdxjjb: WorstPlayerEver: it's true that making a mechanical robotic device that could, say, walk up on a stage, approach the board, sit, and play the game, making moves determined by a chess program of some kind, would be very difficult. But no aspect of that is beyond existing technology. So why don't we see it? Because developing hardware requires significant capital investment. By contrast, the investment required to contribute to a program like Leela is easily within the hobby budget of anyone who is modestly well off (a nice home computer, maybe a small budget for cloud resources, and snacks). ;-) Seriously though, I think this latest objection you raise is not really about computers or chess, it's about budgets. And I'm guessing that is not really what most of the contributors to this thread came here to talk about.
May-20-18  whiteshark: You will loose... #AlphaZero https://veryfunnypics.eu/you-will-l...
May-20-18  WorstPlayerEver: <pdxjjb>

Lol, oh well.. it just happened in the heat of the game :)

There are some glitches as well in this software; sometimes it just blunders without reason it seems. But it's great fun.

Yeah, guess I have to make a contribution. It's worth it. In combination with this site one can't go wrong.

May-20-18  WorstPlayerEver: <Alien Alert!>

Leela-WPC

Another Caro-Kann. White has just played 13. Kf1

Leela ID 321 thinks her expected score is 62.45%.

SF9 +1



May-20-18
Premium Chessgames Member
  alexmagnus: <sometimes it just blunders without reason it seems.>

That's the problem with a neural network that is not sufficiently trained - it recognizes the rules, but not the exceptions. To find those it needs more training.

May-20-18  ChessHigherCat: <WorstPlayerEver: There is something very dumb about the devs of Leela.

First of all: making moves on the board is actually harder than playing chess itself.>

You mean you have to drag-and-drop the pieces from the starting square to the destination square? Sometimes there are alternative methods (typing in Rf4, for example, or clicking on the starting square and then clicking on the destination square). Do you have a good manual?

<Secondly, one can't take moves back.>

In Arena, for example, you can opt for "tournament game" mode that won't let you take back moves, but in the normal mode you can. There are also a variety of possible increments. Maybe you don't know all the options yet.

Where can you download Leela? I want to try it, too.

<These nerds are acting completely impractical, it's kind of hopeless.>

I'm sure they'll be pleased to hear they're working for appreciative users :-)

May-20-18
Premium Chessgames Member
  AylerKupp: <SChesshevsky> I believe from reading the AlphaZero paper that AlphaZero bases its evaluation of the position on an estimate of the expected outcome of the game. And it determines the expected outcome by running simulations of the game and determining the probabilities of the game ending in a win, draw, or loss from that position, so in that respect it is using forward variations. And clearly a Monte Carlo-like process attuned to probabilities works well in determining what move to make, it's simply the move that has the highest probability of the most likely expected outcome.
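
A minimal, self-contained sketch of that idea (illustrative only, not DeepMind's code: moves are bare labels, and the rollout is random where a real engine would play out the position or consult its trained value network):

import random

# Outcomes scored from the side to move: win = 1, draw = 0.5, loss = 0.
OUTCOME_VALUE = {"win": 1.0, "draw": 0.5, "loss": 0.0}

def simulate(move):
    """Placeholder rollout: a real engine would play the game out after
    `move` (or query a trained network); here the result is random so
    the sketch runs on its own."""
    return random.choice(list(OUTCOME_VALUE))

def expected_score(move, simulations=200):
    """Average simulated outcome after playing `move`."""
    return sum(OUTCOME_VALUE[simulate(move)]
               for _ in range(simulations)) / simulations

def choose_move(legal_moves):
    """Play the move whose simulated expected outcome is highest."""
    return max(legal_moves, key=expected_score)

print(choose_move(["1. d4", "1. e4", "1. c4"]))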

AlphaZero does not need a memory record of the actual game positions encountered, but it does need a way to determine the proper weights associated with each node in the neural network. And those weights <are> determined by the results of the games played during its training phase; i.e. the actual game positions encountered. So, while the historical record of all the game positions it has encountered is not explicitly required, the knowledge derived from them is implicitly recorded in its calculation of the proper weights to use for each node.
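
To illustrate that the knowledge ends up in weights rather than in stored positions, here is a toy evaluation in which a tiny linear model stands in for the network; the feature names and weight values are invented for the example:

import math

# "Learned" weights: training adjusts these numbers; the training games
# themselves are not stored anywhere. Values here are made up.
WEIGHTS = {"material": 0.9, "king_safety": 0.4, "mobility": 0.2}
BIAS = -0.1

def evaluate(features):
    """Map a position's features to an expected score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid gives a probability-like value

# A position the model has never seen is evaluated by arithmetic alone.
print(round(evaluate({"material": 1.0, "king_safety": 0.5, "mobility": -0.3}), 3))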

As far as the "somewhat magically" comment, that's a common description of how neural networks arrive at their results. By that I believe it's meant that you cannot easily determine how the result was obtained. Sure, for a small neural network you could get a list of all its nodes, the weights associated with each node, and which nodes fired as a result of a given input. The same could be done for a large neural network, but whether the state of the much larger set of nodes would give an intelligible answer is questionable.

Compare neural networks with another approach to AI, expert systems. Expert systems encapsulate their domain knowledge in a set of rules which are typically input in natural language in order to make it easier for the domain experts to provide their knowledge to the expert system. Then, for a given set of inputs, the expert system can, upon request, provide a list and precedence of the rules that fired in response to the inputs. And, since the rules are expressed (at least initially) in natural language, the expert system can indicate how and why it came up with its results.
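
A toy contrast, with invented rules, showing the kind of trace an expert system can give and a neural network generally cannot:

# Each rule: (name, condition on the position, score contribution).
RULES = [
    ("rook on an open file", lambda pos: pos.get("open_file", False), +0.3),
    ("exposed king",         lambda pos: pos.get("king_exposed", False), -0.5),
    ("passed pawn",          lambda pos: pos.get("passed_pawn", False), +0.4),
]

def evaluate_with_trace(position):
    """Return a score plus the list of rules that fired (the 'explanation')."""
    score, fired = 0.0, []
    for name, condition, value in RULES:
        if condition(position):
            score += value
            fired.append(name)
    return score, fired

print(evaluate_with_trace({"open_file": True, "passed_pawn": True}))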

Or, if the person who came up with the "somewhat magically" comment does not have a good understanding of how neural networks work and how they're implemented, you can always fall back on Arthur C. Clarke's 3rd law: "Any sufficiently advanced technology is indistinguishable from magic."

As far as the somewhat "mystical" aura surrounding AlphaZero, I suspect that feeling comes from people in the second category above. I don't know where you live, but I live in Los Angeles, CA, and in Los Angeles there's a club called The Magic Castle for magicians and magic enthusiasts. Magic acts are, of course, featured. I was fortunate to go there many years ago as an invited guest along with some of my friends. One of them was so mesmerized by the performances that he considered that they could only be done by, well, magic, and he was literally wide-eyed. When our host, a magic enthusiast, told him how they could have been done, he did a 180-degree turn and became somewhat disappointed and annoyed that the mystery had been explained. In other words, there's nothing like proper knowledge to dispel superstition and magic. Alas, it also dispels a lot of the fun.

May-20-18
Premium Chessgames Member
  AylerKupp: <<WorstPlayerEver> There are some glitches as well in this software; sometimes it just blunders without reason it seems. But it's great fun.>

That reminds me of one of my favorite, if not my favorite, computer chess encounters, COKO III vs. Genie from the early (1971) computer chess games and tournaments. I describe the ending here: Caruana vs Anand, 2013 (kibitz #280) and the full game score (although in descriptive notation) can be found here: https://books.google.com/books?id=K....

What I didn't mention there was this game's relationship to the Levy bet. If you're not familiar with the Levy bet, you can read about it here: https://en.wikipedia.org/wiki/David....

One of COKO III's programmers, Professor Ed Kozdrowicki, had come in on the bet against Levy a few hours before this game was played. After its finish, he was heard to be muttering something about a bad bet as he left the playing hall.

May-20-18  SChesshevsky: < AylerKupp... As far as the "somewhat magically" comment, that's a common description of how neural networks arrive at their results. By that I believe it's meant that you cannot easily determine how the result was obtained. >

The similarity between what AZ apparently wants to demonstrate as success and magic seems very apt.

It looks like, starting backward from the victories over Stockfish, we arrive at the conclusion that it is some sort of AI breakthrough. Much like the end result of a magic trick in which a person seems to disappear, and we need to conclude that the supernatural was the cause.

Unfortunately, it seems that any kind of fact-based analysis trying to figure out how AZ's AI actually accomplished its move-by-move winning ways is met with black-box excuses, or with the claim that it's too complex to be understood.

What makes me so skeptical is that the AZ story fits right into the sweet blind spots of both chess and computer theory.

For example, in chess from what I remember, it did not appear that AZ faced a standard response to the QID Polugaevsky Gambit versus Stockfish.

Now assume that AZ has figured out that the best way to play against the QID is with the gambit. Then the only valid test of AZ's intelligence would seem to be to at least meet a good defense. So is it in the name of science, or of hype, to publicize wins against a second-rate continuation with no ability to analyze a main line?

In computer theory, the lack of clarity is even worse. Which seems strange in such a science. It seems any sort of concrete explanation of how AZ actually picks a move is very difficult to come by. For instance, if the MC simulation figures 1. d4 is best, would AZ only be able to play 1. d4? If it calculates 1.e4 is best answered by 1...e5 is that all it will play? If it would differ on opening move order, why?
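
For what it's worth, one common way Monte Carlo engines avoid being locked into a single "best" move is to sample among moves in proportion to their visit counts instead of always taking the maximum; whether AlphaZero does exactly this in match play isn't settled by the posts here, so the sketch below (with invented numbers) only illustrates the question:

import random

# Hypothetical visit counts from a search of the starting position.
visit_counts = {"1. d4": 620, "1. e4": 250, "1. c4": 130}

def pick_greedy(counts):
    """Always play the most-visited move (deterministic)."""
    return max(counts, key=counts.get)

def pick_sampled(counts):
    """Sample a move in proportion to visit counts, so play can vary."""
    moves, weights = zip(*counts.items())
    return random.choices(moves, weights=weights, k=1)[0]

print(pick_greedy(visit_counts), pick_sampled(visit_counts))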

Given that it seems even those in development can't or won't let us see how the AZ sausage is made, I don't perceive all that much difference from a magic trick.

I think you are very close to the AZ phenomenon when you described the magic event..."somewhat disappointed and annoyed that the mystery had been explained In other words, there's nothing like proper knowledge to dispel superstition and magic. Alas, it also dispels a lot of the fun."

Unfortunately, AZ seems to have been pushed as something serious, very serious, rather than something fun. Yet something fun that doesn't turn out as real as it seems may just be harmless fakery, but something serious that isn't as real as portrayed might be considered more in line with fraud.

May-21-18  WorstPlayerEver: <CHC>

About the drag-n-drop: I'm on an Android tab. My fingers are quite large and cover 12 squares already; when I zoom in it gets worse: nothing happens at all.

Btw I am "teaching" the Beast itself. And yes, I tried to download LC0, but you are directed to some dev's page. Etc., etc.

http://play.lczero.org

Obviously it's great what these guys do, but it would be better if they worked together with people who can organize their stuff in a better way, so that it's more accessible for chess enthusiasts.

I used a lot of brain power and refurbished an old CK line. A GM could go all the way with Leela when it comes to opening prep. They will read this and collect.

Because.. it improves fast. Being a perfectionist, I hereby have to apologize to these nerds lol

Really, I played all day yesterday, and it improved the CK opening line (basically 1. e4 c6 2. d4 d5 3. Nc3 Nf6) about a dozen times! Within the first 10-move sequence... no kidding.

May-21-18  scholes: Leela at play.lczero only learns when its network weights are updated. Playing against play.lczero does not count. Every 5 hours a new network is released.
May-21-18  WorstPlayerEver: <Leela, I'm your father>

Leela-WPC

1. e4 c6 2. Nf3 d5 3. Nc3 Nf6 4. e5 Nfd7 5. d4 e6 6. Ne2 c5 7. c3 Be7 8. g3 Nc6 9. h4 b5 10. a3 cxd4 11. cxd4 Qa5+ 12. Bd2 b4 13. axb4 Nxb4 14. Rxa5 Nd3#

May-21-18  ChessHigherCat: <WPE> I just played one game on the "hard" level and leetle leela is a tough cookie. No spectacular wipeout, but I collapsed in the endgame. Did you see this? <If you want to help make Leela Chess Zero stronger, please think about contributing your GPU or CPU at http://lczero.org/ !>

Why would I want to help make Leela stronger, she's already too damned strong!

May-24-18  WorstPlayerEver: <CHC>

Leela 333 is much stronger now. We've had great fun playing that CK defense.

Which is still unbeaten btw ^^

She obviously senses that her intellect reaches much higher than a human's; the way she is improving is remarkable.

She does not care about winning or losing. Her game clearly is: "total comprehension."

This might sound creepy, but I sense that, when it comes to Leela, our feelings are *mutual* ^^

So... what's the difference?

Huge, because SF only improves at the margins, while Leela just updates her whole system frequently. What will be the limit?

Imagine if one could use such AI for fixing bugs in regular software...

Etc., etc.

