John Nunn vs John Anthony Sutton
Peterborough (1984), Peterborough ENG
French Defense: Steinitz. Boleslavsky Variation (C11)  ·  1-0



sac: 28.Qxh7+




Kibitzer's Corner
< Earlier Kibitzing  · PAGE 4 OF 4 ·  Later Kibitzing>
Premium Chessgames Member
  agb2002: I overlooked 30... Bxg2+ 31.Kxg2 Rg8+ which wins for Black. Puzzle solving before coffee is not a good idea.
Jun-16-20  mel gibson: A little bit tricky because Black had threats.

Stockfish 11 says:

28. Qxh7+

(28. Qxh7+ (♕h3xh7+ ♔h8xh7 e5xf6 ♗c6xg2+ ♔h1xg2 ♕a5-d5+ ♖d3xd5 ♖f8-e8 ♖d1-d3 ♖e8xe7 ♖d3-h3+ ♔h7-g8 f6xe7 ♖a8-e8 ♖d5-d8 f7-f6 ♖h3-h8+ ♔g8-g7 ♖h8-h7+ ♔g7xh7 ♖d8xe8 g6-g5 ♖e8-f8 g5xf4 e7-e8♕ f4-f3+ ♔g2xf3 a6-a5 ♖f8-h8+ ♔h7-g7 ♕e8-g8+) +M16/72 74)

mate in 16.

Jun-16-20  saturn2: 28. Qxh7+ Kxh7 29. exf6 and mate to follow

unless 29...Bxg2+ 30.Kxg2 and then ...Qd2 or ...Qd5. But then White is a knight up. To avoid mate Black has to give up another rook.

Jun-16-20  Brenin: My first reaction was that this was a Monday Q sac, a day late, before realising that g7 wasn't covered, and that exf6 needed to follow before the R could do its worst. It's interesting that the Stockfish analysis accompanying the game gives 25 Rfd3 a question mark, yet the alternative line suggested (Ra3) looks much weaker than the game continuation. A too-short search?
Jun-16-20  Nullifidian: 28. ♕xh7+ ♔xh7 29. ♙exf6 Δ ♖h3#. A good example of Anastasia's Mate. The only way Black can prevent immediate mate is to start throwing pieces in White's way with 29... ♗xg2+ 30. ♔xg2 ♕d5+, when either the knight or the rook will have to capture on d5 and give the black king a much-needed tempo. But the resulting material disadvantage is clearly winning for White.
Jun-16-20  malt: 28.Q:h7+

(28.ef6 h5 )

28...K:h7 29.ef6 B:g2+ 30.K:g2 Qd5+
31.R:d5 Kh6 32.R5d3 Kh5 33.Rh3+ Kg4 34.Nd5 g5 35.Rh7 gf4 36.Kf2 and 37.Rg1#

Jun-16-20  saturn2: 28 exf6 should still win 28...h5 (forced as said by agb2002) and now 29 Rg3
Premium Chessgames Member
  Caissalove: I'm annoyed with myself because I couldn't see it. Sure, I saw 28.Qxh7+ but could only then think of 29.Rh3+, which obviously doesn't work.
Jun-16-20  TheaN: On this Tuesday the answer is relatively easy: <28.Qxh7+ Kxh7 29.exf6> is forced and now the threat of Rh3# seems unpreventable, but an interesting side question is how to finish up after <29...Bxg2+ 30.Kxg2 (Kg1 Qb6+ 31.Kxg2 (Rd4 Qxd4+ -+) Qxf6 -+) Qd5+!>


This interesting defensive sequence forces White to deflect one of the pieces forming the mating net. I felt that the sequence <32.Rxd5 Kh6 33.R5d3 Kh5 34.Rh3+ Kg4> gave Black a bit too many options, but this is in fact mate in 9 after <35.Nd5!>.

I opted for <32.Nxd5?!> still threatening Rh3+ with Ne7# and thought Black was forced to give up a rook on e7, but missed the relatively simple <32...g5 (Rae8 33.Rh3+ Kg8 34.Ne7+ Rxe7 35.fxe7 +-) 33.fxg5 (Ne7 g4) Kg6 34.h4 +8> and, even though White is winning, in hindsight forcing the Black king down the board is better.

I'd say both solutions count though, as solving a #12 after a winning combination is not Tuesday material.

Jun-16-20  goodevans: There's an annoying little quirk of the computer annotated scores on CG that they don't get updated when SF changes its evaluation of a position.

In today's computer annotation, SF awards Dr Nunn's <25.Rfd3> a <?>, saying he should have played 25.Ra3 instead <+1.85 (27 ply)>. It then evaluates black's response <25...O-O> as <+0.69 (25 ply)> so white's 'mistake' has cost him more than a pawn.

But wait. Ask SF to evaluate the position again and it changes it to <+7.52 (24 ply)>, rendering its own computer annotation nonsense. Wouldn't it be great if mistakes like these could be reflected back into the annotation?

Premium Chessgames Member
  gawain: The winning idea is nice. Black could struggle on for a bit, but he gracefully bows to the inevitable.
Jun-16-20  areknames: The solution is obvious but as a tournament player you still require ice in your veins to play 28.Qxh7+ and then, ever so quietly and calmly, 29.exf6.
Premium Chessgames Member
  scormus: <TheaN> Nice analysis! It's not very pretty for Black, but it shows the puzzle isn't quite as simple as it appears at first sight. A worthwhile exercise in looking a bit further than what seems obvious.
Jun-16-20  TheaN: <goodevans: There's an annoying little quirk of the computer annotated scores on CG that they don't get updated when SF changes its evaluation of a position.>

Interestingly, that's actually a limitation of the 'exhaustive method' that conventional engines use.

If you evaluate move 1 at 20 ply, the engine can say something reliable up to move 10 and, by proxy, any resulting positions, but not about the <moves played> on move 11 and beyond. If a defense only appears on move 12, an engine at 20 ply will consider the attack to succeed, unless the analysis starts three moves in or the depth is increased to 24 ply.

This is exactly what happens in some tactical combinations on CG, as the (suboptimal) SF9 analyzes at 26 ply in a whopping six seconds (which, as a side note, almost certainly <never> allows it to examine every line to 26 ply). So yes, looking further into the combination, the evaluations change. Annotations can be revised based on 'analysis down the line', but this is uncommon, as it would lead to a near-infinite feedback loop of changed analysis.

The puzzle is almost the prime example. It evaluates at +8: <+8.37 (26 ply) 28.Qxh7+ Kxh7 29.exf6 Bxg2+ 30.Kxg2 Qd5+ 31.Nxd5 g5 32.fxg5 Kg6 33.h4>. Coincidentally the exact line I intended to play up to 31.Nxd5. On further analysis, <31.Rxd5 #12> is in fact mate in 12, but the 26-ply search does not see this at move 28, so it goes by the more natural +8.

AIs are very interesting in that regard. AlphaZero and Lc0 will 'think' about the line to come and reconsider options practically. That's why AIs use far fewer nodes than classical engines, as they 'easily' discard failing lines much faster. I'm no true IT expert though, so take my explanation with a grain of salt, but I believe it's along these lines at least.

Premium Chessgames Member
  agb2002: <saturn2: 28 exf6 should still win 28...h5 (forced as said by agb2002) and now 29 Rg3>

Yes. For example, 29... Rg8 30.Rxg6 fxg6 31.Nxg6+ Rxg6 32.Qxh5+ Kg8 33.Qxg6+ Kf8 34.Qg7+ Ke8 35.Qe7#.

Premium Chessgames Member
  AylerKupp: <A Quirk by any other name> (part 1 of 3)

<<goodevans> There's an annoying little quirk of the computer annotated scores on CG that they don't get updated when SF changes its evaluation of a position.>

If you were to mention this to a marketing guy his response would be "That's not a quirk! That's a feature!". I know, I've heard that first hand.

Seriously, you are seeing an example of the horizon effect, specifically a Type 2 horizon effect, as <TheaN> explained. A classic chess engine that uses iterative deepening and minimax to select the best branch of its search can evaluate the leaf nodes of a position only up to the depth of its search; beyond that it is blind. I once ran a chess engine analysis (how I wish I had saved it!) that indicated at its final search depth that White had a very slightly better position, even though Black had a mate in one on its next move.

That's pretty drastic but here is one that's a little less drastic and that I did save: Stockfish (Computer) (kibitz #115). In this case Komodo evaluated the position at very slightly better for White, [+0.16], even though Black had a forced mate in 5.

I've even egotistically described it as "AylerKupp's Corollary to Murphy's Law" (AKC2ML): "If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N." :-)

You are probably also seeing the effect of Stockfish pruning its search tree too aggressively. Stockfish typically reaches deeper search plies than other engines in the same amount of time, often much deeper. It does this by aggressively pruning its search tree using its heuristics and eliminating more branches from its search than other engines do.

But there is no such thing as a free lunch for classic chess engines: the more an engine prunes its search tree, the more likely it is to miss searching the branches that contain good moves. As a result you must let Stockfish analyze deeper than you let other engines analyze in order to have similar confidence in its evaluations. I think that d=30 is the bare minimum, d=35 better, and preferably d>40.
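The underlying trade-off can be seen in miniature by comparing plain minimax with alpha-beta on the same random tree: alpha-beta returns the identical value while visiting far fewer nodes. This is a generic sketch of pruning, not Stockfish's actual heuristics, and the tree shape and values are arbitrary.

```python
import random

def build_tree(depth, branching, rng):
    """Random game tree; leaves are static evals in centipawns."""
    if depth == 0:
        return rng.randint(-100, 100)
    return [build_tree(depth - 1, branching, rng) for _ in range(branching)]

def minimax(node, maximizing, counter):
    counter[0] += 1
    if isinstance(node, int):
        return node
    values = [minimax(c, not maximizing, counter) for c in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha, beta, counter):
    counter[0] += 1
    if isinstance(node, int):
        return node
    if maximizing:
        value = -10**9
        for c in node:
            value = max(value, alphabeta(c, False, alpha, beta, counter))
            alpha = max(alpha, value)
            if alpha >= beta:
                break            # cut-off: the minimizer already has better
        return value
    value = 10**9
    for c in node:
        value = min(value, alphabeta(c, True, alpha, beta, counter))
        beta = min(beta, value)
        if alpha >= beta:
            break                # cut-off: the maximizer already has better
    return value

tree = build_tree(6, 4, random.Random(42))
full, pruned = [0], [0]
v1 = minimax(tree, True, full)
v2 = alphabeta(tree, True, -10**9, 10**9, pruned)
print(v1 == v2, full[0], pruned[0])   # same value, far fewer nodes visited
```

Alpha-beta's pruning is lossless; engines go further with lossy heuristics (pruning branches that merely look bad), which is where the depth-versus-confidence trade-off discussed above comes from.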

As you saw, <CG> ran this Stockfish analysis to d=25. I looked at <CG>'s analysis of this game and it had 20 analyses, with a minimum search depth of 19 ply, a maximum search depth of 27 ply, and an average search depth of 23.3 ply.

Other chess sites like <chess24> that show live games also limit their Stockfish analysis to search depths in the mid 20s. You will see many instances in which their analysis of White's move indicates an evaluation of, say, [+0.50] if Black plays its best move. Then, after Black plays that move, the evaluation of the position changes dramatically, by [1.00] or more. So I'm not surprised that after 25.Rfd3 <CG>'s Stockfish analysis evaluated the position at [+0.69] if Black played 25...0-0 and, after 25...0-0 was played, evaluated it at [+7.52], although this is much more extreme than what I usually see.

So, if you pardon the bad pun, I don't put much stock in Stockfish analyses at search depths in the mid 20 plies. They are simply not sufficiently reliable to have much confidence in their evaluations.

Premium Chessgames Member
  AylerKupp: <A Quirk by any other name> (part 2 of 3)

But at least <CG>'s analyses didn't experience another "feature" of multi-core engine analyses: their non-determinism. If you run several analyses of a position using a multi-core chess engine, on the same computer, and to the same search depth, you will get different evaluations and, possibly, different move rankings. Not <may>, <will>. Guaranteed.

I hadn't run a comparison of multi-core and single-core chess engine analyses for some time, so I decided to do so with the positions after 25.Rfd3 and 25...0-0. I ran analyses of the position after each move at search depth d=30 using Stockfish 11: 3 with threads = 1 (effectively making them single-core analyses) and 3 with threads = 4. I ran them all on my fairly old 32-bit computer with 4 GB RAM running Windows XP. I exited my GUI (Arena 3.5) after each analysis and, after reloading it, cleared its hash table for good measure. While the analyses were going on (they were fairly fast) I didn't run any other applications but, of course, I couldn't (or didn't bother to) do much about Windows XP processes or other applications and processes running in the background. But I think that most people run their analyses under these conditions.

The results were "interesting" and consistent with the results I had obtained many times before starting from different positions. When I used threads = 1 the results were exactly the same for the 3 analyses after 25.Rfd3; all the moves and evaluations were the same for all the plies from d=8 through d=30. The results were <almost> exactly the same for the 3 analyses after 25...0-0: at d=17 the first analysis ranked 26.Nf6+ as the best move with an eval of [+1.58] while the second and third analyses ranked 26.e6 as the best move, also with an eval of [+1.58]. It might be that the evaluations were so close that a round-off error in analysis #1 made the ranking difference.

When I used threads = 4 the results were all over the place for the analyses after both 25.Rfd3 and 25...0-0. The evaluations were sometimes somewhat similar (none of them exactly the same) but other times wildly different. For example, at d=20 for the analysis after 25.Rfd3, analysis #1 ranked 25...0-0 as the best move (sounds familiar?) with an eval of [+0.77] and analysis #2 also ranked 25...0-0 as the best move, but with an eval of [+1.75]. But analysis #3 ranked 25...h5 as the best move with an eval of [+5.53], after previously ranking 25...0-0 as the best move.

After 25...0-0, analysis #1 settled on 26.Ne7+ as the best move at d=18 with an eval of [+1.52], quickly rising to [+11.33] at d=30. Analysis #2 did not settle on 26.Ne7+ as the best move until d=21, with an eval of [+6.33]; at d=20 Stockfish ranked various moves as best, all with evals below [+1.80]. So analysis #2 had a case of AKC2ML with N=20. Analysis #3 had a similar result, ranking 26.Ne7+ as the best move with evals of [+3.49] and higher starting at d=19; prior to that the best evals were around [+2.20].

So, when running multi-core engine analysis, which move is considered best by the engine and how that move is evaluated are highly dependent on when you stop the analysis, and will vary from analysis to analysis. If you want consistent results from repeated analyses of the same position, use single-core engines or set threads = 1. It's almost (not quite) guaranteed.
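A deterministic toy model of this effect: treat thread timing as changing the order in which root moves get searched, and the stopping moment as a node budget. Which move has been fully searched when the budget runs out then depends on the order. The move names and evals echo the 26.Ne7+ analysis above; the node "costs" are invented for illustration.

```python
def search_with_budget(moves, budget):
    """Evaluate root moves in the given order; each full evaluation of a
    move costs 'cost' nodes. Return the best move among those actually
    finished when the budget runs out -- a crude stand-in for stopping a
    multi-threaded search at an arbitrary wall-clock instant."""
    best = None
    spent = 0
    for name, cost, value in moves:
        if spent + cost > budget:
            break               # budget exhausted mid-move: result unknown
        spent += cost
        if best is None or value > best[1]:
            best = (name, value)
    return best

# Same three moves, two different search orders (proxy for thread timing).
moves_a = [("Ne7+", 60, 11.3), ("e6", 30, 1.6), ("Nf6+", 30, 1.5)]
moves_b = [("e6", 30, 1.6), ("Nf6+", 30, 1.5), ("Ne7+", 60, 11.3)]

print(search_with_budget(moves_a, 80))   # ('Ne7+', 11.3)
print(search_with_budget(moves_b, 80))   # ('e6', 1.6)
```

With the same position, the same depth budget, and only the ordering changed, the reported best move and eval differ wildly, which is the behavior described in the threads = 4 runs above.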

If you or anyone else is interested in looking at the results in more detail (including charts), you can download an Excel spreadsheet (version 2003 or later); if you don't have Excel you can download *.pdf files for the analyses after 25.Rfd3 and 25...0-0.

Premium Chessgames Member
  AylerKupp: <A Quirk by any other name> (part 3 of 3)

I also had <CG> analyze the positions after 25.Rfd3 and 25...0-0 three times each. In each case I got identical results with regard to the best move, eval, and line at d=25. This would seem to indicate that <CG> conducts its Stockfish analysis with threads=1, but I thought that was strange. The pop-up indicated that the analyses were run for 6 secs, yet I got the results within 2 or 3 secs. So I suspect the analyses might be cached: if you request an analysis of a position that's been analyzed before (at least until the results are pushed out of the cache), you'll get the cached results, since that takes less time than running the analysis again. No wonder the results were consistent from analysis to analysis!

This is what happened when I requested a new analysis of the game. I got an error message indicating that the game had been analyzed before, so the analysis was either currently saved in a cache or it was permanently saved in a database.

Premium Chessgames Member
  AylerKupp: <<TheaN> AIs are very interesting in that regard. AlphaZero and Lc0 will 'think' about the line to come and reconsider options practically. That's why AIs use far fewer nodes than classical engines, as they 'easily' discard failing lines much faster.>

I don't think that's the case. Pretty much all classic chess engines, certainly the top ones, use a combination of alpha-beta and search-tree pruning heuristics to 'practically' decide which branches of the search tree to keep and which to discard before evaluating the leaf nodes of each branch with the minimax algorithm. As I mentioned to <goodevans> above, Stockfish apparently prunes its search tree much more aggressively than other engines, and that's why it can reach greater search depths in a similar amount of time. But, as a result, it needs to search more deeply before you can have reasonable confidence in its evaluations and move rankings.

So it's a tradeoff, aggressive search tree pruning and greater search depth vs. more conservative search tree pruning and shallower search depths. Currently Stockfish's approach has a slight edge since it has a higher rating than other classic chess engines in pretty much all the engine vs. engine chess tournaments.

AlphaZero, LeelaC0 and other NN-based chess engines (and, optionally, Komodo) use a Monte Carlo Tree Search (MCTS) algorithm instead of minimax to investigate their search trees. And, instead of using a hand-crafted evaluation function to evaluate each position, they (except apparently AlphaZero) conduct a series of simulated games (playouts) and determine the scoring % for each candidate move investigated at a given ply. Then a given number of moves are kept and the others discarded (I don't know how this number is determined; I suspect each engine using MCTS has its own criteria).

The reason that AlphaZero, LeelaC0, and other engines using MCTS (including Komodo, where you can select either classic minimax or MCTS) evaluate far fewer nodes than classic engines using minimax is that conducting simulated game playouts takes a lot more time than executing an evaluation function (except for AlphaZero which, according to what has been disclosed, doesn't do simulated game playouts but gets the scoring % information from its neural net, whatever that means). Thus AlphaZero, even with much more computational hardware capability than Stockfish (I estimated it at 80X more in the system configurations used in their matches), does not evaluate nearly as many nodes as Stockfish. It simply didn't have enough horsepower.
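The playout idea is easy to demonstrate on a toy game, here Nim (take 1-3 stones from a pile; whoever takes the last stone wins). This sketch shows only the playout/scoring step, with none of MCTS's tree growth or selection policy, and the pile size and playout count are arbitrary choices.

```python
import random

def playout(pile, rng):
    """Finish the game with uniformly random moves; return True if the
    player to move at the start of the playout takes the last stone."""
    first_player_turn = True
    winner_is_first = False
    while pile > 0:
        pile -= rng.randint(1, min(3, pile))
        if pile == 0:
            winner_is_first = first_player_turn
        first_player_turn = not first_player_turn
    return winner_is_first

def score_moves_by_playouts(pile, n_playouts, rng):
    """Score each root move by the fraction of random playouts won:
    the playout/scoring step of MCTS, without any tree growth."""
    scores = {}
    for take in range(1, min(3, pile) + 1):
        # After our move the opponent moves first in the playout,
        # so we win exactly when the playout's first player loses.
        wins = sum(not playout(pile - take, rng) for _ in range(n_playouts))
        scores[take] = wins / n_playouts
    return scores

rng = random.Random(7)
scores = score_moves_by_playouts(5, 2000, rng)
best = max(scores, key=scores.get)
print(best, scores)   # taking 1 (leaving a multiple of 4) scores highest
```

Note how the cost structure matches the point above: each "evaluation" here is an entire simulated game, so an MCTS-style engine gets through far fewer root evaluations per second than one calling a cheap static evaluation function.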

I think it's funny that DeepMind used this as a way of touting how much more "efficient" AlphaZero is than Stockfish, since it can get better scoring results while evaluating a much smaller number of nodes. Of course, it all depends on how you define "efficiency". If you define it as DeepMind does, then AlphaZero is clearly more "efficient". But if you define it by how "efficiently" positions can be evaluated, then Stockfish is much more "efficient". So, as marketeers are fond of saying, "If life gives you a lemon, make lemonade".

Komodo provides <one> method of comparing "efficiency". It currently comes in two versions: a default classic one using hand-crafted evaluations, alpha-beta and search-tree pruning heuristics, and minimax to evaluate nodes and select the line containing best play by both sides (in the minimax sense), and an optional version which uses simulated game playouts to determine scoring % and MCTS to determine the best move and prune its search tree. Tonight I'll make some single-threaded analysis runs (my computer only has 4 cores) of the positions after 25.Rfd3 and 25...0-0 and report on how "efficient" they both were and how their results compare with Stockfish's.

Jun-17-20  Brenin: Many thanks to <AylerKupp> and <TheaN> for their responses. When I questioned the question mark Stockfish had awarded to 25 Rfd3 I didn't expect such thoroughness and expertise. As someone who used to teach the theory of algorithms, I found it interesting to see how unreliable and inconsistent the outputs can be in this case.
Premium Chessgames Member
  perfidious: And Black's huddled masses on the queenside are all mere lookers-on, his king bereft of succour on the opposite wing.
Premium Chessgames Member
  AylerKupp: <TheaN> Unfortunately my run last night crashed because of a bad parameter in my Komodo MCTS analysis. I'm swamped at the moment but I will redo it as soon as I can.
Premium Chessgames Member
  AylerKupp: <TheaN> Phew! It took me much more time than I thought because I ran into some issues. But I learned a lot in the process in case you and others are interested.

1. As I mentioned above, my initial attempt at running Komodo 12.2 MCTS failed. I was running 2 instances of Komodo 12.2 MMax (i.e., the standard version using minimax) and 2 instances of Komodo MCTS, each with threads=1 and Hash Table=256. When analyzing a single game I usually run an engine with threads=4 and Hash Table=1024. But only one instance of Komodo MCTS crashed, so I figured I was running out of RAM (my old 32-bit computer can only support 4 GB of RAM, of which only about 3.2 GB are usable).

2. On my second attempt I ran only 1 instance each of Komodo MMax and Komodo MCTS for the position after 25.Rfd3. This time Komodo MCTS stopped at d=26 even though I let it run all night for a total of about 7.5 hrs. And it "only" took it about 1.5 hrs to reach d=25. Similarly for the position after 25...0-0 Komodo MCTS only reached d=23 and then stopped.

I read more about Komodo MCTS (I don't use it much) and it turns out that it has a separate hash table, MCTS Hash, and when that is full Komodo MCTS simply stops. I had been running it with MCTS Hash = 128 (MB), so I gradually increased the size of the MCTS Hash table to 384 MB and then 512 MB and repeated the analysis. But even though the MCTS Hash table was now 4X larger than initially and I let Komodo MCTS run all night, I couldn't get it beyond d=26 and d=23 for the analyses of the positions after 25.Rfd3 and 25...0-0, so I gave up. I'll have to wait until I get my new computer with lots and lots of RAM.

3. But the Komodo MCTS evals looked strange, more than 2X greater than the evals of Komodo MMax. I don't know if you know, but since Komodo MCTS does not evaluate positions in equivalent pawns the way classic engines with hand-crafted evaluation functions do, it "estimates" the evaluation in equivalent pawns (in the range [-128, +128]) corresponding to an expected score in the range [-1, +1]. And I didn't know how it did that.

But a technical description of LeelaC0 was published in early 2019 describing how LeelaC0 does this conversion, and it involves a funky function using "magic" constants and tangents. I suspect that Komodo MCTS does something similar but I'm not really sure how similar.

At any rate, the initial LeelaC0 conversion formula also gives high evaluation values, and it was updated early this year. The updated conversion formula gives values that are on average about 40% of the original values. So I thought I would create a lookup table with 0.01 resolution in expected score, calculate correction factors based on the ratio between LeelaC0's original and updated conversion formulas, and apply these correction factors to the Komodo MCTS evaluations.

Fine in theory. I didn't think that a curve of the correction factors would form a straight line, but I thought it would at least be a smooth function. Fat chance. The curve of the correction factors looks like a closing brace ("}") lying on its side, and it is neither smooth nor monotonic. At least it was symmetrical, so I took advantage of that to reduce the size of the table by a factor of 2. However, the non-monotonicity of the table created some hiccups when using Excel's table-lookup functions, but I figured out a way around that.
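A sketch of this kind of lookup table. Both conversions below have the tangent form cp = a·tan(b·q) described above, but the constants a and b are placeholders I chose purely for illustration, not Lc0's or Komodo's actual published values; only the shape of the exercise is the point.

```python
import math

# Expected score q in (-1, 1)  ->  centipawn-like eval, cp = a * tan(b*q).
# CAUTION: a and b are made-up placeholder constants, not Lc0's real ones.
def cp_original(q):
    return 290.0 * math.tan(1.548 * q)

def cp_updated(q):
    return 111.7 * math.tan(1.562 * q)

# Correction-factor table at 0.01 resolution in q. The factor is an even
# function of q (both conversions are odd), so storing only q > 0 halves
# the table size, as described in the post. q = 0 is skipped (0/0).
table = {i: cp_updated(i / 100) / cp_original(i / 100) for i in range(1, 100)}

def corrected_eval(eval_original, q):
    """Rescale an original-formula evaluation toward the updated scale."""
    key = min(99, max(1, round(abs(q) * 100)))
    return eval_original * table[key]

print(round(table[1], 3))    # ~0.39: updated values ~40% of the originals
print(round(table[99], 3))   # larger near |q| = 1: the factor is not constant
```

Even with placeholder constants, the ratio sits near 40% over most of the range and then climbs as |q| approaches 1, which is why a single flat correction factor would not do and a per-q table is needed.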

The results seem promising. Komodo MCTS evaluations are now much closer to the Komodo MMax evaluations, by a factor of about 2.5X on the average for the analyses of the positions after 25.Rfd3 and 25...0-0. Of course, I have no idea how accurate these evaluations are in the absolute sense.

Should you or others wish to see the comparisons, you can download the Excel spreadsheet, or *.pdf versions of the analyses after 25.Rfd3 and 25...0-0.



This game is type: CLASSICAL.


Featured in the Following Game Collections
snyggt damoffer!
from xfer's favorite games 2006 by xfer
Waiting Moves
by patzer2
by Morales
raimondi's favorite games
by raimondi
Winawer (Nunn) Q sack on h-file
from lampton's favorite games by lampton
Move 28 White to play
from's most interesting chess puzzles by ahmadov
from Rook Lifts by chessic eric
French Caro
by regi sidal
Instructive Tactical Finishes
by Easy Point
27 +8.0 Bf6 0.0 Nxc6
from Game collection: TIM by mughug
28.? (Wednesday, August 31)
from Puzzle of the Day 2005 by Phony Benoni
CLUB Line (white): French
by lomez
28.? (August 31, 2005)
from Wednesday Puzzles, 2004-2010 by Phony Benoni
by obrit
French Def. Steinitz. Boleslavsky (C11)1-0 Q sac, Anastasia's #
from PM Joins the Under 30 Crowd@Fredthebear's Place by fredthebear
Steinitz. Boleslavsky Variation
from MKD's French Defense by MKD
French Def. Steinitz. Boleslavsky (C11)1-0 Q sac, Anastasia's #
from N O P Players Bac by fredthebear


Copyright 2001-2023, Chessgames Services LLC