
🏆 FIDE Grand Prix Zug (2013)

  PARTICIPANTS (sorted by highest achieved rating)
Fabiano Caruana, Shakhriyar Mamedyarov, Veselin Topalov, Hikaru Nakamura, Anish Giri, Teimour Radjabov, Sergey Karjakin, Alexander Morozevich, Ruslan Ponomariov, Peter Leko, Gata Kamsky, Rustam Kasimdzhanov

 page 1 of 3; games 1-25 of 66  PGN Download
Game | Result | Moves | Year | Event/Locale | Opening
1. Mamedyarov vs Ponomariov | ½-½ | 71 | 2013 | FIDE Grand Prix Zug | D37 Queen's Gambit Declined
2. Giri vs Topalov | ½-½ | 36 | 2013 | FIDE Grand Prix Zug | D76 Neo-Grunfeld, 6.cd Nxd5, 7.O-O Nb6
3. Leko vs Kamsky | ½-½ | 37 | 2013 | FIDE Grand Prix Zug | C80 Ruy Lopez, Open
4. Caruana vs Radjabov | 1-0 | 51 | 2013 | FIDE Grand Prix Zug | C63 Ruy Lopez, Schliemann Defense
5. Karjakin vs Nakamura | ½-½ | 107 | 2013 | FIDE Grand Prix Zug | C11 French
6. Morozevich vs Kasimdzhanov | 1-0 | 53 | 2013 | FIDE Grand Prix Zug | A07 King's Indian Attack
7. Morozevich vs Mamedyarov | ½-½ | 29 | 2013 | FIDE Grand Prix Zug | D90 Grunfeld
8. Nakamura vs Giri | ½-½ | 71 | 2013 | FIDE Grand Prix Zug | E11 Bogo-Indian Defense
9. Radjabov vs Karjakin | ½-½ | 39 | 2013 | FIDE Grand Prix Zug | D85 Grunfeld
10. Kasimdzhanov vs Kamsky | 1-0 | 49 | 2013 | FIDE Grand Prix Zug | C03 French, Tarrasch
11. Topalov vs Leko | 1-0 | 58 | 2013 | FIDE Grand Prix Zug | D38 Queen's Gambit Declined, Ragozin Variation
12. Ponomariov vs Caruana | 1-0 | 77 | 2013 | FIDE Grand Prix Zug | C78 Ruy Lopez
13. Kamsky vs Topalov | ½-½ | 57 | 2013 | FIDE Grand Prix Zug | B51 Sicilian, Canal-Sokolsky (Rossolimo) Attack
14. Giri vs Radjabov | ½-½ | 19 | 2013 | FIDE Grand Prix Zug | E06 Catalan, Closed, 5.Nf3
15. Mamedyarov vs Kasimdzhanov | ½-½ | 47 | 2013 | FIDE Grand Prix Zug | D31 Queen's Gambit Declined
16. Caruana vs Morozevich | ½-½ | 46 | 2013 | FIDE Grand Prix Zug | C92 Ruy Lopez, Closed
17. Karjakin vs Ponomariov | ½-½ | 54 | 2013 | FIDE Grand Prix Zug | C45 Scotch Game
18. Leko vs Nakamura | ½-½ | 59 | 2013 | FIDE Grand Prix Zug | C11 French
19. Kasimdzhanov vs Topalov | ½-½ | 43 | 2013 | FIDE Grand Prix Zug | E94 King's Indian, Orthodox
20. Ponomariov vs Giri | ½-½ | 37 | 2013 | FIDE Grand Prix Zug | C78 Ruy Lopez
21. Mamedyarov vs Caruana | ½-½ | 46 | 2013 | FIDE Grand Prix Zug | D85 Grunfeld
22. Morozevich vs Karjakin | ½-½ | 46 | 2013 | FIDE Grand Prix Zug | E34 Nimzo-Indian, Classical, Noa Variation
23. Radjabov vs Leko | ½-½ | 79 | 2013 | FIDE Grand Prix Zug | D38 Queen's Gambit Declined, Ragozin Variation
24. Nakamura vs Kamsky | ½-½ | 70 | 2013 | FIDE Grand Prix Zug | D10 Queen's Gambit Declined Slav
25. Topalov vs Nakamura | 1-0 | 72 | 2013 | FIDE Grand Prix Zug | C78 Ruy Lopez

Kibitzer's Corner
< Earlier Kibitzing  · PAGE 20 OF 21 ·  Later Kibitzing>
May-02-13  badest: How much does one need to deviate from engine moves in order not to be considered a cheat?

Just asking ...

May-02-13  nimh: <pbercker>

True, I agree with you. I can add two more traits: an unusually high tolerance for the difficulty of positions, and indifference to how logical or normal moves appear to be.

It's common knowledge that engines usually play relatively less accurately in 'simple' positions and excel in complications compared to humans.

An engine might want to play a rook lift to the third rank, or place a knight on the rim, while there are plenty of more 'normal', equally good moves available as well.

The most famous example I know is as follows:

C Allwermann vs S Kalinitschew, 1999



<I showed the position to a number of players in Wijk aan Zee, and all gave me simple wins – for instance 31.Rxb7, 31.Rd7 or even 31.Rxf6. Remember, the first time control is looming and tournament victory is in grasp.

So what does our hero play? 31.Qa7?!! “Fritzy!” squealed Anand and went into uncontrollable fits of laughter when he saw this and the following moves (I filmed his mirth and included it in my multimedia report in ChessBase Magazine 69). He and the other players immediately recognised the “hand” of the computer.>

http://en.chessbase.com/Home/TabId/...

May-02-13  sofouuk: <AylerKupp> here is one article <http://www.chess.com/article/view/c...> that gives two examples of his computer-like play; do play through the whole game before you draw any conclusions. But the really damning point is not that he plays so well, but that his performance level has fluctuated so ridiculously:

(reposted from <Borislav Ivanov>)<<galdur>You don't swing from 2000 to 2700 to 2000 back and then 2700 again within a year or less. It's a zillion to one. But the question is how does he do it.>

May-02-13  sofouuk: on the other hand, Ivanov's most famous quote is: <Of course I practiced a lot with the computer, and after beating Rybka and Houdini by 10-0 each, I was absolutely sure that no-one was gonna stop me winning> - it's pretty clear he was trying, very unsuccessfully, to be funny, but it's usually quoted as if he were being serious, in an effort to make him look like a liar and a fantasist
May-02-13  badest: In any case, it is very wrong of ChessBase to post his name without any concrete evidence.

I didn't think there was a "yellow press" in the chess community, but obviously there is.

May-02-13  pbercker: @ <sofouuk>
Thanks for the link. Quite revealing ... have a look at this one ... a lot of move-for-move correlation with Houdini 2c, but he tries to give Ivanov the benefit of the doubt ...

http://www.youtube.com/watch?v=cx0n...

it's partly in response to Lilov's video analysis of Ivanov's "alleged" cheating ...

May-02-13  pbercker: <Aylerkupp: So they are effectively saying that no player, regardless of rating, can base their next move on the basis of their previous move and that the results assume that these agents representing players, even those rated at 2700, are incapable of following plans or seeing more than one move ahead. But this assumption is required in order to be able to use their statistical techniques even though they acknowledge that "It stands in contradistinction to the reality that an actual player’s consecutive moves are often part of an overall plan." >

To the best of my (limited) understanding:

I think the idea is that agent A is a model for <what> a potential human player P does, namely selecting the best of the available moves with a probability that increases the stronger the player is, but it does not model <how> a human sometimes (or often) does it, namely by following a retrospective plan, though often also just through prospective calculation of some number of moves ahead (a toy numerical sketch of the <what> part follows at the end of this post).

Consider a human being who has a memory span of a few minutes only, so that making plans is impossible, since they are forgotten. However, if that human being had exceptional calculating abilities and could see many moves ahead, he could play excellent chess (sadly, there are indeed such cases).

Rough analogy: using crash test dummies to simulate <what> happens to humans in a crash, not <how> a human crashed the car (drunk driving, or whatever).
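
(A toy numerical sketch of the "fallible agent" idea described above, in Python. This is purely illustrative and is not the actual model or functional form used in the research discussed in this thread: it simply makes the probability of playing a move decay with how much worse it is than the best move, so a "stronger" agent concentrates its choices on the top candidates.)

import math
import random

def pick_move(losses_in_pawns, s):
    # losses_in_pawns[i] = how much move i loses versus the engine's best move
    # (0.0 for the best move itself). s is a hypothetical 'sensitivity' parameter:
    # smaller s means a stronger, more selective agent.
    weights = [math.exp(-loss / s) for loss in losses_in_pawns]
    total = sum(weights)
    r = random.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

# With losses of [0.0, 0.1, 0.5, 1.2] pawns, an agent with s = 0.1 almost always
# plays move 0, while an agent with s = 1.0 spreads its choices much more evenly.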

May-02-13  benjinathan: <AylerKupp> wouldn't a computer on low ply be possibly a better way to catch a cheater, depending on how the cheating is happening?

For example, I am playing a game and decide to go to the washroom to look at my pocket computer. I do not have time to let the software get to high ply and therefore play the move that comes in after a minute or so of calculating.

I am not sure how letting a computer get to high ply helps cheaters since it would seem impractical for the cheater's computer to ever get that high.

Or perhaps I am missing something.

I wonder how many times/moves a player would have to cheat on in order to go from a 2000 Elo to a 2700 Elo? I guess it would depend on the position/type of game.

May-02-13  pbercker: <aylerkupp: The third model, the one actually used, made the projections more accurate after they were able to "motivate it". Frankly, if I have an expectation as to what the model should predict and I am allowed to "motivate it", I'm not at all surprised that a "miracle" occurred and there was a good correlation between the actual and predicted results.>

I think you have the wrong sense of what it means to "motivate it" here. This is typical "academe speak", but it essentially means "justify", i.e. "our motive for using this methodology is so and so ..."

All they're doing is trying to find the best curve to fit the data, and there's lots of different ways of doing so as you can see below. Note that the data is fixed and unalterable; it's the best curve that's subject to change.

It would be a problem if it went the other way around, namely if they somehow forced the data into a preexisting curve they favored ahead of time. That wouldn't be a miracle ... it would just be cheating!

http://en.wikipedia.org/wiki/Curve_...
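
(To make the point above concrete, a minimal curve-fitting sketch in Python with SciPy. The exponential model and the data points are made up purely for illustration; the observations stay fixed, and only the curve's parameters are adjusted to minimize the fitting error.)

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    # the candidate family of curves; a, b, c are the parameters being fitted
    return a * np.exp(-b * x) + c

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.9, 2.1, 1.6, 1.3, 1.1, 1.0])  # the fixed, unalterable data

params, _ = curve_fit(model, x, y)
print("fitted parameters:", params)  # the curve changes, the data never does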

May-02-13  pbercker: @ <benjinathan>

The idea is that calculating at a high ply level will return a list of possible moves ordered according to evaluation, from high to low. If the human player's move matches, say, one of the 3 highest-scoring moves again and again and again, as calculated by a very strong engine, that should raise suspicion that cheating may have occurred.
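
(A rough sketch of that "top-3 match rate" idea in Python, using the python-chess library with any UCI engine. The engine path, search depth, move cut-off and the choice of top 3 are all illustrative assumptions, not anyone's published methodology.)

import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "/usr/bin/stockfish"   # assumption: any UCI engine will do
DEPTH = 20                           # "high ply" is left to the reader
TOP_N = 3

def top_n_match_rate(pgn_path, player_name):
    # Fraction of player_name's moves (after move 10, to skip book theory)
    # that appear among the engine's TOP_N highest-ranked moves at fixed depth.
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    board = game.board()
    matches = total = 0
    for move in game.mainline_moves():
        mover = game.headers["White"] if board.turn == chess.WHITE else game.headers["Black"]
        if mover == player_name and board.fullmove_number > 10:
            infos = engine.analyse(board, chess.engine.Limit(depth=DEPTH), multipv=TOP_N)
            best_moves = {info["pv"][0] for info in infos}
            matches += move in best_moves
            total += 1
        board.push(move)
    engine.quit()
    return matches / total if total else 0.0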

Given that he beat several GMs, that essentially rules out that, IF a computer was used, it was used at low ply, since that would not likely beat even an average GM.

It's unlikely that, IF he used a computer, he had access to it directly. More likely he would have a confederate somewhere on the outside to whom he communicates the moves, and who communicates the computer's moves in return. It's interesting to note that a game against a GM was going quite well for him until suddenly the live feed of the games was turned off, and he lost his game shortly after that! That suggests that all he needed was one-way communication from his confederate to himself.

May-02-13  benjinathan: <pbercker> But would not the list of candidates be different at high depth than at low depth? perhaps vastly different?

<IF a computer was used, that it was used on low-ply, since that would not likely beat even an average GM.>

You could be right, but I wonder about that. Wouldn't Rybka at depth 18 (the usual depth on the live broadcasts at ChessOK) usually beat a 2500-2650 GM? I don't know the answer to that.

May-02-13
Premium Chessgames Member
  KWRegan: Besides the responses especially by "pbercker"---which I second and greatly appreciate---here are a few answers and comments:

(1) If you run engines in multi-thread mode, or with different (power-of-2) hash-table sizes or different multi-pv settings, or for different amounts of time, you will indeed get different values and often different "best moves".

(2) But if you run in single-thread mode to a fixed search depth, with the same hash and multi-pv and deterministic parameters (which Rybka and Houdini and Stockfish and Toga II etc. have ALWAYS had), you will generally get the same results. This helps for internal consistency and reproducibility. Of course cheaters' accomplices would be doing multi-thread, variable-time (including looking ahead while opponent is thinking), single-pv mode with I-don't-know-what hash size... But we cope with this difference because the approach is statistical to begin with. (A minimal sketch of such a fixed-depth, multi-pv setup follows at the end of this post.)

(3) AylerKupp, I indeed have results on Sofia Polgar from 1989, and others being talked about, at http://www.cse.buffalo.edu/~regan/c... The difference from what one might regard as "proven" is clear!

(4) My theory has internal as well as external consistency checks, such as from multiple fitting procedures. My own first curve in 2008 was very bad, and Haworth previously had a similar one, but my own second curve was good(!), similar to how "equal temperament" makes all the world's pianos slightly out of tune but good enough.

(5) Rybka bunches the bottom 4 levels of the search into 1, so what it reports as depth 13 is more like 16 or 17 for comparison to other engines. In fixed-depth engine-match tests Rybka 3 at depth 13 is similar in strength to Houdini 3 at depth 17 and Stockfish 2.3.1 at depth 19-20 (I use 19 for speed and to have an odd search depth; I may go up to 21). Larry Kaufman estimated Rybka 3 depth 14 as Elo 2750; Rybka 4.1 reaches this depth faster but is a little weaker; I think it's really 2900+ in the early middlegame, sliding down under 2500 in endgames where depth 13 happens really fast. Guid and Bratko did Rybka 3 only to depth 10 (http://en.chessbase.com/home/TabId/...), which is estimated over 200 Elo weaker. We have had no joint participation.

(6) The main flaw with the Guid-Bratko work---still in their latest paper---is that it does not adjust for the overall level of challenge in a game---or they try to with their "complexity" measure but it is artificial and weak. To do so, IMHO one needs to analyze all reasonable moves to equal thoroughness, such as in Many-PV mode. Nor does their work observe an important scaling effect with the overall evaluation, nor is it capable of producing internal confidence intervals ("error bars"). It is OK for relative ranks but not for an absolute quality measure. They do not have remotely near as much comparison data for scientific control.

They necessarily have the same assumption of independence of moves. I agree with AylerKupp's critique of that, but it should not cause bias. Rather the main effect should (only) be a lowering of the effective sample size (i.e., the number N of moves), which is reflected by an empirically-based adjustment I make to the internal error bars.

I'll respond to new questions or to the above in greater depth if I have time---this Morozevich kerfuffle is already diverting my time during my exams week.
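
(Below is a minimal sketch of the kind of reproducible, single-thread, fixed-depth, multi-PV run described in point (2) above, using the python-chess library. The engine path, depth, and option values are illustrative assumptions, not the actual toolchain used for the research discussed here.)

import chess
import chess.engine

ENGINE_PATH = "/usr/bin/stockfish"  # assumption: any UCI engine will do

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
engine.configure({"Threads": 1, "Hash": 256})  # single thread + fixed hash for reproducibility

board = chess.Board()  # substitute any position of interest
infos = engine.analyse(board, chess.engine.Limit(depth=19), multipv=5)  # fixed depth, multi-PV
for info in infos:
    print(info["multipv"], board.san(info["pv"][0]), info["score"].white())
engine.quit()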

May-02-13  nimh: <KWRegan>

When will your research include taking into account changes in the accuracy of play due to different time controls and difficulty of positions? If possible, practical play (sacrificing the accuracy of play in favour of other goals, such as, for example, getting easier positions to play, or rejecting lines that lead to a drawish game in a must-win situation) should also be included, but since it's not easy, it's not a very big sin to omit that.

Everyone who has even slight experience in playing chess knows that chess skill is not the only factor that determines the accuracy of play. Your research would gain a lot in credibility if you included methods for ascertaining other factors.

May-02-13
Premium Chessgames Member
  KWRegan: I have evidence that Blitz (5'/game or 3' + 2"/move, both give the same time to 60 moves) depresses quality about 575-600 Elo for the top players, the 25' + 10"/move (= 35' for 60) used in recent Ambers about 200 Elo, and the faster World Rapid Championships pace of 15' + 10"/move (= 25' for 60) about 300 Elo. An in-depth study of games between 2600 players in all years shows about 15-20 Elo quality reduction from the 1980's to the 2000's, which I ascribe to the faster time controls and elimination of adjournments (rather than "inflation"). Publishing this will await my current full integration of Houdini and Stockfish and maybe other engines into my "panel". Yes, then I will know exactly how to adjust my quality measures to the time control...including my current work on measures that reward risk-takers for creating "challenge". And it would be nice if others could help gather data---current directions at http://www.cse.buffalo.edu/~regan/c...
May-02-13  Just Another Master: <nimh> Is that some special avatar? It scares me. <KWRegan> interesting stuff, thx for the knowledge, cheers.
May-03-13
Premium Chessgames Member
  HeMateMe: The Zugs "Kill for Peace".

<http://www.youtube.com/watch?v=p7VQ...>

May-03-13  Eyal: <<nimh> Is that some special avatar? It scares me.>

Nobody expects the Spanish Inquisition!

http://www.youtube.com/watch?v=vt0Y...

May-03-13  nimh: I didn't know it was from Monty Python when I picked an avatar at random. :)

<KWRegan>

Thanks, it's good to hear that you're going to include methods for adjusting the accuracy of play according to time controls. But what about difficulty of positions? This factor is even more important than time controls.

May-03-13
Premium Chessgames Member
  KWRegan: Difficulty / "challenge created" is exactly what the in-progress upgrade to my work is trying to capture. It adds a depth parameter that tracks whether and which moves "swing up" or "swing down" in value as the depth increases. Difficulty is related both to the amount of "swing" and to the gradient of values already present in my current model.
May-03-13
Premium Chessgames Member
  AylerKupp: <sofouuk> OK, I read the article. I have to say that it's been a long time since I had a rating but at my best it was no better than the low 1800s. But in the first game I saw 31.Qa7 in less than 2 seconds since it seemed very logical; Black king in corner, vulnerable first rank, and overloaded rook. And, yes, it was the first move that came to my mind. Why consider prosaic moves like 31.Rxb7? So I am not surprised that Allwermann saw it.

As far as the second game yes, I agree that pointing out a forced mate in 8 is not the type of thing that one thinks about when your opponent is resigning. But that doesn't mean that the forced mate was not calculated earlier and Allwermann was just pointing it out after his opponent resigned.

As far as the third game (the first cited Ivanov game) I don't agree that 22.Ra1 is "pretty similar" or "almost a carbon copy" to 31.Qa7 from Allwerman's first game. Not even close. Black's king is not trapped in the corner, the first rank is not vulnerable, and there are multiple pieces in the way. So it may perhaps look "pretty similar" to GM Gserper but I certainly wouldn't find 22.Ra1 in a million years. That's one reason, probably among many, that he is a GM and I am not.

All of this, of course, doesn't prove that Ivanov was not cheating. I don't know what country you are from but in mine (USA) a person is presumed innocent until proven guilty. And, in order to be proven guilty in US criminal law, the prosecution (accusers) must show that the accused had motive, opportunity, and means. That may not be considered right by all, particularly if the accused is charged with a particularly bad crime and emotions are high, but it does make sense to me, even if it is different in other countries.

Ivanov certainly had motive (winning a tournament or at least doing better than expected) and he certainly had opportunity (playing in the tournament). But what was his means? If the accusers believe that Ivanov was cheating, how are they saying that he accomplished this? Without showing how he cheated, they have no proof. And without proof accusations of cheating are, at best, irresponsible and, at worst, subject to a libel suit.

I have found more articles and several videos that I need to watch and I haven't made up my mind with regards to Ivanov's cheating, but if "the question is, how does he do it" has not been answered, then as far as I'm concerned there is insufficient evidence to show that he was cheating. And I certainly wouldn't take what I consider flawed computer analysis as "proof".

But maybe there is more definitive proof in the articles and videos that I have not yet seen. I do think that Ivanov did not help himself by his responses to his interview by Maria Grigoryan in http://en.chessbase.com/Home/TabId/.... He tried to make fun of the entire situation and he apparently doesn't realize that this can potentially be very damaging to his reputation and that it needs to be treated seriously. Oh well, he's only 25 years old, he has time to learn.

May-03-13
Premium Chessgames Member
  AylerKupp: <benjinathan> It really doesn't have anything to do with cheating, I was addressing the issue of trying to determine a player's strength by comparing the moves that he/she makes with the moves that a computer considers best. The more often that the player does not make the move considered to be best by the computer, the weaker the player is perceived to be.

In these cases the analyses have been done by comparing the computer's considered "best move" with the move that the player actually played, which presumably is the move that the player thought was the best move. This is not necessarily the case (Lasker comes immediately to mind) but I would think that it is the case the vast majority of the time.

Now, the move that the computer considers best at low search ply may simply not be the objectively best move. Running it to a high search ply doesn't guarantee that it is the best move either, but I would say that my confidence that a computer's selected move is indeed the best in a given position is greater if the search is conducted at high ply than if it is stopped at low ply. Sure, there will always be exceptions but that is likely to be the case.

Now, if a player plays a move other than what the computer selects as the best move, the player is considered relatively weaker. But I am not ready to accept that the computer's move when limited to a low search ply is stronger than the move that the player selected. I am more willing to accept that situation at higher search plies, and very often the computer's selected best move at a higher ply will be different from the computer's selected best move at lower plies. I have seen many instances in engine vs. engine tournaments where an engine's play, most often Rybka's, just falls apart when, in time trouble (which seems to happen to Rybka a lot), its search ply is limited.

As far as how many times/moves a player would have to cheat on in order to go from a 2000 Elo to a 2700 Elo, it depends entirely on the player's opponents' ratings and how badly they played relative to the player. Elo ratings are calculated on the basis of the difference between a player's rating and his opponent's rating and on the game results. That, and the so-called "K-factor", which in turn is a function of the player's rating. The type of position or game doesn't matter at all.
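
(For reference, the standard Elo expectation and update formulas that the post above refers to, as a small Python sketch. The K value shown is just one common choice; FIDE uses different K-factors depending on the player's rating and history.)

def expected_score(rating_a, rating_b):
    # Expected score of player A against player B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def updated_rating(rating, opponent, score, k=20.0):
    # New rating after one game; score is 1 for a win, 0.5 for a draw, 0 for a loss.
    return rating + k * (score - expected_score(rating, opponent))

# A 2000-rated player beating a 2600-rated opponent gains almost the full K-factor:
print(round(updated_rating(2000, 2600, 1.0, k=20) - 2000, 1))  # about 19.4 points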

May-03-13
Premium Chessgames Member
  AylerKupp: <pbercker> I think that I understand that a (fallible) agent A is intended to be a model for what a potential human player P does. My point is that discarding any previously available information is <not> what potential player P does, certainly not if P is of master strength (the level to which most of the research has been applied). So determining the "best" move by making use of available information (previous analyses) can make a (strong) human player (much) stronger than a computer, particularly if the computer is limited to a fixed search depth. As an example, match an opening-theory-savvy human against a computer deprived of an opening book and see the results you get.

And, of course, in the case of dummies crashing there is no a priori information available to the dummies to help them reduce the amount of injury in a crash, so I don't think that is a good analogy in this case. Except, sadly, the description of "dummies crashing" applies to my chess game all too well.

More later, this is getting interesting, although I would think that it is off-topic for this particular page. But I can't suggest an alternative.

May-03-13
Premium Chessgames Member
  AylerKupp: <KWRegan> Thank you for joining this discussion and taking the time to provide additional data, particularly during exams week. Nothing like getting information from "the horse's mouth" if you know what I mean. I will digest your posts tomorrow since after a bottle of Chardonnay I'm not sure that I could do them justice. And I hope that you accept that my remarks are based on healthy skepticism about achieving confidence in the analysis results and player strength assessments when engines are run at a low fixed search depth and nothing more.

Don't ask me why but on my way to dinner tonight I had two ideas that I wonder if you ever considered:

(1) You have indicated in your papers that you have run tests on 150,000+ games databases. Today multi-million game databases are not that uncommon; as of today ChessTempo's games database contains more than 2.5M games, and includes more than 800K games where both players are rated 2200+.

So my thought is: If instead of your 150,000+ games database you were to select N games from the 2.5M+ games database at random, how many games would need to be analyzed in order to have, say, ±5% confidence in the accuracy of the results? If the number were reasonably small (and I will let you decide what "reasonably small" means to you), would it be feasible to run your analyses at substantially higher fixed search depths (say d=24 for Rybka, d=28-29 for Houdini, and d=33 for Stockfish) to see if they match your results at lower search depths? That would seem to validate the concept that, within reason, the search depth is not a significant factor.

(2) Arranging a match between one or more master-level players (the more the better, of course) and Rybka 3, with the master-level players restricted to classical time controls (either 40 moves in 2 hours or 40 moves in 90 minutes) and Rybka 3 limited to a search depth = 13 (with PONDER=OFF, of course). Then, presumably, the "entity" that won the most games is playing the best moves. So, if the chess engine were to win a large majority of the games then it would indicate that, indeed, using a chess engine under those conditions is a suitable way of assessing a player's strength by comparing its moves against the engine's. If, on the other hand, the human player were to win the large majority of the games, then at least in my mind it would not be proper to use the engine's "best" moves as a benchmark for assessing the human player's overall strength. If the results of the match were evenly balanced, well, that would be the subject for another discussion.

Best of all, IMO, would be to arrange such a match between Rybka 3 and Mr. Ivanov and see if he could still beat Rybka 3 by a score of 10-0 (which I suspect was also said tongue-in-cheek).

And, BTW, what is the "Morozevich kerfuffle" ?

May-04-13  badest: <AylerKupp ... So my thought is: If instead of your 150,000+ games database you were to select N games from the 2.5M+ games database at random, how many games would need to be analyzed in order to have, say, ±5% confidence in the accuracy of the results? If the number were reasonably small ...> It's been 37 years since I had statistics, but I think that the sample size can be very small, e.g. about 1,000 for a ±3% margin of error. See:

http://en.wikipedia.org/wiki/Sample...
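
(A quick check of that figure using the usual sample-size formula for estimating a proportion, n = z^2 * p(1-p) / e^2, at roughly 95% confidence with the worst case p = 0.5. The formula is standard; applying it to samples of games is only an illustration.)

def sample_size(margin, z=1.96, p=0.5):
    # Sample size needed to estimate a proportion within +/- margin at ~95% confidence.
    return int(round(z * z * p * (1 - p) / (margin * margin)))

print(sample_size(0.03))  # about 1067 -- roughly the "about a thousand" mentioned above
print(sample_size(0.05))  # about 384 for a +/-5% margin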

<(2) Arranging a match between one or more master-level players (the more the better, of course) and Rybka 3 , with the master-level players restricted to classical time controls (either 40 moves in 2 hours or 40 moves in 90 minutes) and the Rybka 3 limited to a search depth = 13 ...> Now that would be really interesting. However, don't you think 13 is way too low?

May-04-13
Premium Chessgames Member
  KWRegan: <AylerKupp,badest> (1) For what I actually did, in a more controlled situation with samples of size 10,000, see the later parts of http://rjlipton.wordpress.com/2011/... (in "another life" I partner on a prominent Math/CS weblog---you may find other articles there of interest). I would be delighted to run more games with needed volunteer help and PC cores---to give an idea of what's involved, see what Guid and Bratko say in http://en.chessbase.com/home/TabId/... on why they stopped Rybka 3 at depth 10. My depth 13 is about (2.4)-cubed = 12x as much work per move (in MultiPV mode make that about 200x per move), and I've run many, many more moves...

(2) You'd have to pay a lot for small data, and even so it would be under exhibition/simulation conditions, whereas the data I use is all from real competition. In comp-on-comp fixed-depth matches, Rybka 3's depth 13 comes out roughly par with Houdini 3 depth 17 and Stockfish 2.3.1 depth 19-20. Rybka groups the bottom 4 levels of its search into 1, and there are strong whispers that its reporting obfuscates, so many believe the depth is "really" 16 or 17. I wish the engine-rating services would conduct fixed-depth matches, though the loss of strength/relevance in endgames is a knock against the concept.
