Member since Dec-31-08 · Last seen Dec-21-14
About Me (in case you care):

Old timer from the Fischer, Reshevsky, Spassky, Petrosian, etc. era. Active while in high school and early college, but not much since. Never rated above the low 1800s and highly erratic; I would occasionally beat much higher rated players and equally often lose to much lower rated players. Highly entertaining combinatorial style; everybody liked to play me since they were never sure what I was going to do (neither did I!). When facing a stronger player, many players try to even the chances by steering toward simple positions where they can see what is going on. My philosophy in those situations was to even the chances by complicating the game to the extent that neither I nor the stronger player could see what was going on! Alas, this approach no longer works in the computer age. And, needless to say, my favorite all-time player is Tal.

I also have a computer background and have been following with interest the developments in computer chess since the days when computers couldn't always recognize illegal moves and a patzer like me could beat them with ease. Now it's me who can't always recognize illegal moves, and any chess program can beat me with ease.

But after about 4 years (a lifetime in computer-related activities) of playing computer-assisted chess, I think I have learned a thing or two about the subject. I have conceitedly defined "AylerKupp's corollary to Murphy's Law" (AKC2ML) as follows:

"If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N."
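The corollary is tongue-in-cheek, but the underlying effect is real: a depth-limited minimax search can flip its preferred move completely when given one more ply. Here is a minimal sketch on a hand-built toy game tree (not a real chess position; all move names and evaluations are invented) in which the refutation of the "best" move only becomes visible at depth N+1:

```python
# Each node is (static_eval, {move: child_node}); an empty dict is a leaf.
# "Move A" looks great statically, but has a killer reply one ply deeper.
A = (3.0, {"...Qd5": (-5.0, {}),   # the refutation, found only at depth 2
           "...h6":  (2.5, {})})
B = (1.0, {"...Qd5": (1.2, {}),
           "...h6":  (1.5, {})})
root = (0.0, {"Move A": A, "Move B": B})

def minimax(node, depth, maximizing):
    evaluation, children = node
    if depth == 0 or not children:
        return evaluation              # cut off: use the static evaluation
    best = max if maximizing else min
    return best(minimax(child, depth - 1, not maximizing)
                for child in children.values())

def best_move(node, depth):
    _, children = node
    return max(children, key=lambda m: minimax(children[m], depth - 1, False))

print(best_move(root, 1))  # depth 1: "Move A" (its static eval looks best)
print(best_move(root, 2))  # depth 2: "Move B" (A's refutation is now seen)
```

The same mechanism, at much greater depths, is what makes the "killer move at depth N+1" experience so common.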

I'm also a food and wine enthusiast. Some of my favorites are German wines (along with French, Italian, US, New Zealand, Australia, Argentina, Spain, ... well, you probably get the idea). One of my early favorites was wine from the Ayler Kupp vineyard in the Saar region, hence my user name. Here is a link to a picture of the village of Ayl with a portion of the Kupp vineyard on the left:

You can send me an e-mail whenever you'd like to aylerkupp(at)

And check out a picture of me with my "partner", Rybka (Aylerkupp / Rybka) from the Masters - Machines Invitational (2011). No, I won't tell you which one is me.


Analysis Tree Spreadsheet (ATSS).

The ATSS is a spreadsheet developed to track the analyses posted by team members in various on-line games (XXXX vs. The World, Team White vs. Team Black, etc.). It is a poor man's database which provides some tools to help organize and find analyses.

I'm in the process of developing a series of tutorials on how to use it and related information. The tutorials are spread all over this forum, so here's a list of the tutorials developed to date and links to them:

Overview: AylerKupp chessforum

Minimax algorithm: AylerKupp chessforum

Principal Variation: AylerKupp chessforum

Finding desired moves: AylerKupp chessforum

Average Move Evaluation Calculator (AMEC): AylerKupp chessforum


ATSS Analysis Viewer

I added a capability to the Analysis Tree Spreadsheet (ATSS) to display each analysis in PGN-viewer style. You can read a brief summary of its capabilities here AylerKupp chessforum and download a beta version for evaluation.


Chess Engine Evaluation Project

Some time ago I started but then dropped a project whose goal was to evaluate different engines' performance in solving the "insane" Sunday puzzles. I'm planning to restart the project with the following goals:

(1) Determine whether various engines are capable of solving the Sunday puzzles within a reasonable amount of time, how long it takes them to do so, and what search depth they require.

(2) Classify the puzzles as Easy, Medium, or Hard based on how many engines successfully solve them, and determine whether any particular engines excel at the Hard puzzles.

(3) Classify the puzzle positions as Open, Semi-Open, or Closed and determine whether any engine excels at one type of position where other engines do not.

(4) Classify the puzzle positions as characteristic of the opening, middlegame, or endgame and determine which engines excel at one phase of the game vs. another.

(5) Compare the evals of the various engines to see whether one engine tends to generate higher or lower evals than other engines for the same position.

If anybody is interested in participating in the restarted project, either post a response in this forum or send me an email. Any comments, suggestions, etc. very welcome.


Ratings Inflation

I have recently become interested in the increase in top player ratings since the mid-1980s and whether this represents a true increase in player strength (and if so, why) or if it is simply a consequence of a larger chess population from which ratings are derived. So I've opened up my forum for discussions on this subject.

I have updated the list that I initially completed in Mar-2013 with the FIDE rating list through 2013, and you can download the complete data from It is quite large (101 MB) and to open it you will need Excel 2007 or later version or a compatible spreadsheet since several of the later tabs contain more than 65,536 rows.

The spreadsheet also contains several charts and summary information. If you are only interested in that and not the actual rating lists, you can download a much smaller (594 KB) spreadsheet containing the charts and summary information from here: You can open this file with a pre-Excel 2007 version or a compatible spreadsheet.

FWIW, after looking at the data I think that ratings inflation, which I define as an increase in ratings not accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis, and it goes something like this:

1. Initially (late 1960s and 1970s) the ratings for the strongest players were fairly constant.

2. In the 1980s the number of rated players began to increase exponentially, and they entered the FIDE-rated chess-playing population mostly at the lower rating levels. The ratings of the stronger of these players increased as a result of playing weaker players, but their ratings were not yet high enough to get them into tournaments, other than open tournaments, where they would meet middle and high rated players.

3. Eventually they did. The ratings of the middle rated players then increased as a result of beating the lower rated players, and the ratings of the lower rated players then leveled out and even started to decline. You can see this effect in the 'Inflation Charts' tab, "Rating Inflation: Nth Player" chart, for the 1500th to 5000th rated player.

4. Once the middle rated players increased their ratings sufficiently, they began to meet the strongest players. And the cycle repeated itself. The ratings of the middle players began to level out and might now be ready to start a decrease. You can see this effect in the same chart for the 100th to 1000th rated player.

5. The ratings of the strongest players, long stable, began to increase as a result of beating the middle rated players. And, because they are at the top of the food chain, their ratings, at least so far, continue to climb. I think that they will eventually level out but if this hypothesis is true there is no force to drive them down so they will stay relatively constant like the pre-1986 10th rated player and the pre-1981 50th rated player. When this leveling out will take place, if it does, and at what level, I have no idea. But a look at the 2013 ratings data indicates that, indeed, it may have already started.

You can see in the chart that the rating increase, leveling off, and decline first starts with the lowest ranking players, then through the middle ranking players, and finally affects the top ranked players. It's not precise, it's not 100% consistent, but it certainly seems evident. And the process takes decades so it's not easy to see unless you look at all the years and many ranked levels.
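One way to sanity-check the mechanics behind this hypothesis: under the standard Elo update, rating points are conserved in any single game, so the players at the top of the "food chain" can only accumulate points that originally entered the pool lower down. A minimal sketch (the K-factor of 10 is just an illustrative choice, not FIDE's actual schedule):

```python
def expected_score(r_a, r_b):
    # Standard Elo expected score for player A against player B
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=10):
    # score_a is 1 for a win, 0.5 for a draw, 0 for a loss by player A.
    # Whatever A gains, B loses: points flow upward, they are not created.
    delta = k * (score_a - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# A 2700 beating a 2500 gains only a couple of points per game, but over
# many games the points that the 2500-level player earlier harvested from
# lower-rated newcomers migrate to the top of the pool.
new_top, new_mid = update(2700, 2500, 1)
```

Inflation, in this picture, comes entirely from the boundary of the pool (new entrants and provisional ratings), which is why the effect propagates upward so slowly.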

Of course, this is just a hypothesis and the chart may look very different 20 years from now. But, at least on the surface, it doesn't sound unreasonable to me.

Any comments, suggestions, criticisms, etc. are both welcomed and encouraged.

------------------- Full Member

   AylerKupp has kibitzed 7382 times to chessgames   [more...]
   Dec-21-14 AylerKupp chessforum
AylerKupp: <isemeria> Thanks for taking the time to think about and comment on some of my thoughts. I have some thoughts in return. I will first say that the evaluation function is not necessarily the essential thing that makes the difference in engines' playing strength. Sure, it's an
   Dec-21-14 The World vs Naiditsch, 2014 (replies)
   Dec-17-14 truefriends chessforum (replies)
   Dec-14-14 Analysis Forum chessforum (replies)
   Dec-11-14 Amsterdam Interzonal (1964) (replies)
AylerKupp: <ColdSong> After Curacao 1962 Fischer accused the Russians of collusion ("The Russians Have Fixed World Chess, ) so that only a Russian (I assume he meant the Soviets since Petrosian, winner of the WCC in 1963, was Armenian and not ...
   Dec-03-14 Fischer vs J Dedinsky, 1964
AylerKupp: Ouch! An obvious oversight, understandable in a simultaneous exhibition. Then again, of all the openings he faced, Fischer seemed to have the most trouble against the French defense.
   Dec-03-14 Fischer vs H Meifert, 1964
AylerKupp: A great effort by Meifert, repeatedly giving up material against Fischer in order to ensure active play for his pieces. A lesson for all of us or at least for me.
   Dec-01-14 Kosteniuk vs N Pogonina, 2014 (replies)
AylerKupp: I find it humorous that the q-side pawn formation after 13.a3 is the same as in The World vs N Pogonina, 2010 . Maybe Kosteniuk has been studying (and improving on) our games? :-)
(replies) indicates a reply to the comment.

De Gustibus Non Disputandum Est

Kibitzer's Corner
Premium Chessgames Member
  AylerKupp: <alan517> Sorry for the delay in responding. I don't check my Profile Page often enough and I have been tied up in the Chessgames Challenge: The World vs Naiditsch, 2014 game as well as other "real life" things.

I don't have much experience with game databases. I know that the Chessbase database is supposed to be the standard and the largest (about 6 M games), but I don't like the fact that they give the statistics in terms of White's winning percentage and not the number of White wins, draws, and losses. For a particular move a White winning percentage of 55% could be based on 20% wins, 70% draws, and 10% losses; or it could be based on 50% wins, 10% draws, and 40% losses. The first one indicates that the move likely leads to a draw while the second one indicates that it will likely lead to a decision. Very different results from the same winning percentage!
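The ambiguity is easy to see with a little arithmetic: the usual "winning percentage" is really a score percentage, wins plus half the draws, so very different W/D/L splits collapse to the same number. A quick sketch using the figures above:

```python
def score_percentage(wins, draws, losses):
    # "Winning percentage" as databases usually report it:
    # a win counts 1, a draw counts 1/2, a loss counts 0.
    games = wins + draws + losses
    return 100.0 * (wins + 0.5 * draws) / games

drawish = score_percentage(20, 70, 10)   # 20% W, 70% D, 10% L
sharp   = score_percentage(50, 10, 40)   # 50% W, 10% D, 40% L
# Both come out to exactly 55.0, yet the first move is a likely draw
# and the second a likely decisive game.
```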

I use 365Chess and ChessTempo. 365Chess has about 3.3 M games in 2 databases, the "big database" and the masters database, where both players are rated 2200+. ChessTempo is slightly smaller, about 2.9 M games, but you can specify that the games be filtered in increments of 100 Elo points: 2200+, 2300+, 2400+, etc. The 365Chess user interface is easier to use but doesn't have as many features, while the ChessTempo database has more filtering capabilities but is somewhat harder to use.

Both charge a nominal yearly amount for access to their full capabilities: $10.00/year for 365Chess and $20.00/year for ChessTempo. But you can try them both for free; you just won't be able to go too many moves into an opening, and (I suspect) the filtering capabilities that they let you use are restricted.

For correspondence games I go directly to the ICCF game archives, which contain all correspondence games played since 2002 or so in *.pgn format. I wrote a parser that converts the games from *.pgn format to *.csv format so that I can import them into Excel. I don't know if you are familiar with my Analysis Tree Spreadsheet (ATSS), which I describe in this Forum's header, but I have a similar version for games that I call the Supplementary Games Spreadsheet (SGSS). My plan is to create one of these spreadsheets for correspondence games for each year since 2002 and, once we are a few moves into a Team game, extract the relevant games from each of the years and combine them into an SGSS for the newly started Team game.

Alas, my parser currently has a bug that I have not had time to fix. It also doesn't handle comments embedded in the moves, and that feature is necessary to be able to process all the correspondence games. I'll post a notice on this forum when the year-by-year SGSSs are available.
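For what it's worth, stripping embedded comments from PGN movetext before conversion is mostly a matter of removing brace comments, (possibly nested) parenthesized variations, and numeric annotation glyphs. A minimal sketch of that one step (the function name is mine for illustration, not part of the actual parser):

```python
import re

def strip_pgn_annotations(movetext):
    # Remove brace comments like {best by test}
    movetext = re.sub(r"\{[^}]*\}", " ", movetext)
    # Remove parenthesized variations, innermost first, until none remain
    prev = None
    while prev != movetext:
        prev = movetext
        movetext = re.sub(r"\([^()]*\)", " ", movetext)
    # Remove numeric annotation glyphs like $1, $14
    movetext = re.sub(r"\$\d+", " ", movetext)
    return re.sub(r"\s+", " ", movetext).strip()

cleaned = strip_pgn_annotations(
    "1. e4 {best by test} e5 (1... c5 {the Sicilian}) 2. Nf3 $1 Nc6")
# cleaned == "1. e4 e5 2. Nf3 Nc6"
```

Once the movetext is clean, splitting it into fields for a *.csv row is straightforward.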

I am not sure what you mean about combining *.pgn files so that you can use them with Arena. The only time that I use *.pgn files with Arena is when I am running engine-vs.-engine tournaments, when I specify the initial moves of the game in *.pgn format and tell Arena to start the tournament from that position. But maybe there are other ways to use *.pgn files in Arena that I don't know about. Arena has many capabilities that I am not familiar with!

Premium Chessgames Member
  alan517: Hi Ayler, thank you for your advice and comments. I am using the Big Database from Chessbase with the Fritz GUI and Stockfish 5. I like it because the search has many options. I am also an old-timer; I learned the game in the 60s & 70s with Al Horowitz's books and a game collection. I was an active USCF player until 1987, when my wife asked that I give up my chess books, which included the ECOs and Informants. Now I am playing on I am preparing to play in a USCF tournament in October, it should be exciting! Also, I have a question about chess engines. What makes engine A better than engine B? I know each engine has different programs. Just wondering what makes the engines different. Thanks again for your help!
Premium Chessgames Member
  AylerKupp: <alan517> I am glad that you are planning on playing in a USCF tournament in October. I have also thought about entering a tournament but I haven't played in so many years that I am afraid that I would make too many silly mistakes. I enjoy the analyzing I do at home with a much more relaxed "time control" so I have also thought about entering a correspondence chess tournament, but I somehow find myself too busy to do either.

As far as why one chess engine is stronger than another, I think that it mainly has to do with two things: its evaluation function and its search function. Its evaluation function must be accurate, giving a correct evaluation of all different types of positions, but it must also be fast. If it takes too much time to calculate an evaluation, no matter how accurate it is, then the engine will not be able to search as deeply as its competition in the same amount of time and will therefore be at a disadvantage in competitive games. So, like so many other things in life (like a wife and chess books :-) ), it is important to have a good balance between accuracy and speed.

An engine's search function must also be efficient and "intelligent". It must recognize which branches of the search tree are not likely to contain good moves and throw them away (prune them) so that the positions in those branches do not need to be calculated. The more branches of the search tree that are <properly> thrown away, the deeper the engine can search in the same amount of time. But it is also important to have a good balance between pruning too aggressively (good moves might be missed) and too conservatively (the engine will not be able to search as deeply in a reasonable amount of time). I have read that most of the progress in chess engines in recent years has come from improvements in their search functions, since computer hardware advances by themselves will not allow much increase in an engine's search depth in a given amount of time; the number of positions to calculate grows too fast the deeper you search.
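The pruning idea described above is, in its simplest <sound> form, alpha-beta pruning: once one reply refutes a move, the remaining replies to that move need not be examined at all. A toy sketch (hand-built complete tree, made-up leaf evaluations) that counts how many nodes are actually visited; a plain minimax of the same depth-3 tree would visit all 15 nodes:

```python
# Build a complete binary game tree of the given depth; each node is
# (static_eval, {move: child}) and leaves take values from an iterator.
def build(depth, leaves):
    if depth == 0:
        return (next(leaves), {})
    return (0.0, {f"m{i}": build(depth - 1, leaves) for i in range(2)})

def alphabeta(node, depth, alpha, beta, maximizing, visited):
    visited[0] += 1
    evaluation, children = node
    if depth == 0 or not children:
        return evaluation
    if maximizing:
        value = float("-inf")
        for child in children.values():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent already has a better option: prune
                break
        return value
    else:
        value = float("inf")
        for child in children.values():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, visited))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = build(3, iter([3, 5, 6, 9, 1, 2, 0, -1]))
visited = [0]
best = alphabeta(tree, 3, float("-inf"), float("inf"), True, visited)
print(best, visited[0])   # same result as full minimax, fewer nodes visited
```

This kind of pruning is lossless; the aggressive-but-risky pruning the engines add on top of it (which can miss good moves) is what needs the careful balancing described above.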

There are also many other reasons why one engine might be better than another, for example, the overall quality of the programming, but I think that those are the main two.

But we need to be careful when we decide that engine A is better than engine B. We get that impression when engines compete in engine vs. engine tournaments; if engine A beats engine B most of the time then we consider engine A to be the stronger. But these tournaments are usually played at fast time controls, 40 moves in 4 minutes or 40 moves in 40 minutes. Very rarely are tournaments played at classical time controls like 40 moves in 120 minutes, because they simply take too long. And just because engine A is better than engine B at fast time controls does not necessarily mean that it will be better than engine B at slower time controls like the type we use in this game, although we assume that it will be. But we could be wrong.

Another thing that makes a difference between playing games and analyzing positions is that we rely on the engine's <absolute> evaluation (the actual number that it calculates for a position) in order to decide which move is better. But for an engine to play well it is only necessary that its <relative> evaluation of two moves is correct, so that it can pick the better move. For an engine to select move X over move Y it is only necessary that move X's evaluation be better than move Y's; their evaluations could be 10.1 vs. 10.0, 1.1 vs. 1.0, or 0.11 vs. 0.10 and the result will be the same: the engine will pick move X. But if we are making decisions about a particular move, it makes a big difference to us whether the engine's evaluation is 10.1, 1.1, or 0.1! In practice I don't think this is too big a problem, but sometimes you will see one engine's evaluations running higher than another engine's. That is why I do multi-engine analysis and average the engines' evaluations, to try to remove any biases in an individual engine's evaluations.
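The averaging step can be sketched in a few lines. The engine names and numbers below are invented for illustration; this shows the idea, not the actual ATSS/AMEC implementation:

```python
# Evaluations in pawns from White's point of view, per candidate move
# and per engine (all values made up for the example).
evals = {
    "Nxd5": {"Stockfish": 0.35, "Komodo": 0.15, "Houdini": 0.60},
    "Qe7":  {"Stockfish": 0.30, "Komodo": 0.25, "Houdini": 0.20},
}

def average_eval(move):
    scores = list(evals[move].values())
    return sum(scores) / len(scores)

# Rank candidate moves by their averaged evaluation, highest first,
# to damp any single engine's systematic optimism or pessimism.
ranked = sorted(evals, key=average_eval, reverse=True)
```

A plain average still lets one wildly biased engine pull the result; weighting engines, or dropping the high and low values, are obvious refinements.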

Well, enough about engines. Good luck in your USCF tournament in October. And, when your wife asked that you give up your chess books, I hope that you "negotiated" some good things in return. :-)

Premium Chessgames Member
  AylerKupp: <<iatelier> I have certain doubts about tablebases. There are a few, and I doubt they are identical.> and responses to other tablebase questions (part 1 of 2)

No, they are by no means identical. Each set of tablebases is different: Nalimov, Gaviota, Syzygy, Lomonosov, Scorpio, etc. And it's probably more appropriate to speak of a <set> of tablebases, since each of the above consists of multiple files, one or more for each type of position they include (e.g. KRP vs. KR, KQN vs. KQ, etc.), and each of these files is really a tablebase. But the nomenclature has stuck.

And each tablebase file is compressed in order to reduce storage space, and the compression schemes differ in efficiency. Even the same set of tablebases (e.g. Gaviota) may have been built with different compression schemes as the scheme was enhanced over time, although by now only compression scheme 4 (cp4) is likely to be used for Gaviota tablebases.

Finally, tablebases contain different information, or metrics. Nalimov, Gaviota, and Lomonosov tablebases contain Distance (or Depth) to Mate (DTM) information, so they can be used to find the shortest number of moves to mate (or a draw) and what those moves are. That's why they require so much storage. And they often do not take into account the 50-move draw rule. The sets also differ in how many positions they cover, at least in total: Gaviota tablebases are currently limited to 5 pieces, Nalimov to 6 pieces, and Lomonosov to 7 pieces. Some Gaviota 6-piece tablebases and some Nalimov 7-piece tablebases have been generated, but not all.

Syzygy tablebases don't have DTM information, only Distance (or Depth) to Zero (DTZ) which tell you the number of moves required to mate or draw but not what these moves are. That's one reason (besides the compression scheme) why they are much smaller than the Nalimov and Gaviota tablebases.

Scorpio tablebases are of a type called bitbases. They contain win, draw, and lose information but not the number of moves required to achieve that result, so they are the most compact of all.

To give you an idea of how the information contained in the tablebases affects their storage requirements, here is the amount of space needed to store the 5-piece information for each of the sets I listed above:

Nalimov 7.05 GB
Gaviota 6.94 GB
Syzygy 0.94 GB
Scorpio 0.21 GB
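For what it's worth, all of these sets are generated by retrograde analysis: start from the terminal positions and work backward, labeling each position win/loss (and, for DTM-style tables, the distance). The same idea fits in a few lines for a trivial stand-in game rather than chess (players alternately take 1 or 2 stones; taking the last stone wins). "DTM" here is the analogue of distance to mate; a WDL-style bitbase would keep only the win/loss label and discard the distance, which is why bitbases like Scorpio's are so compact:

```python
N = 20
table = {0: ("loss", 0)}   # side to move with no stones left has lost

for n in range(1, N + 1):
    # Results of the positions reachable from n (take 1 or 2 stones)
    children = [table[n - take] for take in (1, 2) if n - take >= 0]
    loss_dtms = [dtm for res, dtm in children if res == "loss"]
    if loss_dtms:
        # We can hand the opponent a lost position: win, by the fastest route
        table[n] = ("win", 1 + min(loss_dtms))
    else:
        # Every move leaves the opponent winning: loss, so resist the longest
        table[n] = ("loss", 1 + max(dtm for _, dtm in children))
```

Real chess tablebase generators work the same way in principle, just over astronomically larger position spaces, which is why storage and indexing tricks dominate their design.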

Premium Chessgames Member
  AylerKupp: <<iatelier> I have certain doubts about tablebases. There are a few, and I doubt they are identical.> and responses to other tablebase questions (part 2 of 2)

I'm not sure what you mean by "I doubt they are with all stored position". But if you are asking whether they cover, say, all possible 5-, 6-, or 7-piece positions, the answer is no. They typically don't include 4+1, 5+1, or 6+1 positions since any engine should be able to figure out how to mate with KQRN vs. K or even KBBNN vs. K without the use of tablebases.

And I don't think that any of these tablebases use an engine "on the fly" to generate the results, for the simple reason that this would be too slow. For example, if you enter a 6-piece position on an on-line Nalimov tablebase site, you get the results back almost instantaneously. If they were generated "on the fly" it would probably take hours to calculate, depending on the position.

Yes, there are programs such as FinalGen and Freezer that you can use to generate your own tablebases, but they have their limitations. I am most familiar with FinalGen, which can calculate the results of many endgames consisting of kings, one piece for each side, and (I believe) up to 8 pawns each. But some positions that meet these criteria are too complicated for it to solve, and others require so much time (thousands of hours) and disk space (hundreds of TB) that it is not practical to use FinalGen to determine their result.

To answer the questions on your second post:

1. No, the Nalimov tablebases are not the same as the Lomonosov tablebases. First of all, no complete set of 7-piece Nalimov tablebases exists to my knowledge, and its size has been estimated at 70 to 200 TB of disk space (the 6-piece tablebases require about 1.2 TB). The Lomonosov tablebases reportedly require more than 140 TB of storage.

2. Yes, the US has more powerful computers than Russia. The US has the 2nd most powerful computer as of Jun-2014 (China has the fastest), and the fastest computer in Russia is ranked #42. But it is not enough to have the fastest computer; one must be willing to use it for this purpose. If you are asking why it hasn't been used for this purpose in the US, all I can think of is that getting time on this type of computer must be very, very expensive, and chess in the US is not nearly as popular as chess in Russia, so there is likely to be much less interest. Oh, where are Fischer and the Cold War when we need them? :-)

Oct-06-14  tbentley: <Syzygy tablebases don't have DTM information, only Distance (or Depth) to Zero (DTZ) which tell you the number of moves required to mate or draw but not what these moves are. That's one reason (besides the compression scheme) why they are much smaller than the Nalimov and Gaviota tablebases.>

They tell you the number of moves to the zeroing move (pawn move or capture), and they do tell you the move. (To be precise, the dtz files tell you the distance to the zeroing move, and hence which move to play, and only need to be accessed when a tablebase position is actually reached; the wdl files, which store only win/draw/loss, are the ones used during the search.)

Premium Chessgames Member
  AylerKupp: <tbentley> Thanks for the correction. It's hard (for me) to determine exactly what information these tablebases contain by looking at the available on-line information.
Premium Chessgames Member
  AylerKupp: <AgentRgent> I don't know when Kasparov started looking at the World Team's analysis. The earliest mention of it in the book was in the discussion of 11.Nd5 where Kasparov casually mentions "After I played Nd5 and saw that all the analysts recommended 11...Qxe4, I was worried." And "I sat down in my hotel room to study the position on my own. I looked at the World's latest recommendations, and I read Irina Krush's commentary." followed by a discussion of her analysis.

Just prior to that, in his discussion of 10.Nde2, he indicated that "So far, everything was going according to plan. The game was still following the path of past experience, so little thought had been required up to this moment." It was during this discussion that he mentions that "10.Nde2 was the last move I made from Moscow. I was sitting in my study having just sent it down the line when it suddenly struck me: 'What about Qe6?'" He says that he started going through variations in his head and concluded by saying "It was a moment not of fear, but of apprehension. Then I reasoned, 'Fine, if it happens, I will figure it out'."

So it's not unreasonable to suspect that he started looking at the World Team's analysis after 10.Nde2. He might have looked at it earlier but, if he did, he probably didn't pay much attention to it.

The following is an interesting commentary during his discussion of 11.Nd5 which may have applicability to this game if GMARK plays 23...Nxd5: "I considered the variation 11...Qxd4 12.Nc7+ Kd7 13.Nxa8 Qxc4 14.Nc3 Rxa8 15.Re1 Kc7 16.h3 Rc8 17.Be3 Kb8 18.Rc1, and White is better because Black's king is still a bit exposed. The material balance of a knight and two pawns against a rook is potentially favorable for Black, but it is difficult to restore coordination, so White has the better position." Of course, that assessment refers to a middlegame position with many more pieces remaining on the board.

Later, during the discussion about 12...Kd7, Kasparov has this to say in response to a comment by Boris Alterman indicating that Khalifman was involved in the game: "I had not been looking at the website in great detail so I simply hadn't appreciated the number of people on the World Team working against me, and in the case of Khalifman and his friends in St. Petersburg, the quality and depth of the analysis. Anyway, although the game hadn't gone exactly as I had wanted it to, I didn't sense any real danger."

And this is an "interesting" perspective from the discussion of 15.Nc3: "Many people on the bulletin boards said <during the game> (emphasis mine) that I had an unfair advantage because I could see the World's analysis. That was absolutely correct. That gave me an advantage; I wouldn't describe it as unfair, though; it balanced the struggle. There were three of us with three computers versus thousands of them with hundreds of computers, so just the number of positions they could analyze was immense. There was a chance that I could suddenly find myself in dire difficulties and it would simply be too late. There would be no blunder, no favor returned. From this moment on I realized we would have to work day and night to avoid defeat."

So it seems that if Kasparov is correct, at least some members of the World Team were aware that he had access to their analysis. Kasparov's rationalization is also interesting for emphasizing that he had a team of 2 additional people working with him (night and day!) and so perhaps the proper title for his book should have been "Kasparov and a few friends against the World". But I don't think that would sell as well. :-)

In fairness the World Team had the assistance of several grandmasters like Speelman, Khalifman, and Bacrot. And Krush would eventually (recently) become a grandmaster. Kasparov had this to say about it: "My point was that this game had long ceased to be an event where ordinary players could have their say. Everyone was following the suggestions from Irina and her group. It had turned into a tough professional game, and I did not like the pretense that it was otherwise." Do you agree with that assessment?

At any rate the game was lost for Black after 55.Qxb4 per the 6-piece Nalimov tablebases which indicate that White mates in 82 moves, and the longest non-pawn-move sequence is 39 moves. And at the position prior to the final controversy resulting from Krush's late recommendation submittal after 58.g6 White has a mate in 79 moves with the longest non-pawn-move sequence also 39 moves.

Premium Chessgames Member
  AylerKupp: OK, here are the tablebase wins for those 2 positions in Kasparov vs The World, 1999. For some reason the site lists almost all of the moves, but not all of them.

After 55.Qxb4:


Mate in 82 moves:

1...Qf3+ 2.Kg7 d5 3.Qd4+ Kb1 4.g6 Qf5 5.Kh6 Qe6 6.Qg1+ Kc2 7.Qf2+ Kb1 8.Qd4 Ka2 9.Kg5 Qe7+ 10.Qf6 Qe3+ 11.Qf4 Qg1+ 12.Kf6 Qb6+ 13.Kf7 Qb7+ 14.Ke6 Qc8+ 15.Kf6 Qd8+ 16.Kf5 Qc8+ 17.Kg5 Qc3 18.Qh2+ Ka1 19.Qe2 Kb1 20.Qf2 Qc1+ 21.Kg4 Qc3 22.Qf1+ Kc2 23.Kf5 Qc7 24.Qe2+ Kb1 25.Qd3+ Ka2 26.Qa6+ Kb3 27.Qe6 Ka2 28.Qf7 Qc2+ 29.Ke6 Qe2+ 30.Kxd5 Ka3 31.Qa7+ Kb3 32.Qb6+ Ka3 33.Qd6+ Ka4 34.Qd7+ Ka3 35.g7 Qd1+ 36.Kc6 Qa4+ 37.Kc7 Qa7+ 38.Kd8 Qb8+ 39.Ke7 Qe5+ 40.Kf7 Qf4+ 41.Kg6 Qg3+ 42.Kf6 Qh4+ 43.Ke5 Qg5+ 44.Kd6 Qf4+ 45.Kd5 Qf3+ 46.Kc5 Qc3+ 47.Kb6 Qe3+ 48.Kb7 Qe4+ 49.Kb8 Qe5+ 50.Kc8 Qc5+ 51.Kd8 Qa5+ 52.Ke8 Qh5+ 53.Kf8 Qf3+ 54.Ke7 Qe4+ 55.Qe6 Qh4+ 56.Qf6 Qe4+ 57.Kf8 Qa8+ 58.Kf7 Qd5+ 59.Qe6 Qh5+ 60.Kf8 Qf3+ 61.Ke7 Qb7+ 62.Kf6 Qf3+ 63.Kg5 Qg2+ 64.Qg4 Qd5+ 65.Kh4 Qd8+ 66.Kg3 Qg8 67.Kh3 Qh7+ 68.Kg2 Qg8 69.Kg1 Ka2 70.Qg3 Kb1 71.Qg2 Kc1 72.Qf1+ Kd2 73.Qf8 Qe6 74.g8Q Qe1+ 75.Qf1 Qe3+ 76.Qf2+ Qxf2+

From 1...Qf3+ to 4.g6: 4 moves
From 4.g6 to 35.g7: 31 moves
From 35.g7 to 74.g8Q: 39 moves

After 58.g6:


Mate in 79 moves:

1...Qf5 2.Kh6 Qe6 3.Qg1+ Kc2 4.Qf2+ Kb1 5.Qd4 Ka2 6.Kg5 Qe7+ 7.Qf6 Qe3+ 8.Qf4 Qg1+ 9.Kf6 Qb6+ 10.Kf7 Qb7+ 11.Ke6 Qc8+ 12.Kf6 Qd8+ 13.Kf5 Qc8+ 14.Kg5 Qc3 15.Qh2+ Ka1 16.Qe2 Kb1 17.Qf2 Qc1+ 18.Kg4 Qc3 19.Qf1+ Kc2 20.Kf5 Qc7 21.Qe2+ Kb1 22.Qd3+ Ka2 23.Qa6+ Kb3 24.Qe6 Ka2 25.Qf7 Qc2+ 26.Ke6 Qe2+ 27.Kxd5 Ka3 28.Qa7+ Kb3 29.Qb6+ Ka3 30.Qd6+ Ka4 31.Qd7+ Ka3 32.g7 Qd1+ 33.Kc6 Qa4+ 34.Kc7 Qa7+ 35.Kd8 Qb8+ 36.Ke7 Qe5+ 37.Kf7 Qf4+ 38.Kg6 Qg3+ 39.Kf6 Qh4+ 40.Ke5 Qg5+ 41.Kd6 Qf4+ 42.Kd5 Qf3+ 43.Kc5 Qc3+ 44.Kb6 Qe3+ 45.Kb7 Qe4+ 46.Kb8 Qe5+ 47.Kc8 Qc5+ 48.Kd8 Qa5+ 49.Ke8 Qh5+ 50.Kf8 Qf3+ 51.Ke7 Qe4+ 52.Qe6 Qh4+ 53.Qf6 Qe4+ 54.Kf8 Qa8+ 55.Kf7 Qd5+ 56.Qe6 Qh5+ 57.Kf8 Qf3+ 58.Ke7 Qb7+ 59.Kf6 Qf3+ 60.Kg5 Qg2+ 61.Qg4 Qd5+ 62.Kh4 Qd8+ 63.Kg3 Qg8 64.Kh3 Qh7+ 65.Kg2 Qg8 66.Kg1 Ka2 67.Qg3 Kb1 68.Qg2 Kc1 69.Qf1+ Kd2 70.Qf8 Qe6 71.g8Q Qe1+ 72.Qf1 Qe3+ 73.Qf2+ Qxf2+

From 1...Qf5 to 32.g7: 31 moves
From 32.g7 to 71.g8Q: 39 moves
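The move-counting above can be automated: a "zeroing" move for the 50-move rule is any capture or pawn move, and in SAN a pawn move is any move whose first character is a file letter. A small sketch that finds the longest run of half-moves between zeroing moves (so a full-move count like the 39 above corresponds to roughly twice that many half-moves):

```python
def longest_nonzeroing_run(sans):
    # sans: a list of moves in SAN order, e.g. ["Qf3+", "Kg7", "d5", ...]
    # A capture contains "x"; a pawn move (including a promotion like
    # "g8Q") starts with a file letter a-h.
    run = best = 0
    for san in sans:
        if "x" in san or san[0] in "abcdefgh":
            best = max(best, run)   # zeroing move ends the current run
            run = 0
        else:
            run += 1
    return max(best, run)

# On the opening of the first line above, the two runs between the
# pawn pushes d5 and g6 are each 2 half-moves long:
example = longest_nonzeroing_run(["Qf3+", "Kg7", "d5", "Qd4+", "Kb1", "g6"])
```

Note that castling ("O-O") is correctly treated as non-zeroing, since "O" is not a file letter.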

Oct-07-14  DPLeo: <AylerKupp>, this is an interesting read but has not changed my opinion that GK could not have achieved the tablebase win position without reading our analysis. I'm actually more convinced than before because now I know that he was reading, using, and benefiting from our analysis much earlier than I realized!

His excuse of justifying the cheating because it turned into a "tough professional" game borders on ridiculous. The game was never billed as "ordinary players" against Kasparov and seconds. It was supposed to be Kasparov against The World.

Whatever it takes to help him sleep at night I guess. I know I'll sleep much better knowing it took him nearly 40 moves of reading our analysis along with the help of seconds to compete with us.


Premium Chessgames Member
  AgentRgent: <"I was sitting in my study having just sent it down the line when it suddenly struck me: ' What about Qe6?'"> Revisionist aggrandizement! And almost certainly absolute bollocks based upon other comments he made at the time. His choice of words at the time (as I recall) was: "I congratulate The World on playing such a bold <and surprising> move."

<"I wouldn't describe it as unfair though; it balanced the struggle."> Justification for what he must have known was unseemly behavior.

<"Everyone was following the suggestions from Irina and her group."> The point he absurdly fails to comprehend is that "her group" consisted of "his opponents". Irina and the vast numbers of contributors WERE the World, whom else should "we" have listened to? Space Aliens?

<"At any rate the game was lost for Black after 55.Qxb4 per the 6-piece Nalimov tablebases"> 51...Ka1 was our draw. After the voting fraud, any opportunity to find further moves that might have drawn was lost as the vast majority of the analysts had quit the game in protest. Had we known at the time that Kasparov had been reading our analysis, we almost certainly would have done so much earlier!

My opinion of Garry is that he is a spineless and dishonorable man who is deserving of nothing but scorn for his behavior. Despite my distaste for the current Russian political situation, I cannot find it in myself to wish Kasparov well in his political endeavors, the failure of which I fully attribute to Karma being a female hound!

FWIW I spoke with Irina about the event just last year. While she was extremely gracious, my impression was that she was disappointed (to say the least) with the way it all turned out.

Oct-07-14  iatelier: <AylerKupp> Kasparov seems to be known for defending his actions and turning everything upside down. Irrespective of it being 3 against 3000, that was the point: Kasparov vs. the World (which included other professionals), not Kasparov vs. the Amateurs of the World. And he was wrong to peek into the analysis; imagine Morphy, in a consultation game, listening in on his opponents' deliberations. Morphy's pride would never have allowed it.

Thank you so much for such an elaborate reply about the tablebases. Would it be correct to say that for a certain small percentage of positions they would "disagree", since all those 'stored' endgame lines were computed at some initial stage, and we see very often in the World games that engines disagree?

Premium Chessgames Member
  AgentRgent: But I'm not bitter... ;)
Oct-08-14  DPLeo: < AgentRgent: But I'm not bitter... ;) >

Me either.     :-)

FWIW your comments seem more accurate than bitter to me. Besides, what's there to be bitter about now that we know he couldn't beat us without cheating. I don't even consider it a loss anymore!

Premium Chessgames Member
  AgentRgent: <DPLeo: I don't even consider it a loss anymore!> Indeed.. I consider the game Drawn as well.
Premium Chessgames Member
  AylerKupp: <AgentRgent: But I'm not bitter ... ;)>

Of course not. Besides, look at all the fun you had and the opportunity to make history. But I hope that I never hear from you when you <are> bitter! :-)

Premium Chessgames Member
  AylerKupp: <iatelier> I did give you some bad information about what the Syzygy tablebases contain; see <tbentley>'s comment at AylerKupp chessforum. Sorry about that. From reading the available on-line information it's not always clear to me what data the different tablebases contain.

With respect to tablebases disagreeing, tablebase generation works the opposite way from engine analysis. Engines start from the position you give them and search <forward>. Along the way they evaluate all the candidate positions according to their evaluation function, which differs from engine to engine. They each also prune their search tree differently, Stockfish probably being the most aggressive, so they don't necessarily evaluate the same positions. Finally, chess engines, particularly multi-core engines, are non-deterministic: if you were to run an analysis with the same engine starting from the same position at different times you would likely get different results. This is apparently because of interference from higher-priority operating system processes which, by interrupting executing processes or threads, affect the order in which nodes are evaluated. And search functions are very sensitive to move ordering, so if the move ordering is different then different branches of the search tree will be pruned according to each engine's pruning algorithm.
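
The move-ordering point can be illustrated with a toy alpha-beta search (a generic sketch in Python, not any engine's actual code): the same tree searched with its children in two different orders returns the same value but evaluates a different number of leaf nodes.

```python
import math

def alphabeta(node, alpha, beta, maximizing, stats):
    """Plain alpha-beta over a nested-list game tree; counts leaf evaluations."""
    if not isinstance(node, list):      # leaf: a "static evaluation"
        stats[0] += 1
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, stats))
            alpha = max(alpha, value)
            if alpha >= beta:           # beta cutoff: remaining children pruned
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, stats))
        beta = min(beta, value)
        if alpha >= beta:               # alpha cutoff
            break
    return value

good_order = [[3, 2], [1, 0]]   # best root move searched first
bad_order  = [[1, 0], [3, 2]]   # same tree, worst root move first

s1, s2 = [0], [0]
v1 = alphabeta(good_order, -math.inf, math.inf, True, s1)
v2 = alphabeta(bad_order,  -math.inf, math.inf, True, s2)
# Same minimax value either way, but the good ordering evaluates
# fewer leaves (3 vs. 4) because the cutoff triggers earlier.
```

On a real search tree millions of nodes deep, that small per-node difference compounds, which is why a reordered search can land on a different principal variation.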

Tablebase generators, in contrast, work <backwards> in what is called retrograde analysis. They start with K vs. K positions and (presumably!) indicate that all of these are drawn. :-) Then they add a pawn in each possible location for each possible K vs. K position and record the result (any promotion results in a win for the side with the pawn); these become the KPvK tablebases. Then they add a pawn to the other side and repeat the process; these become the KPvKP tablebases, etc., until all the possible variations have been considered up to the number of pieces for which the tablebases are generated.
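
The backward-propagation idea can be sketched on a toy game instead of chess (a subtraction game, so the whole "board" is one integer); the game and its parameters are purely illustrative assumptions, but the loop structure is the same one tablebase generators use: start from the terminal positions and label predecessors, marking a position lost only once every one of its moves is known to reach a won position for the opponent.

```python
from collections import deque

# Toy retrograde analysis: subtraction game where a move removes 1-3
# stones and the player who cannot move (n == 0) loses.
MAX_N, MOVES = 30, (1, 2, 3)
succ = {n: [n - m for m in MOVES if n - m >= 0] for n in range(MAX_N + 1)}
pred = {n: [] for n in range(MAX_N + 1)}
for n, children in succ.items():
    for c in children:
        pred[c].append(n)

result = {0: "loss"}                 # terminal: side to move has no move
unresolved = {n: len(succ[n]) for n in range(MAX_N + 1)}
queue = deque([0])
while queue:
    n = queue.popleft()
    for p in pred[n]:
        if p in result:
            continue
        if result[n] == "loss":      # p can move into a position lost for the opponent
            result[p] = "win"
            queue.append(p)
        else:
            unresolved[p] -= 1       # one more of p's moves known to lose
            if unresolved[p] == 0:   # every move loses: p itself is lost
                result[p] = "loss"
                queue.append(p)
# Known theory for this game: multiples of 4 are lost for the side to move.
```

Note there is no evaluation function and no pruning anywhere in the loop, which is exactly why differently authored tablebases agree where engines do not.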

So all the different tablebases for a given number of pieces consider all the positions reachable with that number of pieces. There is no search tree pruning, no move ordering considerations, and no evaluation function differences. So, subject to the absence of programming errors, they are all correct, subject only to the different information they contain (DTM, DTZ, etc.) and their individual compression schemes.

Premium Chessgames Member
  AylerKupp: <Interpretation of Stockfish 5 reported evaluations>

Stockfish 5, like all other engines, reports its evaluation of the last node for each line it displays, according to the value specified in the MPV UCI parameter. This evaluation is one of the following:

a. The value reported by its evaluation function, in the range [-99.99] to [+99.99], with negative numbers indicating that Black has the advantage and positive numbers that White has the advantage (provided the proper specification has been made to the chess GUI).

b. The number of moves to mate, preceded by either "M" or "#" depending on the GUI, and "+" or "-" depending on whether White or Black is delivering mate. This mate condition is detected by the normal search-plus-evaluation process and is not derived from tablebase information.

c. A "special number" less than [-100.00] or greater than [+100.00].

Stockfish 5 uses a value of 10000 internally to represent known win positions and refers to this value as VALUE_KNOWN_WIN. These known win conditions are:

a. KX vs. K where X = "plenty of material"; e.g. KQ vs. K and KR vs. K, provided that it isn't a stalemate position. Once it finds such a position it will return an evaluation of [100.00].

b. KBN vs. K. This is similar to KX vs. K except that the two kings must be in close proximity to each other and the defending king must be driven to a corner in order for the attacking side to win. Stockfish calculates evaluation bonuses for both of these conditions, and the bonuses get larger the better the conditions are satisfied. The values of these bonuses get added to VALUE_KNOWN_WIN, so the evaluation reported by Stockfish will increase with each search ply until the mating condition is achieved.

c. KP vs. K. These endgames are evaluated with the help of an internal bitbase to determine whether the positions are a win for the stronger side or a draw. Like KBN vs. K, bonuses are added to VALUE_KNOWN_WIN: a material bonus for having an extra pawn (PawnValueEg = 258 for Stockfish 5) and a positional bonus which increases the further the pawn(s) are advanced.

Stockfish 5 uses a value of 32000 internally and refers to this as VALUE_MATE. As far as I can tell Stockfish does not adjust VALUE_MATE in any way. But it does scale the evaluation of mate conditions reported to the GUI to conform to the UCI reporting specification (evaluations reported in equivalent pawns) and formats it for "human readability" (the number of moves to mate preceded by "#").
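
As a purely hypothetical sketch of how such a score-to-display mapping might look (VALUE_MATE = 32000 is from the text above; MAX_PLY, the ply encoding of mate scores, and the centipawn scale are assumptions for illustration, not Stockfish's literal code):

```python
VALUE_MATE = 32000   # internal mate score, per the text above
MAX_PLY = 120        # assumed bound on search depth in plies (illustrative)
PAWN = 100           # assumed internal units per pawn (centipawns)

def score_to_gui(v):
    """Turn an internal integer score into a GUI-style display string."""
    if abs(v) >= VALUE_MATE - MAX_PLY:
        # Mate scores are assumed encoded as VALUE_MATE minus the ply
        # distance to mate; convert plies to full moves for display.
        moves = (VALUE_MATE - abs(v) + 1) // 2
        return "#{}".format(moves if v > 0 else -moves)
    # Ordinary evaluations are reported in pawn units, signed.
    return "{:+.2f}".format(v / PAWN)
```

So a score of 31995 would display as "#3" while 150 would display as "+1.50", matching the two reporting styles described above.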

Premium Chessgames Member
  ketchuplover: Go Anand!!!!!!!!!!!
Nov-22-14  eddazeitz: Since I never checked my user profile I was quite surprised by your post Nov-19-14 in "The World vs Naiditsch". But you probably have a point with mentioning the change of my user name from edda zeitz to eddazeitz. Searching my memory I remember that sometime (after buying a new PC) I couldn't enter the kibitzing area and was requested to register anew. There I probably changed the spelling.

(By the way, registration seems to be tied to the browser you use. Out of curiosity I opened Opera (I normally use Firefox) and was not allowed to enter the game without registering anew.)

Premium Chessgames Member
  AylerKupp: <eddazeitz> (or <edda zeitz>, which one do you prefer?) I had a similar problem when I switched from Internet Explorer to Chrome. I suspect the reason is that our settings (user name, password, preferences, etc.) are kept on our computers as cookies, and that each web browser maintains its own set of cookies. I imported my IE links and cookies into Chrome when I started using the latter but this is a manual operation and they are not sync'd. And, since I still prefer IE over Chrome, I use the former more often and so they quickly get out of sync.

An even more puzzling situation happens when I'm running a chess engine that uses all the cores in my computer. I have it set up so that my chess engines run at low priority, so I can run any other programs without the chess engine interfering. At least most of the time. But sometimes when I try to access <> IE apparently can't find my user name and password and requires me to explicitly log in, even though this doesn't happen when the engines are not running.

Computers: you can't live with them and you can't live without them.

And, BTW, I apologize again for thinking that you had not participated in earlier games.

Premium Chessgames Member
  truefriends: Dutch Top OTB players VS Dutch Top CC players:

Already finished:

Still ongoing:

Dec-20-14  isemeria: Hi <AK>,

A few days ago you wrote about <relative> and <absolute> evaluations on the Naiditsch game page. It led me to think about it a little.

I understand that calibrated absolute evaluations help when we compare results from different engines. But here's the catch: isn't the evaluation function the essential thing that makes the difference in engines' playing strength? Compare it to humans of different skill levels: I look at a position and think White is slightly better, but Carlsen thinks White is winning.

For example, two engines evaluate some position:
- engine A: +0.75
- engine B: +1.07
The way we tend to think of this is as a scaling problem, but perhaps it is a real strength difference. One of the evaluations is more correct than the other.

But then neither is actually correct, because in addition to the evaluations mentioned, there's an even more absolute evaluation; let's call it the <real> evaluation. It has only 3 different values: win, draw, loss. (I know you know this, and it has more philosophical than practical merit for playing strength.)

For example, in a position where White can capture a knight or a queen, the <real> evaluation for both captures is the same, other things being equal. But of course it's better to take the queen, because it makes the win easier. Nevertheless, after either capture White would be winning just as much.

Because of the existence of the <real> evaluation, I don't know if it is possible to somehow define the correctness of an <absolute> evaluation. It's just an approximation for ordering positions which are not solvable. You mentioned the relation between evaluation and winning probability. That would be useful, of course.
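
On the evaluation-to-winning-probability relation mentioned above: a commonly used model passes the pawn-unit evaluation through a logistic curve. The steepness k below is an arbitrary illustrative value, not a fitted one; engines and rating models fit their own constants.

```python
import math

def expected_score(eval_pawns, k=0.7):
    """Map a pawn-unit evaluation to an expected score in [0, 1].

    k is an assumed steepness; a real mapping would be fitted to game
    outcomes rather than chosen by hand.
    """
    return 1.0 / (1.0 + math.exp(-k * eval_pawns))
```

An evaluation of 0.00 maps to an expected score of 0.5, the curve is symmetric around it, and it saturates toward 1.0 as the advantage grows, which matches the intuition that going from +5 to +6 matters far less than going from +0.5 to +1.5.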

Dec-20-14  isemeria: I'm not familiar with evaluation functions. But let's assume there are two engines with the following evaluation functions.

Engine 1: E = a + b + c + d
where a is king safety, b is material, c is central control, and d is space.

Engine 2: E = a + b + c + e
where a, b, c are the same as in Engine 1, but e is a backward pawn on an open file.

Is it possible to calibrate the numeric evaluation between engines when the functions have different terms in them?

Premium Chessgames Member
  AylerKupp: <isemeria> Thanks for taking the time to think about and comment on some of my thoughts. I have some thoughts in return.

I will first say that the evaluation function is not necessarily the essential thing that makes the difference in engines' playing strength. Sure, it's an important factor but other things like search depth are also important. For example, if engine A has a very accurate and detailed evaluation function it will likely be time consuming to calculate, and engine A could only reach a certain search depth in a given amount of time. Engine B might have a simpler and less accurate evaluation function, just an approximation to the <real evaluation>, but reasonably close. As a result, engine B's evaluations take less time to calculate, so engine B can reach deeper search depths in the same amount of time as engine A. Will engine B's deeper search compensate for engine A's more accurate evaluations? Hard to tell without the details, but this has apparently been Stockfish's approach for quite a while, so it seems like a reasonable strategy.

And there are other factors as well. The quality of an engine's search heuristics is (IMO), the most important component of an engine's playing strength. If engine A with its superior evaluation function were combined with the best search heuristics then it would do the best job in eliminating non-productive lines and many unnecessary node evaluations, thus compensating for the additional time required by its more complex (but more accurate) evaluation function. So the engine with the best heuristics would be able to reach the deepest search depths in the same amount of time, even with an evaluation function that is time-consuming to calculate. After all, the fastest evaluation function is the one that does not need to be invoked!

I would then say that I think your concept of <real evaluation> is too restrictive. I think that there is a <real evaluation> while a game is in progress or an analysis is being done (e.g. "White stands better"); we just don't know what it is or how to quantitatively express it. So I view the evaluations by engines, whether <relative> or <absolute>, but particularly the <absolute> evaluations, as approximations of the <real> evaluation. So you're right, neither <relative> nor <absolute> evaluations are correct; they both have errors in them, namely the difference between their values and the <real> evaluation. The problem, of course, is that the value of the <real> evaluation is unknown.

This led me to a comparison of the difference between the actual value of a physical quantity and measured values of that quantity. All measurement equipment has errors, and measurement noise also needs to be taken into account. There are techniques (e.g. Kalman filters) for combining the results of various measurements to get a better approximation to the actual value of that physical quantity (e.g. a missile's position in space), but these require certain characteristics of the measured quantities like Gaussian-distributed noise that I don't think are applicable in chess position evaluations. And a sequence of chess moves is certainly not stochastic (random) or combinatorial but sequential. I have been doing research in more general filters and estimators to see if I can fit the problem of chess position evaluation into them (or vice versa) but so far without success.
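
The simplest member of that measurement-combination family can be shown without any chess context: inverse-variance weighting of two independent, unbiased measurements of the same quantity, the static one-step case that a Kalman filter generalizes to sequences. This is a generic statistics sketch, not a claim about any engine.

```python
def combine(x1, var1, x2, var2):
    """Fuse two independent, unbiased measurements by inverse-variance
    weighting; returns the fused estimate and its (smaller) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    return x, 1.0 / (w1 + w2)
```

Combining 1.0 and 3.0, each with variance 1.0, gives 2.0 with variance 0.5: the fused estimate is always at least as certain as the better of the two inputs. The catch for chess, as noted above, is that this derivation assumes known, independent, Gaussian-like error distributions, which engine evaluations do not obviously satisfy.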

As for your last question, <Is it possible to calibrate the numeric evaluation between engines, when the functions have different terms in them?>, my opinion at the moment is that I don't think so, particularly since we don't know what the terms are in commercial engines! I would mention that not only are the terms themselves unknown but also their relative importance (weights). Two evaluation functions could have exactly the same terms but different weights associated with them, giving different results. And it's a dynamic problem; I would think that, in general, the accuracy of the evaluation (or at least our <confidence> in the accuracy of the evaluation) increases with the search depth. So that's yet another factor to consider.
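
A toy experiment (hypothetical feature values, Python standard library only) points toward the same conclusion: if two evaluation functions share terms a, b, c but differ in a statistically independent fourth term, the best linear calibration still leaves a large residual disagreement.

```python
import random

# Hypothetical features per "position": engine 1 scores a+b+c+d (d = space),
# engine 2 scores a+b+c+e (e = backward pawn), with d and e independent.
random.seed(1)
samples = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(200)]
e1 = [a + b + c + d for a, b, c, d, e in samples]
e2 = [a + b + c + e for a, b, c, d, e in samples]

# Best linear calibration e2 ~ alpha*e1 + beta (ordinary least squares).
n = len(e1)
mx, my = sum(e1) / n, sum(e2) / n
alpha = (sum((x - mx) * (y - my) for x, y in zip(e1, e2))
         / sum((x - mx) ** 2 for x in e1))
beta = my - alpha * mx
rmse = (sum((y - (alpha * x + beta)) ** 2
            for x, y in zip(e1, e2)) / n) ** 0.5
# rmse stays far from zero: the non-shared terms cannot be calibrated away
# by any linear rescaling, only the shared a+b+c part lines up.
```

The regression recovers a sensible slope from the shared terms, but the residual error stays on the order of the differing terms themselves, which is the quantitative version of "I don't think so" above.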

Copyright 2001-2014, Chessgames Services LLC