
Member since Dec-31-08 · Last seen Dec-11-18
About Me (in case you care):

Old timer from the Fischer, Reshevsky, Spassky, Petrosian, etc. era. Active while in high school and early college, but not much since. Never rated above the low 1800s and highly erratic; I would occasionally beat much higher rated players and equally often lose to much lower rated players. Highly entertaining combinatorial style; everybody liked to play me since they were never sure what I was going to do (neither was I!). When facing a stronger player, many try to even the chances by steering towards simple positions so as to be able to see what is going on. My philosophy in those situations was to try to even the chances by complicating the game to the extent that neither I nor the stronger player would be able to see what was going on! Alas, this approach no longer works in the computer age. And, needless to say, my favorite all-time player is Tal.

I also have a computer background and have been following with interest the developments in computer chess since the days when computers couldn't always recognize illegal moves and a patzer like me could beat them with ease. Now it's me that can't always recognize illegal moves, and any chess program can beat me with ease.

But after about 4 years (a lifetime in computer-related activities) of playing computer-assisted chess, I think I have learned a thing or two about the subject. I have conceitedly defined "AylerKupp's corollary to Murphy's Law" (AKC2ML) as follows:

"If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N."

I'm also a food and wine enthusiast. Some of my favorites are German wines (along with French, Italian, US, New Zealand, Australian, Argentine, Spanish, ... well, you probably get the idea). One of my early favorites was the wine from the Ayler Kupp vineyard in the Saar region, hence my user name. Here is a link to a picture of the village of Ayl with a portion of the Kupp vineyard on the left:

You can send me an e-mail whenever you'd like to aylerkupp

And check out a picture of me with my "partner", Rybka (Aylerkupp / Rybka) from the Masters - Machines Invitational (2011). No, I won't tell you which one is me.


Analysis Tree Spreadsheet (ATSS).

The ATSS is a spreadsheet developed to track the analyses posted by team members in various on-line games (XXXX vs. The World, Team White vs. Team Black, etc.). It is a poor man's database which provides some tools to help organize and find analyses.

I'm in the process of developing a series of tutorials on how to use it and related information. The tutorials are spread all over this forum, so here's a list of the tutorials developed to date and links to them:

Overview: AylerKupp chessforum (kibitz #843)

Minimax algorithm: AylerKupp chessforum (kibitz #861)

Principal Variation: AylerKupp chessforum (kibitz #862)

Finding desired moves: AylerKupp chessforum (kibitz #863)

Average Move Evaluation Calculator (AMEC): AylerKupp chessforum (kibitz #876)


ATSS Analysis Viewer

I added a capability to the Analysis Tree Spreadsheet (ATSS) to display each analysis in PGN-viewer style. You can read a brief summary of its capabilities here AylerKupp chessforum (kibitz #1044) and download a beta version for evaluation.


Ratings Inflation

I have recently become interested in the increase in top player ratings since the mid-1980s and whether this represents a true increase in player strength (and if so, why) or if it is simply a consequence of a larger chess population from which ratings are derived. So I've opened up my forum for discussions on this subject.

I have updated the list that I initially completed in Mar-2013 with the FIDE rating lists through 2017 (published in Jan-2018), and you can download the complete data file. It is quite large (~182 MB), and to open it you will need Excel 2007 or a later version (or a compatible spreadsheet), since several of the later tabs contain more than 65,536 rows.

The spreadsheet also contains several charts and summary information. If you are only interested in those and not the actual rating lists, you can download a much smaller (~868 KB) spreadsheet containing just the charts and summary information. You can open this file with a pre-2007 version of Excel or a compatible spreadsheet.

FWIW, after looking at the data I think that ratings inflation, which I define to be the unwarranted increase in ratings not necessarily accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis and it goes something like this:

1. Initially (late 1960s and 1970s) the ratings for the strongest players were fairly constant.

2. In the 1980s the number of rated players began to increase exponentially, and they entered the FIDE-rated chess playing population mostly at the lower rating levels. Also, starting in 1992, FIDE began to periodically lower the rating floor (the lowest rating for which players would be rated by FIDE) from 2200 to the current 1000 in 2012. This resulted in an even greater increase in the number of rated players. And the ratings of those newly-rated players may have been higher than they should have been, given that they were calculated using a high K-factor.

3. The ratings of the stronger of these players increased as a result of playing these weaker players, but their ratings were not sufficiently high to play in tournaments, other than open tournaments, where they would meet middle and high rated players.

4. Eventually they did. The ratings of the middle rated players then increased as a result of beating the lower rated players, and the ratings of the lower rated players then leveled out and even started to decline. You can see this effect in the 'Inflation Charts' tab, "Rating Inflation: Nth Player" chart, for the 1500th to 5000th rated player.

5. Once the middle rated players increased their ratings sufficiently, they began to meet the strongest players. And the cycle repeated itself. The ratings of the middle players began to level out and might now be ready to start a decrease. You can see this effect in the same chart for the 100th to 1000th rated player.

6. The ratings of the strongest players, long stable, began to increase as a result of beating the middle rated players. And, because they are at the top of the food chain, their ratings, at least initially, continued to climb. I think that they will eventually level out, and may have already done so except possibly for the very highest rated players (those among the top 50). But if this hypothesis is true there is no force to drive their ratings down, so they will now stay relatively constant, like the pre-1986 10th rated player and the pre-1981 50th rated player. When this leveling out will take place, if it does, and at what level, I have no idea. But a look at the 2017 ratings data indicates that, indeed, it has already started, maybe even among the top 10 rated players.
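The K-factor effect mentioned in point 2 can be illustrated with the standard Elo update rule. This is only a sketch: the ratings and K values below are made-up examples, not FIDE data.

```python
# Sketch of the standard Elo update rule, showing how a high (provisional)
# K-factor moves a newly rated player's rating much faster than a low one.
# The ratings and K values are illustrative assumptions, not FIDE data.

def expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating, opponent_rating, score, k):
    """New rating after one game (score: 1 = win, 0.5 = draw, 0 = loss)."""
    return rating + k * (score - expected_score(rating, opponent_rating))

# A newly rated 1400 player beats a 1600 player:
new_high_k = elo_update(1400, 1600, 1.0, k=40)  # provisional, high K
new_low_k = elo_update(1400, 1600, 1.0, k=10)   # established, low K
print(round(new_high_k, 1), round(new_low_k, 1))  # → 1430.4 1407.6
```

The same game moves the high-K player four times as far, which is why a pool of newly rated, high-K players can inject rating points faster than an established pool would.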

You can see in the chart that the rating increase, leveling off, and decline first starts with the lowest ranking players, then through the middle ranking players, and finally affects the top ranked players. It's not precise, it's not 100% consistent, but it certainly seems evident. And the process takes decades so it's not easy to see unless you look at all the years and many ranked levels.

Of course, this is just a hypothesis and the chart may look very different 20 years from now. But, at least on the surface, it doesn't sound unreasonable to me.

But looking at the data through 2017 it is even more evident that the era of ratings inflation appears to be over, unless FIDE once more lowers the rating floor and a flood of new and unrated players enters the rating pool. The previous year's trends have either continued or accelerated; the rating for every ranking category, except for possibly the 10th ranked player (a possible trend is unclear), has either flattened out or has started to decline as evidenced by the trendlines.


Chess Engine Non-Determinism

I've discussed chess engine non-determinism many times. If you run an analysis of a position multiple times, with the same engine, the same computer, and to the same search depth, you will get different results. Not MAY, WILL. Guaranteed. Similar results were reported by others.

I had a chance to run a slightly more rigorous test and described the results starting here: US Championship (2017) (kibitz #633). I had 3 different engines (Houdini 4, Komodo 10, and Stockfish 8) analyze the position in W So vs Onischuk, 2017 after 13...Bxd4, a highly complex tactical position. I made 12 runs with each engine; 3 each with threads=1, 2, 3, and 4 on my 32-bit 4-core computer with 4 GB RAM and MultiPV=3. The results were consistent for each engine:

(a) With threads=1 (using a single core) the results of all 3 engines were deterministic. Each engine selected the same top 3 moves on every run, with the same evaluations and, obviously, the same move rankings.

(b) With threads=2, 3, and 4 (using 2, 3, and 4 cores) none of the engines showed deterministic behavior. Each engine occasionally selected different moves from run to run, with different evaluations and different move rankings.

I've read that the technical reason for the non-deterministic behavior is the high sensitivity to move ordering of the alpha-beta search that all the top engines use, combined with the variation in move ordering under multi-threaded operation as each thread gets interrupted by higher-priority system processes. I have not had the chance to verify this, but there is no disputing the results.
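The sensitivity of alpha-beta to move ordering is easy to demonstrate on a toy game tree (this is a bare-bones illustration, not how any real engine is implemented): the search value is identical no matter the order, but the amount of work is not.

```python
# Minimal alpha-beta on a toy tree: same minimax value regardless of move
# order, but the node count (work done) depends heavily on move ordering.

def alphabeta(node, alpha, beta, maximizing, counter):
    counter[0] += 1                       # count every node visited
    if isinstance(node, int):             # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:             # beta cutoff: prune remaining moves
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, counter))
            beta = min(beta, value)
            if alpha >= beta:             # alpha cutoff
                break
        return value

good_order = [[9, 8], [4, 1], [3, 2]]     # best move searched first
bad_order = [[3, 2], [4, 1], [9, 8]]      # best move searched last

for tree in (good_order, bad_order):
    count = [0]
    value = alphabeta(tree, float("-inf"), float("inf"), True, count)
    print(value, count[0])                # same value, different node counts
```

When multiple threads share a search, the order in which moves get explored varies from run to run, so the tree that actually gets searched (and occasionally the move that surfaces first at equal evaluation) varies too.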

What's the big deal? Well, if the same engine gives different results each time it runs, how can you determine what the real "best" move is? Never mind that different engines of relatively equal strength (as determined by their ratings) give different evaluations and move rankings for their top 3 moves, and that the evaluations may differ as a function of the search depth.

Since I believe in the need to run analyses of a given position using more than one engine and then aggregating the results to try to reach a more accurate assessment of the position, I used to run sequential analyses of the same position using 4 threads and a hash table = 1,024 MB. But since I typically run 3 engines, I found it more efficient to run the analyses using all 3 engines concurrently, each with a single thread and a hash table = 256 MB (to prevent swapping to disk). Yes, running with a single thread runs at about 1/2 the speed of running with 4 threads, but running the 3 engines sequentially requires 3X the time, while running them concurrently requires only 2X the time. That's a 33% reduction in the time needed to run all 3 analyses to the same depth, and it resolves the non-determinism issue as well.

So, if you typically run analyses of the same position with 3 engines, consider running them concurrently with threads=1 rather than sequentially with threads=4. You'll get deterministic results in less total time.


Any comments, suggestions, criticisms, etc. are both welcomed and encouraged.

------------------- Full Member

   AylerKupp has kibitzed 11848 times to chessgames
   Dec-07-18 Carlsen - Caruana World Championship Match (2018) (replies)
AylerKupp: <<Tiggler> Sure, if what you are trying to do is calculate the TPR, but that was not the context.> I'm surprised that you don't realize that the appropriateness of the use of the Normal distribution is unrelated to the context. Whether you are calculating ratings, ...
   Dec-04-18 Tiggler chessforum (replies)
   Dec-04-18 AylerKupp chessforum (replies)
AylerKupp: <<Tiggler> Read your own post above. Perhaps that will refresh your memory.> See what I mean? I obviously (well, maybe not obviously) meant to say SQRT(2) = 10/7 = 1.428... and instead I visualized 7/10 = 0.7. Again, no excuse, just an irrelevant explanation.
   Dec-04-18 Windhorst vs K Shoup, 1985
AylerKupp: <<scutigera> ... I have heard others use it more often than "almost never". > Well, hardly ever.
   Nov-29-18 Caruana vs Carlsen, 2018 (replies)
AylerKupp: <<Marmot PFL> Nakamura could not understand why 7 Bg5 was never tried in any game.> I'm surprised. 7.Bg5 is by far a more common move than 7.Nd5, more than 10X as often played ( Opening Explorer ), and almost all of Carlsen's recent top-level opponents have played ...
   Nov-28-18 Carlsen vs Caruana, 2018 (replies)
AylerKupp: <<Pedro Fernandez> Say $54,000 tax + $36,000 Hotel = $90,000, is a lot of money! I don't know whether U.S. Chess Federation also request a piece of this cake.> Well, he won 450,000 euros ~ US $ 512,000 at today's exchange rate. So US $ 512,000 - $ US 90,000 = US $ ...
   Nov-28-18 Carlsen vs Caruana, 2018 (replies)
AylerKupp: FWIW, it was a theoretical tablebase win for White (mate in 47 moves) after 41.Rxh5.
   Nov-26-18 Caruana vs Carlsen, 2018 (replies)
AylerKupp: <<DPLeo>Heh heh, that may be a little insulting to some 7-year olds.> Perhaps. To all those 7-year olds out there that can whip me (and, sadly, I'm sure that there are many of them) I apologize!, I Apologize!!, I APOLOGIZE!!! Please don't beat me up. <I think ...
(replies) indicates a reply to the comment.

De Gustibus Non Disputandum Est

Kibitzer's Corner
Premium Chessgames Member
  AylerKupp: <2018 Candidates Tournament Simulation> (part 4 of 4)

<3. Determine the winner of one tournament by simulation>

For each player vs. player, White/Black combination, generate a random number (RN) between 0 and 1. Score each game as follows:

a. White player wins if RN <= Win Range(2)

b. White player draws if Draw Range(1) < RN <= Draw Range(2)

c. White player loses if RN > Loss Range(1) (i.e. otherwise)

<Example> For Mamedyarov vs. Karjakin, Mamedyarov playing White:

a. A win for Mamedyarov if the value is <= to 0.303503

b. A draw for Mamedyarov if the value is > 0.303503 and <= 0.874932

c. A loss for Mamedyarov if the value is > 0.874932

And for Mamedyarov vs. Karjakin, Karjakin playing White:

a. A win for Karjakin if the value is <= 0.174310

b. A draw for Karjakin if the value is > 0.174310 and <= 0.428571

c. A loss for Karjakin if the value is > 0.428571

The winner of the tournament, of course, is the player with the highest score. If two or more players have the same score, consider that each player won the tournament.

<4. Determine the tournament win probabilities>

Run a simulation of as many tournaments as desired, or as required to establish statistical significance. Determine for each player his p(Tournament Win) as the ratio between the number of tournament wins by that player and the total number of tournament wins by all players. Note that the total number of tournament wins by all players will likely be greater than the number of simulations run, because more than one player might tie for first place in any simulated tournament.

Hopefully all this makes sense to you.
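The game-scoring rule in step 3 can be sketched in a few lines of Python (a hypothetical rendering; the implementation described in this thread is an Excel/VBA macro). The thresholds are the Mamedyarov-as-White example values from the post.

```python
import random

# Sketch of step 3's scoring rule: one random number per game, scored
# against the win/draw thresholds for the White player.

def simulate_game(win_thresh, draw_thresh, rng):
    """Score one game for White: 1 for a win, 0.5 for a draw, 0 for a loss."""
    rn = rng.random()
    if rn <= win_thresh:
        return 1.0
    if rn <= draw_thresh:
        return 0.5
    return 0.0

def average_score(n, win_thresh, draw_thresh, seed=1):
    """White's average score over n simulated games (seeded for repeatability)."""
    rng = random.Random(seed)
    return sum(simulate_game(win_thresh, draw_thresh, rng) for _ in range(n)) / n

# Mamedyarov as White vs Karjakin: expected score is
# 0.303503 + (0.874932 - 0.303503) / 2, i.e. about 0.589.
print(round(average_score(100_000, 0.303503, 0.874932), 3))
```

A full simulation would loop this over every player/player, White/Black pairing, total the scores into a crosstable, and tally tournament wins per step 4.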

Premium Chessgames Member
  AylerKupp: <Lambda> I don't know how you arrived at your assumptions of a draw percentage = 4/7 ~ 57.14% or your assumed White advantage = 35 Elo points. But in case you're interested here's some data that validates your assumptions using the ChessTempo database.

The ChessTempo database currently contains over 1.7 million chess games of all types (classic time control, blitz, rapid, etc.). One thing that makes it useful is that you can easily filter it to consider only games where both players were rated 2200+, 2300+, ..., 2700+. The latter is particularly useful for determining % Win, % Draw, and % Loss for the White player in a super strong tournament like the 2018 Candidates.

The last time I looked at the database in detail was in May-2017. At that time it had 14,502 games of all types where both players were rated 2700+. I filtered the database to exclude games played in events that had Blitz, Rapid, Exhib(ition), Blind(fold), and Simul(taneous) in their title, and the remaining 8,879 games I assumed to have been played at Classic time controls. Not perfect, but probably close.

Of these 8,879 games all were played since 2000 so they are probably all relevant. White won 26.23% of the games, lost 15.73% of the games, and 58.04% of the games were drawn. Clearly the 58.04% is very close to your 57.14% assumption for a draw percentage.

The percentage of White wins plus the percentage of Black wins was 41.96%. Calculating the White advantage as White Win % - 1/2 * (White win % + Black win %) = 26.23% - 20.98% = 5.25% gives a White p(Win) of 0.5525. This corresponds to a rating differential of +37 Elo rating points. Again, very close to your assumed +35 Elo rating points.

Just thought that you might be interested to know.
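The score-to-Elo conversion used in that calculation follows the standard logistic Elo model. A quick sketch, using the 26.23% win / 58.04% draw figures quoted above (counting a draw as half a point):

```python
import math

# Convert an expected score p into the corresponding Elo rating differential,
# using the standard logistic Elo model.

def elo_diff(p):
    """Rating advantage corresponding to expected score p (0 < p < 1)."""
    return -400.0 * math.log10(1.0 / p - 1.0)

# White's expected score: 26.23% wins plus half of 58.04% draws = 0.5525
p_white = 0.2623 + 0.5804 / 2
print(round(elo_diff(p_white)))  # → 37
```

An expected score of exactly 0.5 maps to a 0-point differential, and the +35 Elo assumption corresponds to an expected score of about 0.55, so the two figures are indeed consistent.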

Premium Chessgames Member
  Lambda: My simulation tool is written in Python, and it takes slightly over a minute to run a million simulations. (Less time once the tournament starts and some of the game results are already determined.)

I haven't attempted to define "statistical validity", but the results from a million trials don't tend to change from run to run by more than 0.1%.
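That run-to-run stability is what the binomial standard error of a Monte Carlo proportion predicts. A quick sanity check (the 20% tournament-win probability below is an assumed, illustrative value):

```python
import math

# Standard error of a proportion estimated from n independent simulations:
# sqrt(p * (1 - p) / n).  With a million trials, even a mid-range win
# probability is pinned down to a few hundredths of a percent.

def standard_error(p, n):
    """Standard error of a binomial proportion p estimated from n trials."""
    return math.sqrt(p * (1.0 - p) / n)

# For an assumed win probability of 20% and a million trials:
print(round(standard_error(0.20, 1_000_000), 5))  # → 0.0004, i.e. 0.04%
```

So run-to-run changes of under 0.1% from a million trials are exactly what one would expect.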

I have no insights about what a good way to use Excel to do this because I've never used Excel in my life, and indeed I've never willingly used any tool from any "office suite" in my life. Markup languages for text formatting, and programming languages for data processing is my attitude.

But other than what you need to do to work around the limitations of your tool, that sounds about right to me. I haven't checked your details, but in approach, the only obvious differences I have are that:

My "draw area" is the first four sevenths of my random number, so I can generate the random number, immediately check whether it's a draw, and if it is, I don't have to do any further calculations for that game, for efficiency, and

At the end, I check for ties and try applying all the tie-breaks to my cross table to get the one winner, and if they're all tied too, call a tie a ninth result.

Premium Chessgames Member
  AylerKupp: <Lambda> Thanks for responding. I use Excel mainly for entering the parameters (players, ratings, draw %, etc.) so that they are easily visible and changeable. The bulk of the work I do using Visual Basic for Applications (VBA), mainly because I'm familiar with it, and I start the simulation by invoking a macro. I've been trying to learn Python off and on for years because I find it elegant, but I've never gotten it to work on my computer for unknown reasons. Then again, I haven't tried very hard.

I'm encouraged that not only do you think (at least at first glance) that my concept is reasonable (or, at least, not unreasonable) but also that it takes only a little over a minute to run a million simulations. Since both Python and VBA are interpreted, VBA should be able to run in the same ballpark, although I'm sure that Python is much more efficient. After all, VBA is a Microsoft product.

The reason I mentioned statistical significance testing was that, if it took 1 million simulations a long time to run, then the number of simulations probably could have been reduced to save time. But, if it only takes slightly over a minute (or two, or five), then it's not an issue.

Yes, for efficiency, I was going to check for a draw first since that's the statistically most likely result; it's just second nature to me after many years of writing real-time software. But since that was not relevant to the concept, I didn't bother to mention it. At any rate, given the small amount of time that it takes to run 1 million simulations, this type of efficiency is probably not significant.

I had not considered using the tie-breaks to get the one winner; thanks for the tip. It makes sense. But, reviewing the tie-break rules for this tournament, after the first 3 tie-breakers the players need to first play 2 rapid games, then up to 4 blitz games, and then a sudden death game. To be more "accurate", I would think that additional simulations would have to be run using the players' rapid and blitz ratings, and I doubt that there are any ratings for sudden death games. So that's more work. Then again, the likelihood that there would be a tie after using Sonneborn-Berger is minuscule, so bothering to implement rapid, blitz, and sudden death tie-break simulations is probably not worth the effort.

FWIW, I like the tie-break sequence in the Candidates and I wish more tournaments would use it. It encourages trying for wins just in case and it's based primarily on the results obtained by the players in the actual tournament based on games at classic time controls. So all the arguments for and against using rapid and blitz time controls to determine the results of a tournament played at classic time controls can be avoided for the most part.

Mar-31-18  qqdos: <Dear AK> Would you like to take a quick look at the invitation at Bobby's flawed Gem vs Geller [B89]? Kind regards.
May-11-18  yskid: I've just posted on "Naiditch game" site ;
12.a3 line played in the correspondence championship game
May-18-18  djvanscoy: <AylerKupp> "It made me think of a recent book I was reading about linear algebra where they were characterizing sparse matrices as to whether they had block regions, regions of the matrix which had a lot of non-zero elements in a few localized and adjacent rows and columns but typically only non-zero elements in the rest of the rows and columns."

I'm guessing you meant to say, "...typically only zero entries in the rest of the rows and columns"? In other words, some block is dense but the rest of the matrix is sparse?

"But I have no doubt that if the top players from other eras; the Capablancas, Alekhines, Fischers, Spasskys, etc. were somehow transported into the current time and given adequate time and exposure to current chess analysis tools that they would be able to hold their own against today's best players."

I agree with you, and indeed I couldn't help but think that Carlsen's rook-and-pawn endgame blunder on move 54 of his game against Caruana in the first round of the 2018 GRENKE tournament (Caruana vs Carlsen, 2018) would not have been made by Capablanca. But maybe in this case I'm afflicted with a bit of hero-worship.

Premium Chessgames Member
  AylerKupp: <<FSR> You're right - 13-12! I can't imagine that there are many tournaments, at whatever time control, where Black wins more often than White.>

Thanks for the link to your fine article. It's good to see that others recognize that the percentage of draws increases as the rating of the players increases. This is not surprising given that it's generally (though not unanimously) accepted that in order for one player to win a game the other player must make at least one mistake or a series of inaccuracies. So, since the higher rated the player the better he generally is, it's not surprising that the higher rated the players, the lower the likelihood that one of them will make a mistake. Hence, the greater the percentage of draws.

One good way to see this is to look at the database. In addition to listing the win/lose/draw result percentages for all the games in its database, it gives you the ability to filter the games according to the rating of the 2 players; 2200+ (both players rated higher than 2200), 2300+ (both players rated higher than 2300), etc.

So here is the current (today's) database snapshot from White's perspective:

Rating    # Games     Win %   Draw %   Loss %
All       3,459,235   38.4%   31.4%    30.2%
2200+     1,712,350   35.1%   39.5%    25.5%
2300+     1,176,981   33.5%   43.0%    23.4%
2400+       692,046   31.9%   46.8%    21.3%
2500+       266,553   30.0%   50.9%    19.1%
2600+        73,269   29.4%   51.9%    18.6%
2700+        16,510   28.7%   52.2%    19.1%

Clearly the percentage of draws increases as the ratings of the players increase. The percentages, however, are somewhat "contaminated" since the database includes games at classic, rapid, and blitz time controls as well as blindfold, exhibition, etc. And it's not easy to filter out the various categories other than by looking at the names of the events, and those are not always sufficiently descriptive.

Sep-05-18  SChesshevsky: I've been watching TCEC season 13 and have a couple of questions that only someone very knowledgeable about computer chess can probably answer:

When a time limit for their games decreases, say 90 min down to 30 min, are the programs adjusted to save time? Do they try to keep the same depth but maybe limit the number of variations or maybe keep the number of variations the same but not look as deep?

Who and why decided that some computer games/divisions go with 90 minutes plus? Do computers really need that much time to produce their 3000 elo play? It's just about unbearable to try to watch those 90 min games and it seems a waste if 60 min or less produces relatively same game.

Thanks for any info.

Premium Chessgames Member
  AylerKupp: <<SChesshevsky> I've been watching TCEC season 13 and have a couple of questions that only someone very knowledgeable about computer chess can probably answer>

OK, until you find such a person I'll give it a try.

1. <When a time limit for their games decreases, say 90 min down to 30 min, are the programs adjusted to save time? >

Time management is an important feature in chess engines and differs between engines. But, yes, the time that the engine uses to determine what move to play varies from move to move depending on many factors. I would think that prior to starting the analysis of its move the engine will calculate the average time that it should take for its analysis taking into account the time remaining before the time control and the number of moves that it has to make. Then it can extend or shorten the time it spends calculating its next move depending on the circumstances.

For example, if an engine finds itself in check and it has only one legal move, then it <may> recognize that there is no point in taking the time to search deeply, and it can respond immediately. It would then have more time for subsequent moves.

There might be other factors that the engine considers. If the engine is keeping track of the complexity of the position, however it estimates that, then it might take longer (or less time) to calculate its next move. And if the engine is calculating a series of forcing moves when its "time to move" timer goes off, it will probably extend its time limit until it reaches a quiescent position. Likewise, if it's probing a tablebase it will probably wait until the result of the position is determined.

Also, if Ponder=ON then the engine can use its opponent's thinking/calculating time to determine what its opponent's best replies will be. If the opponent's next move is one of those replies then, assuming the engine has calculated sufficiently deeply to have reasonable confidence in its evaluation, it already knows its best reply and will make its next move fairly quickly. If the opponent's move is not one of its analyzed replies, then it has to begin the analysis from scratch and it will take longer to move.

From personal experience I found that Rybka had a tendency to get into time trouble. I was watching a game between Rybka and Houdini where Rybka took a long time in its early moves. Then, as the time control approached, it spent less time per move and could only reach lower and lower search depths as time went on. As a result the quality of its play decreased dramatically and, from an equal position, Houdini began to outplay it and won the game.

And engines do change the number of variations they check according to the move number. When a game first starts, the number of possible variations is relatively small and the engine looks at all possible moves. Soon the number of possible moves grows exponentially, so the engine starts to prune its search tree to limit the number of variations it will examine. The amount of search tree pruning varies from engine to engine, and some engines even allow you to specify the amount of pruning that you are willing to tolerate.
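The per-move time budgeting described above might be sketched as follows. This is a hypothetical illustration only; the function name, factors, and caps are my assumptions, not any particular engine's actual heuristic.

```python
# Rough sketch of per-move time budgeting: start from an even split of the
# remaining clock, then adjust for special cases (forced reply, position
# complexity) while never committing too much of the clock to one move.
# All names and factors here are illustrative assumptions.

def time_budget_ms(remaining_ms, moves_to_go, in_check_one_reply=False,
                   complexity=1.0):
    """Rough time to spend on the next move, in milliseconds."""
    if in_check_one_reply:
        return 0                    # only one legal move: reply immediately
    base = remaining_ms / max(moves_to_go, 1)
    # Spend more time on complex positions, less on simple ones, but never
    # commit more than a quarter of the remaining clock to a single move.
    return min(base * complexity, remaining_ms * 0.25)

# 30 minutes left, 40 moves to the time control:
print(time_budget_ms(30 * 60 * 1000, 40))                  # → 45000.0 ms
print(time_budget_ms(30 * 60 * 1000, 40, complexity=1.5))  # → 67500.0 ms
```

A real engine layers many more heuristics on top of this (extending through forcing sequences until quiescence, ponder hits, increment handling), but the core idea is the same: an average allocation per move, stretched or shrunk by circumstances.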

2. <Who and why decided that some computer games/divisions go with 90 minutes plus?>

I would assume that those who set up the engine vs. engine tournament determine the time controls, and that these depend to some extent on the number of players. In a knockout-type tournament like TCEC, with a large number of players in the early rounds, faster time controls would likely be used to reduce the time required to play each round; with a smaller number of players in the later rounds, slower time controls can be used and the round still finished in a reasonable amount of time.

3. <Do computers really need that much time to produce their 3000 elo play?>

A good question to which I don't have an answer. Some researchers have indicated that an engine's playing strength increases as a function of its search depth, but the increase is not linear; eventually the engine reaches the point of diminishing returns, where additional search depth does not substantially increase its playing strength. And, of course, the point at which diminishing returns set in varies from engine to engine, as well as with the complexity of the position.

Premium Chessgames Member
  diceman: If you look at the Fischer/Spassky 1972 page, it appears someone with a better computer has determined Game 1 is drawn after Bxh2. Fischer's losing mistake was at move 39.

I seem to remember you thinking it was a win???

I tried to go back and look at your posts but found it difficult to follow.

He's also done much work on many of the other games. Showing even the drawn games to be double edged and exciting.

Very interesting.

Premium Chessgames Member
  diceman: Oh yeah, he also found the link
to Fischer/Spassky clock times.

I believe he posted it in the Game 13 thread.

Premium Chessgames Member
  AylerKupp: <<diceman> I seem to remember you thinking it was a win??> - Yes

<I tried to go back and look at your posts but found it difficult to follow> - Yes

Premium Chessgames Member
  Tiggler: AylerKupp: I have a question that I'd like to address to you because you understand, as well as anyone, engine chess and also ratings.

We all know that engines are stronger than humans, when playing with the same time control. Is there a time control at which they are equal, if we allow the human more time than the engine? That would mean that the engine would have to have "ponder" off, so that it cannot use the human player's time.

Concerning ratings, engine ratings are higher than human ratings by about 300-400 points. Some (a few) of us know that that difference is meaningless, because the ratings are based on an entirely separate pool of games and players. Just as Carlsen's 2939 blitz rating does not mean that his chess improves by 100 points when he plays fast. So here is the question: what time control is used when engine ratings are established?

I suspect that engines play worse chess in their world championships, because of fast time controls, than humans play in theirs, at classical time controls.

Evidence: humans draw more often in their championships than engines do in theirs. Basis: decided games are evidence of weak play.

Premium Chessgames Member
  diceman: <Tiggler:

Is there a time control at which they are equal>

If you have a very fast time control for the computer, you handicap its ability to calculate, making it an easier opponent.

"Equal" would depend on the human players strength, since there isn't a set human level.

<Basis: decided games are evidence of weak play.>

First, it would depend on whether the humans are actually playing to win. They can select drawish openings and play drawish variations. They get to choose when they go for a win. They get to adjust play based on their position and standing.

Computers only play their best move,
and only their best move. They lose
because they can only see so far.
(their horizon)
As the game moves forward, they see
their evaluation slip, but it's too late.

There is also the efficiency of the software. With the same computer, and the same "thinking" time, different programs will reach different search depths.

I'm not an expert, but I'm sure if I'm wrong
Ayler will yell at me. :)
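The "horizon" diceman describes can be sketched with a toy depth-limited search. The game tree below is invented purely for illustration; each node holds a static evaluation from the side to move, plus its child positions:

```python
# Toy depth-limited negamax. Each node is (static_eval, children),
# with evaluations from the perspective of the side to move.
def negamax(node, depth):
    value, children = node
    if depth == 0 or not children:
        return value  # horizon reached (or game over): trust the static eval
    return max(-negamax(child, depth - 1) for child in children)

# The root looks fine statically (+0.5) and the opponent's reply looks
# harmless (-0.5 for them) -- but two plies deeper we are lost (-9.0).
trap = (0.5, [(-0.5, [(-9.0, [])])])

print(negamax(trap, 1))  # -> 0.5   (refutation lies beyond the horizon)
print(negamax(trap, 2))  # -> -9.0  (one ply deeper, the evaluation slips)
```

This is also AKC2ML in miniature: the killer move shows up at depth N+1.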

Premium Chessgames Member
  Tiggler: <diceman>: <If you have a very fast time control for the computer, you handicap its ability to calculate, making it an easier opponent.>

The same, of course, is true for a human. I had assumed that this was so obvious that to say so would be superfluous.

<Computers only play their best move, and only their best move.>

Computers also have a knob to twiddle (I forget what it is called), to adjust their level of aggressiveness. Oh, I remembered, it's called "contempt".
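As a rough illustration (not any specific engine's actual implementation), a contempt setting typically shifts the score the engine assigns to a drawn line, so that with positive contempt it prefers playing on over taking the draw:

```python
# Hypothetical sketch of a contempt term: the engine scores a draw as
# -contempt centipawns for itself, so positive contempt makes draws
# look bad and the engine plays more aggressively to avoid them.
def draw_score(contempt_cp, engine_to_move):
    return -contempt_cp if engine_to_move else contempt_cp

# With contempt = 20 cp, agreeing to a draw "costs" the engine 0.20 pawns:
print(draw_score(20, True))   # -> -20
print(draw_score(20, False))  # -> 20
```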

Premium Chessgames Member
  diceman: <Tiggler: <diceman>: <If you have a very fast time control for the computer, you handicap its ability to calculate, making it an easier opponent.>

The same, of course, is true for a human.>

Are you talking about faster speeds for both?

I was thinking of just handicapping the computer side, the human would get to play "normal" chess.
We're basically trying to blind it so it
becomes an easier opponent.

<Computers also have a knob to twiddle>

I wonder how much testing they actually do on those things?

Premium Chessgames Member
  Tiggler: <AylerKupp> I think maybe I owe you an apology for some irritable and disrespectful posts on the WC forum. You are a respected and usually trusted poster, without doubt. But, I wish to say, with that comes a responsibility to be right, and it upsets me when you post erroneous info.

I hope we're all good.

Premium Chessgames Member
  Tiggler: Read your own post above:

AylerKupp chessforum

Perhaps that will refresh your memory.

Premium Chessgames Member
  Tiggler: My post of 3-23-2016, to which you refer, is here:

Tiggler chessforum

Premium Chessgames Member
  Tiggler: Solution of the 4-game rapid match problem is given in a new post here:

Tiggler chessforum

Premium Chessgames Member
  AylerKupp: <<Tiggler> So here is the question: what time control is used when engine ratings are established?>

It depends. Unlike FIDE for human chess, there is no single organization that establishes engine ratings. So the ratings depend on the organization conducting the engine vs. engine tournaments and the hardware that it is using. These organizations also conduct engine tournaments at different time controls so, like the different ratings for humans in Rapid and Blitz, each engine has different ratings at different time controls. And the ratings will also depend on the engine and GUI settings.

The two sites I prefer are CCRL and CEGT, primarily because (1) they seem to have the most engines in their tournaments, (2) they are updated at least monthly, and (3) they provide ratings for the same engine in 4-core and 1-core configurations. But there are several others such as IPON, which sets Ponder=ON (the engine waiting for its opponent to move can use the waiting time to analyze its opponent's most likely responses).

CCRL conducts its tournaments at 40/4 and 40/40 time controls and CEGT conducts its tournaments at 40/4, 40/20, and 40/120 time controls plus 5 min/3 sec increments, so their ratings are not directly comparable. And they use different hardware (and sometimes different testers in the same site use different hardware) so the ratings between the two sites are not directly comparable even if the tournaments are conducted using the same time controls. But all of that is explained on their sites.

For comparison, the last TCEC tournament was conducted at different time controls, with the fastest being 30 min + 10 sec increment per move for the whole game for divisions 2-4 and 60 min + 10 sec increment per move for division 1 (the strongest). Then the time was increased depending on the level reached by the engines, with the slowest time control being 120 min + 15 sec increment per move. And the games were played on a 44 (!) core computer, much more capable than the 1-core to 4-core computers used by the CCRL and CEGT sites. So, needless to say, the engine ratings between TCEC and CCRL/CEGT are also not directly comparable.
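For a sense of scale, those "N moves in M minutes" repeating controls imply very different average thinking times per move. A back-of-the-envelope calculation, ignoring increments:

```python
# Back-of-the-envelope average seconds per move for an
# "N moves in M minutes" repeating time control (increments ignored).
def seconds_per_move(moves, minutes):
    return minutes * 60.0 / moves

print(seconds_per_move(40, 4))    # CCRL/CEGT 40/4  -> 6.0 s/move
print(seconds_per_move(40, 40))   # CCRL 40/40      -> 60.0 s/move
print(seconds_per_move(40, 120))  # CEGT 40/120     -> 180.0 s/move
```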

Sorry for my usual verbose response, but if that is good enough for you then you won't need to visit the individual pages. It reminds me of the old joke about a guy showing off his new watch to his friend. The guy says: "This is my new Albert Einstein watch. It not only tells you what time it is but why."

Premium Chessgames Member
  AylerKupp: <<Tiggler> But, I wish to say, with that comes a responsibility to be right, and it upsets me when you post erroneous info. I hope we're all good.>

If you think that it upsets you when I post some erroneous information, I can assure you that it's nowhere near as much as it upsets me when I do. Usually it's the result of rushing, carelessness, and not paying enough attention. It's also due to a mild dyslexia and my eyes not being as good as they used to be, so when looking at numbers at a (row, column) intersection I sometimes "see" the wrong row and/or column.

And those are just explanations, they are not intended as excuses. There aren't any. Other than the state of my eyes I simply have to be more careful and slow down somewhat. And after the fact all I can do is admit that I was wrong and apologize for my mistakes as soon as they are pointed out by anyone. Hopefully I've done that most of the time.

Which, of course, doesn't necessarily mean that I will always agree with the "correction". In that case I try to simply indicate why I disagree with the "correction" and say that we'll just have to agree to disagree or something equally trite and then move on.

As far as being good, sure. We were never "not good" as far as I'm concerned.

Premium Chessgames Member
  AylerKupp: <<Tiggler> Read your own post above. Perhaps that will refresh your memory.>

See what I mean? I obviously (well, maybe not obviously) meant to say SQRT(2) ≈ 10/7 = 1.428... and instead I visualized 7/10 = 0.7. Again, no excuse, just an irrelevant explanation.

Premium Chessgames Member
  Tiggler: <As far as being good, sure. We were never "not good" as far as I'm concerned.> Nor from my point of view, and I am happy to read your response. I don't think either of us is inclined to waste our energy railing against the nonsense posted by those who never have shown a sign of knowing better, but if you find me posting mistakes of fact, not opinion, I expect you to take me to task.