User Profile Chessforum

Member since Dec-31-08 · Last seen Aug-18-18
About Me (in case you care):

Old timer from the Fischer, Reshevsky, Spassky, Petrosian, etc. era. Active while in high school and early college, but not much since. Never rated above the low 1800s and highly erratic; I would occasionally beat much higher rated players and equally often lose to much lower rated players. Highly entertaining combinatorial style; everybody liked to play me since they were never sure what I was going to do (neither did I!). When facing a stronger player, many try to even their chances by steering toward simple positions where they can see what is going on. My philosophy in those situations was to try to even the chances by complicating the game to the extent that neither I nor the stronger player would be able to see what was going on! Alas, this approach no longer works in the computer age. And, needless to say, my favorite all-time player is Tal.

I also have a computer background and have been following with interest the developments in computer chess since the days when computers couldn't always recognize illegal moves and a patzer like me could beat them with ease. Now it's me that can't always recognize illegal moves, and any chess program can beat me with ease.

But after about 4 years (a lifetime in computer-related activities) of playing computer-assisted chess, I think I have learned a thing or two about the subject. I have conceitedly defined "AylerKupp's corollary to Murphy's Law" (AKC2ML) as follows:

"If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N."

I'm also a food and wine enthusiast. Some of my favorites are German wines (along with French, Italian, US, New Zealand, Australian, Argentinian, Spanish, ... well, you probably get the idea). One of my early favorites was wine from the Ayler Kupp vineyard in the Saar region, hence my user name. Here is a link to a picture of the village of Ayl with a portion of the Kupp vineyard on the left:

You can send me an e-mail whenever you'd like to aylerkupp

And check out a picture of me with my "partner", Rybka (Aylerkupp / Rybka) from the Masters - Machines Invitational (2011). No, I won't tell you which one is me.


Analysis Tree Spreadsheet (ATSS)

The ATSS is a spreadsheet developed to track the analyses posted by team members in various on-line games (XXXX vs. The World, Team White vs. Team Black, etc.). It is a poor man's database which provides some tools to help organize and find analyses.

I'm in the process of developing a series of tutorials on how to use it and related information. The tutorials are spread all over this forum, so here's a list of the tutorials developed to date and links to them:

Overview: AylerKupp chessforum (kibitz #843)

Minimax algorithm: AylerKupp chessforum (kibitz #861)

Principal Variation: AylerKupp chessforum (kibitz #862)

Finding desired moves: AylerKupp chessforum (kibitz #863)

Average Move Evaluation Calculator (AMEC): AylerKupp chessforum (kibitz #876)


ATSS Analysis Viewer

I added a capability to the Analysis Tree Spreadsheet (ATSS) to display each analysis in PGN-viewer style. You can read a brief summary of its capabilities here AylerKupp chessforum (kibitz #1044) and download a beta version for evaluation.


Ratings Inflation

I have recently become interested in the increase in top player ratings since the mid-1980s and whether this represents a true increase in player strength (and if so, why) or if it is simply a consequence of a larger chess population from which ratings are derived. So I've opened up my forum for discussions on this subject.

I have updated the list that I initially completed in Mar-2013 with the FIDE rating lists through 2017 (published in Jan-2018), and you can download the complete data. It is quite large (~ 182 MB), and to open it you will need Excel 2007 or a later version, or a compatible spreadsheet application, since several of the later tabs contain more than 65,536 rows.

The spreadsheet also contains several charts and summary information. If you are only interested in those and not the actual rating lists, you can download a much smaller (~ 868 KB) spreadsheet containing just the charts and summary information. You can open this file with a pre-Excel 2007 version or a compatible spreadsheet application.

FWIW, after looking at the data I think that ratings inflation, which I define to be the unwarranted increase in ratings not necessarily accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis and it goes something like this:

1. Initially (late 1960s and 1970s) the ratings for the strongest players were fairly constant.

2. In the 1980s the number of rated players began to increase exponentially, and they entered the FIDE-rated chess playing population mostly at the lower rating levels. Also, starting in 1992, FIDE began to periodically lower the rating floor (the lowest rating for which players would be rated by FIDE) from 2200 to the current 1000 in 2012. This resulted in an even greater increase in the number of rated players. And the ratings of those newly-rated players may have been higher than they should have been, given that they were calculated using a high K-factor.

3. The ratings of the stronger of these players increased as a result of playing these weaker players, but their ratings were not yet high enough to enter tournaments (other than open tournaments) where they would meet middle and high rated players.

4. Eventually they did. The ratings of the middle rated players then increased as a result of beating the lower rated players, and the ratings of the lower rated players then leveled out and even started to decline. You can see this effect in the 'Inflation Charts' tab, "Rating Inflation: Nth Player" chart, for the 1500th to 5000th rated player.

5. Once the middle rated players increased their ratings sufficiently, they began to meet the strongest players. And the cycle repeated itself. The ratings of the middle players began to level out and might now be ready to start a decrease. You can see this effect in the same chart for the 100th to 1000th rated player.

6. The ratings of the strongest players, long stable, began to increase as a result of beating the middle rated players. And, because they are at the top of the food chain, their ratings, at least initially, continued to climb. I think that they will eventually level out, and may have already done so except possibly for the very highest rated players (those among the top 50). But if this hypothesis is true there is no force to drive their ratings down, so they will now stay relatively constant, like the pre-1986 10th rated player and the pre-1981 50th rated player. When this leveling out will take place, if it does, and at what level, I have no idea. But a look at the 2017 ratings data indicates that, indeed, it has already started, maybe even among the top 10 rated players.

You can see in the chart that the rating increase, leveling off, and decline first starts with the lowest ranking players, then through the middle ranking players, and finally affects the top ranked players. It's not precise, it's not 100% consistent, but it certainly seems evident. And the process takes decades so it's not easy to see unless you look at all the years and many ranked levels.

Of course, this is just a hypothesis and the chart may look very different 20 years from now. But, at least on the surface, it doesn't sound unreasonable to me.

But looking at the data through 2017 it is even more evident that the era of ratings inflation appears to be over, unless FIDE once more lowers the rating floor and a flood of new and unrated players enters the rating pool. The previous year's trends have either continued or accelerated; the rating for every ranking category, except for possibly the 10th ranked player (a possible trend is unclear), has either flattened out or has started to decline as evidenced by the trendlines.
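The rating-flow mechanism in steps 1-6 above can be sketched with a toy Elo simulation. This is purely my illustration, not derived from the actual FIDE data: the pool size, the K-factors, and the assumption that newcomers enter rated 200 points above their true strength are all made up for the demonstration.

```python
import random

random.seed(42)

def expected(ra, rb):
    """Standard Elo expected score for a player rated ra against one rated rb."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

# Each player is [rating, true_strength, K].
# Incumbents start correctly rated (rating == true strength) with a low K.
pool = [[1400 + 40 * i, 1400 + 40 * i, 10] for i in range(20)]
incumbents = pool[:]  # keep references to the original 20 players
mean_before = sum(p[0] for p in incumbents) / len(incumbents)

for year in range(30):
    # Newly rated players enter OVERRATED (rating 200 above true strength)
    # with a high K-factor, as in step 2 of the hypothesis.
    pool.extend([[1300, 1100, 40] for _ in range(5)])
    for _ in range(3000):
        a, b = random.sample(pool, 2)
        # The game outcome is driven by TRUE strength,
        # but the rating update only sees the ratings.
        s_a = 1.0 if random.random() < expected(a[1], b[1]) else 0.0
        e_a = expected(a[0], b[0])
        a[0] += a[2] * (s_a - e_a)
        b[0] += b[2] * ((1.0 - s_a) - (1.0 - e_a))

mean_after = sum(p[0] for p in incumbents) / len(incumbents)
print(f"mean incumbent rating: {mean_before:.0f} -> {mean_after:.0f}")
```

With these assumptions the mean rating of the original, correctly rated incumbents drifts upward even though their true strength never changes: the overrated newcomers leak their excess points up the food chain, which is the "bottom feeder" effect in miniature.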


Chess Engine Non-Determinism

I've discussed chess engine non-determinism many times. If you run an analysis of a position multiple times, with the same engine, the same computer, and to the same search depth, you will get different results, at least whenever the engine uses more than one thread. Not MAY, WILL. Guaranteed. Similar results have been reported by others.

I had a chance to run a slightly more rigorous test and described the results starting here: US Championship (2017) (kibitz #633). I had 3 different engines (Houdini 4, Komodo 10, and Stockfish 8) analyze the position in W So vs Onischuk, 2017 after 13...Bxd4, a highly complex tactical position. I made 12 runs with each engine; 3 each with threads=1, 2, 3, and 4 on my 32-bit 4-core computer with 4 GB RAM and MultiPV=3. The results were consistent across the 3 engines:

(a) With threads=1 (using a single core) the results of all 3 engines were deterministic. Each engine selected the same top 3 moves in every run, with the same evaluations and, obviously, the same move rankings.

(b) With threads=2, 3, and 4 (using 2, 3, and 4 cores) none of the engines showed deterministic behavior. Each engine occasionally produced different top moves from run to run, with different evaluations and different move rankings.

I've read that the technical reason for the non-deterministic behavior is the high sensitivity of the alpha-beta search algorithm, which all the top engines use, to move ordering in the search tree, combined with the variation in this move ordering under multi-threaded operation as each thread gets interrupted by higher-priority system processes. I have not had the chance to verify this, but there is no disputing the results.
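The sensitivity of alpha-beta to move ordering can be demonstrated without any engine at all. The sketch below (my own toy example; the tree values are arbitrary) searches the same depth-2 game tree twice with its branches in different orders. Both searches return the same value, but they visit different numbers of nodes, so any timing-dependent change in move ordering changes the work done and, once hash tables and time limits are in play, can change the reported line.

```python
def alphabeta(node, alpha, beta, maximizing, stats):
    """Plain alpha-beta; leaves are ints, internal nodes are lists of children."""
    stats[0] += 1  # count every node visited
    if isinstance(node, int):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False, stats))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True, stats))
        beta = min(beta, best)
        if alpha >= beta:
            break  # alpha cutoff
    return best

# The same depth-2 tree (max node over three min nodes), two branch orders.
good_order = [[8, 7, 9], [2, 4, 5], [1, 6, 3]]  # strongest branch searched first
bad_order  = [[1, 6, 3], [2, 4, 5], [8, 7, 9]]  # strongest branch searched last

stats_good, stats_bad = [0], [0]
v_good = alphabeta(good_order, float("-inf"), float("inf"), True, stats_good)
v_bad  = alphabeta(bad_order,  float("-inf"), float("inf"), True, stats_bad)
print(v_good, stats_good[0])  # value 7, 9 nodes visited
print(v_bad,  stats_bad[0])   # value 7, 13 nodes visited
```

Searching the strongest branch first lets later branches be cut off after a single leaf; searching it last forces the full tree to be examined. On a real engine's multi-million-node tree, that difference compounds.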

What's the big deal? Well, if the same engine gives different results each time it runs, how can you determine what the real "best" move is? Never mind that different engines of relatively equal strength (as determined by their ratings) give different evaluations and move rankings for their top 3 moves, and that the evaluations may differ as a function of the search depth.

Since I believe in the need to run analyses of a given position using more than one engine and then aggregating the results to try to reach a more accurate assessment of the position, I used to run sequential analyses of the same position using 4 threads and a hash table = 1,024 MB. But since I typically run 3 engines, I found it more efficient to run the analyses using all 3 engines concurrently, each with a single thread and a hash table = 256 MB (to prevent swapping to disk). Yes, running with a single thread runs at 1/2 the speed of running with 4 threads, but running the 3 engines sequentially requires 3X the time while running them concurrently requires only 2X the time: a 33% reduction in the time to run all 3 analyses to the same depth, while also resolving the non-determinism issue.
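The arithmetic behind that trade-off can be checked in a few lines (my sketch; the 2x single-thread slowdown is the figure stated above, and T is an arbitrary time unit):

```python
T = 1.0                    # time for one engine at 4 threads to reach the target depth
slowdown = 2.0             # 1 thread runs at roughly half the speed of 4 threads
engines = 3

sequential = engines * T   # run the 3 engines one after another, 4 threads each
concurrent = slowdown * T  # run all 3 at once, 1 thread each (they fully overlap)
savings = 1.0 - concurrent / sequential
print(f"sequential={sequential} concurrent={concurrent} savings={savings:.0%}")
```

So the concurrent single-thread setup finishes in 2T instead of 3T, a one-third time saving, and as a bonus each single-threaded run is deterministic.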

So, if you typically run analyses of the same position with 3 engines, consider running them concurrently with threads=1 rather than sequentially with threads=4. You'll get deterministic results in less total time.


Any comments, suggestions, criticisms, etc. are both welcomed and encouraged.

------------------- Full Member

   AylerKupp has kibitzed 11303 times to chessgames
   Aug-16-18 Biel (2018) (replies)
AylerKupp: <offramp> Thank you for the notice. I have set my cell phone to send me an alert that I have strayed into your top 10 so that I can accelerate my enjoyment of life. Alas, I'm afraid I might have already exceeded my alloted 75%. But that's not too bad. One life down, 8 more ...
   Aug-16-18 D Horseman vs Larsen, 1957 (replies)
AylerKupp: <Troller> It could have been worse. I heard of a game in which a player, in a winning position, got up to take a walk while his opponent deliberated. When he got back to the table not only had his opponent made his move but had removed one of the player's pieces so that the ...
   Aug-16-18 Kamsky vs Karpov, 1993
AylerKupp: <Troller> Thanks, I wasn't aware of that game. And having Pirc call it a novelty was hilarious. I wonder how many other "novelties" are the result of a player making a move other than the one he intended to make!
   Aug-14-18 Biographer Bistro (replies)
AylerKupp: <<ChessHigherCat> That said, I hope it's not impossible to maintain the site without the source code. > I hope it's not impossible but it is certainly unrealistic and would be an exercise in masochism. Besides, it's unlikely that the source code is not around somewhere
   Aug-13-18 Hans Kmoch
AylerKupp: <<Sally Simpson> Yes, at this point I would say that it's humans:1, computers:0. But, frankly, Stockfish exceeded my expectations because I didn't think that it could find the drawing line at all, much less in a reasonable amount of time (if you consider 2 hours and 42 ...
   Aug-12-18 Mikhail Tal (replies)
AylerKupp: <<Charlie Durman> Looks like Misha's page is imploding with Wankas ... How sad am I to see this > And yet in spite of your sadness you decided to contribute to the implosion.
   Aug-11-18 Fischer vs Petrosian, 1971
AylerKupp: <<beatgiant> May I request that the giant off-topic discussion about computers move to a computer-related page such as Komodo (Computer)?> A reasonable request and I will abide by it. However, since the discussion has evolved towards the study composed by W.E. Rudolph ...
   Aug-06-18 First Piatigorsky Cup (1963) (replies)
AylerKupp: <Sally Simpson> My first car was a 1959 Rambler. Nice to know that Petrosian, Keres, and I had something else in common besides our love of chess. However, they should have spent the same amount of time learning about American cars as they did learning about openings.
   Aug-05-18 chessforum (replies)
AylerKupp: <<Big Pawn> I've got to do what I've got to do.> There is already a process in place for reporting users who violate <>'s posting guidelines. On the right bottom of every page there is a list of their posting guidelines and a link ( Blow the Whistle ...
   Aug-05-18 Caruana vs Anand, 2013 (replies)
AylerKupp: <john barleycorn> Is that an "infinite loop" or a "loop run through infinitely"? or are they the same thing?> "Infinite loop" is the usual term. I've never heard the term "loop run through infinity" but I would suspect they mean the same thing. For instance, I have a book
(replies) indicates a reply to the comment.

De Gustibus Non Disputandum Est

Kibitzer's Corner
Premium Chessgames Member
  visayanbraindoctor: Nice profile!

<after looking at the data I think that ratings inflation, which I define to be the unwarranted increase in ratings not necessarily accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis>

Thanks for this fascinating hypothesis.

For myself, I believe that the top players of the past, such as <Fischer, Reshevsky, Spassky, Petrosian> whom you mentioned, are every bit as good as the top players today. Fischer IMO is even better than Carlsen. I base my opinion on a gut-level feeling after studying their games. Fischer for example plays better chess than anyone else today.

Yet Fischer's highest rating was only 2789, only good for #18. This by itself IMO constitutes unambiguous evidence that rating inflation exists.

I grew up in the Karpov era. I've followed his games over the years. I am convinced he was a stronger player in the 1970s and early 1980s. It would have been impossible for him to have a higher rating in the 1990s if rating inflation did not exist.

I actually believe that Capablanca in his prime would beat even Fischer and Karpov. After studying many of Capa's games, I concluded that I would never believe that a human being could play that many nearly errorless complicated games in real time, if they were not fully documented to have been played by a human being. (For example see my post in Jonathan Sarfati chessforum, and my analysis in many of Capablanca's game pages.) He had no Elo rating.

Yet many of today's rating-robots would ignore these top players of the past just because they never hit 2800.

Premium Chessgames Member
  AylerKupp: <visayanbraindoctor> Thanks for taking the time to answer and for the nice word about my profile. I'm not that familiar with Ding Liren's games and I wasn't even sure that he had had his own page, but I'm glad to see that he does. Although, with only 19 pages of kibitzing, it's clearly not that well known or popular.

I pretty much agree (and knew) about most of your observations about ratings. One thing I had not thought of in the past is the localization of players and how it influences their ratings. It made me think of a recent book I was reading about linear algebra, where sparse matrices were characterized by whether they had block regions: regions of the matrix with many non-zero elements in a few localized and adjacent rows and columns, and mostly zero elements in the rest of the rows and columns. Apparently there are special algorithms that can more efficiently solve systems of linear equations whose coefficient matrices have this structure.

The problem of not being able to compare the relative strengths of players from different eras, at least with the Elo rating system, because they belong to different rating populations, is well known. It's too bad that Jeff Sonas abandoned his calculations of Chessmetrics rankings in 2005, although I'm still not sure of their validity for comparing the relative strengths (i.e. ratings) of players from different eras and populations.

I'm not sure about the feasibility of determining players' strengths by looking at their games, at least for a lot of players. In my case the obvious problem is my personal inability (because of my low playing strength) to determine the true quality of their play. Many attempts have been made to base this evaluation by comparing these top players' moves with the moves suggested by various computer engines but, in my opinion, all these attempts have been seriously flawed and it's not worth attaching much value to their conclusions.

It's also important to realize that today's top players have more tools at their disposal, mostly computer engines, databases, and tablebases, than earlier players did. And they also have more opportunities to play in tournaments against other top players, and that can't help but improve their game. So they may indeed be better, in the sense of making fewer mistakes, than players from older eras, and their higher ratings just reflect that. I just don't know. But I have no doubt that if the top players from other eras; the Capablancas, Alekhines, Fischers, Spasskys, etc. were somehow transported into the current time and given adequate time and exposure to current chess analysis tools that they would be able to hold their own against today's best players.

You might be interested in downloading the summary spreadsheet from the link in my forum header. It has a lot of charts comparing the ratings of players at different rating levels since 1966. I update it once a year, so I will be doing that in Jan-2018, and it will be interesting to see if ratings inflation has indeed plateaued and is heading downwards for all rating levels. I had predicted, based on trends, that Carlsen's rating would fall below 2800 this year but, although it seemed headed in that direction, it looks like that was premature.

Premium Chessgames Member
  visayanbraindoctor: <it will be interesting to see if ratings inflation has indeed plateaued and is heading downwards for all rating levels>

This will be interesting indeed. However, I'm not a mathematician, and so I will take your word for it when <ratings inflation has indeed plateaued>

I do know that if the top players confine themselves to playing mostly each other, then they will form a quasi-equilibrium group that will maintain their current high ratings, regardless of the quality of their games or chess strength. But you surely will know how to factor this in your calculations.

<So they may indeed be better, in the sense of making fewer mistakes, than players from older eras, and their higher ratings just reflect that.>

This is the crux of the issue. I used to think this way too. Then Bridgeburner and I made a detailed study of the Lasker - Schlechter World Championship Match (1910)

Since we had to go through their games move by move, as though they were playing in real time, every brilliancy and error of theirs hit us with as much impact as seeing modern GM games being played live on the internet. I soon came to subjectively realize that Lasker and Schlechter were playing their middlegames and endgames more or less as well as modern Champions, which was objectively confirmed by a computer.

I believe that it's their openings that are objectively worse (in the sense of being less accurate and more dubious or <more mistakes> as you say) compared to today's. However, once they got out of the book and into the middlegame, they were every bit as good as today's best players.

<the top players from other eras; the Capablancas, Alekhines, Fischers, Spasskys, etc. were somehow transported into the current time and given adequate time and exposure to current chess analysis tools that they would be able to hold their own against today's best players.>

A quick game genius like Capablanca I believe would win the blitz and rapid championship of the world even without modern opening preparations more than 50% of the time, simply by deploying quiet openings such as QGD, Spanish, or Italian, and then out blitzing his opponent in a more or less equal middlegame.

Carlsen (and Karpov in his heyday) does exactly this stuff even in classical time controls. It's ironic but it seems to be Carlsen fans that keep on claiming that Carlsen plays better because he was born in the computer age, when among top players, he is the one most likely to play 1920s 'classical' openings and eschew sharp computer opening lines. (He rarely plays Indian openings and asymmetrical openings such as the Sicilian. Faced with the Sicilian himself, he opts to steer it into 'closed' variations, as he did in his matches with Anand. Carlsen is extremely 'classical' in his approach to openings, preferring to directly occupy the center with his pawns, rather than control it indirectly by fianchettoes or by counterpunching asymmetrical openings. The way he plays his openings is similar to a 1920s master.)

Regarding Capablanca, if he were transported to the modern era, he would probably play his openings exactly like Carlsen's: get right into a 'safe' semi-closed or closed middlegame, and then hope to outplay his opponent. Capa would probably be just as successful too, and perhaps more so, as I have reason to believe that he was a better tactician than Carlsen.

Alekhine on the other hand would prepare the sharpest of openings, and he would be overjoyed to have computers assist him. Alekhine from all accounts had an eidetic chess memory. It would be no problem for him to update himself on the sharpest opening variations in short order from a laptop. The chess world would soon see him blasting his opponents off the board with non-stop sacs and brilliancies, exactly like Kasparov, live on the internet.

There is another thing that I've noticed. The stronger individual kibitzers are, the more they tend to think that rating inflation exists (but not all of course). It's mostly the (pardon the expression) patzers that tend to think that ratings reflect absolute chess strength and totally deny any form of rating inflation. They can go through Carlsen vs Bu Xiangzhi, 2017 and Capablanca vs Marshall, 1918, and fail to realize that Capablanca was defending the position and handling its tactics (in a similar situation) better than Carlsen, on the assumption that since Capablanca had no Elo rating, he would not be able to do such a thing better than Carlsen.

It's like for them, chess has been reduced to ratings. When they see two chess players play a game, they see only how the players' respective ratings can change and fail to see the game itself.

Oct-17-17  Octavia: < The more you reply to him, the more trash he'll post.

I'm not sure about that. I stopped responding to him for a while (i.e. taking the bait) and it didn't seem to reduce his posting volume.> Of course, he'll keep on posting hoping for some others to answer him. If nobody answered he'd stop eventually.

You don't need to worry about others believing him. What does it matter?

Premium Chessgames Member
  takchess: AK, your notes on computing reminded me of Complexity Theory:

In the 1950s and 1960s, American meteorologist Edward Lorenz found that small rounding errors in his computer data (which had a limited number of significant figures) led to large non-linear instabilities that grow exponentially in time and make long-term prediction impossible. This is the famous "butterfly wings in Beijing" effect discovered in weather prediction.

http://www.informationphilosopher.c... found at the link above
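Lorenz's observation can be reproduced in miniature with the logistic map, a standard textbook stand-in for chaotic dynamics (an illustrative aside, not from the posts above): two starting points differing by one part in a trillion, standing in for a rounding error, diverge completely within a few dozen iterations.

```python
x, y = 0.2, 0.2 + 1e-12      # two starting points differing by a "rounding error"
max_diff = 0.0
for step in range(60):
    x = 4.0 * x * (1.0 - x)  # logistic map at r=4, a fully chaotic regime
    y = 4.0 * y * (1.0 - y)
    max_diff = max(max_diff, abs(x - y))
print(f"largest divergence over 60 steps: {max_diff:.3f}")
```

Since the tiny initial gap roughly doubles each step, by around step 40 the two trajectories bear no resemblance to each other, which is the same amplification mechanism Lorenz saw in his weather model.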

Nov-02-17  Boomie: <I'm not sure about the feasibility of determining players' strengths by looking at their games.>

Computers can measure the tactical strengths of players only. They are oblivious to psychology, aesthetics, and other human factors that raise the game to the level of an art.

One measure which hasn't been mentioned here is the opinion of world champions and other strong players. For example, Capa, who was not effusive in his praise of other players, said he was flattered to be considered as talented as Morphy. Fischer worshipped Morphy and Botvinnik praised him. I suggest that their opinions carry more weight than pages of computer screed. They all knew that Morphy would be a formidable opponent in their times. Plus I'd wager that they would all love the opportunity to play him.

Premium Chessgames Member
  takchess: Aagaard in his Attacking Manual 1 and 2 has some interesting views on chess computer analysis. Worth checking out.
Premium Chessgames Member
  kwid: <AylerKupp:> As a member of team black in the Traxler 5.Bc4 challenge game I am disappointed by your expressed wish to withdraw.

Please reconsider, since the contributions you have already made to opening theory will be of historical value.

Since we are only in an early stage of the opening it would really be interesting to see the conclusion, especially if team white has worked out the refutation of the Traxler opening.

This game could be of great theoretical value if you would help us to play the best possible variations for black to counter RV's contributions for the white side.

Premium Chessgames Member
  AylerKupp: <kwid> Sorry, but I can't reconsider. It was not a snap decision, I had been thinking about it for a while because of all the other things that I need to do. I had misgivings about joining the game from the beginning because of the known demands on my time but I decided to give it a try anyway. Unfortunately it got to the point where I didn't think I could do a good enough job and it was very frustrating, I wasn't enjoying it. And, if you're not going to enjoy the game, what's the point? So, if I was going to withdraw, I thought that it would be better for the team that I do it now rather than later.

Anyway, thanks for the kind words from you and other team members. And best of luck during the rest of the game.

Premium Chessgames Member
  AylerKupp: <<tpstar> Re: Wesley So (kibitz #214349)>

How typical. You can't dispute the assertions I made to your last post (Tata Steel (2018) (kibitz #1174)) so you withdraw into the Chessgames Home Page page cocoon hoping that I and others won't follow. Well, as you can see, that won't work.

There definitely IS a controversy around the So forum and it's growing larger every day. As far as "witnessing a bunch of sore loser crybabies who are dying to post here, but they can't, because they are anti-Wesley." that's certainly not true. These '"crybabies" can't post in the Chessgames Home Page pages because they have been banned by its webmaster based on the "suggestions" of users like you, not because they are anti-So. Then again, you think that because most users are not 110% fanatical So worshipers they are automatically anti-So. You will apparently never learn the difference.

As far as I'm concerned I've told you that I don't care one way or another whether I post on the Chessgames Home Page page. The number of times I've done that is small, and you can easily verify that. I doubt that too many users will lose any sleep about being banned from posting on the Chessgames Home Page page.

If Wesley So did not want quarrels spoiling his player page then perhaps he should not be visiting the page. At any rate, "spoilage" is often in the eye of the beholder.

As far as asking chess fans to stay out of his personal business, that's a reasonable request. But it's only a request. He is a public figure by voluntarily participating in the public arena and, as US courts have noted ( ), a public figure has a significantly diminished privacy interest compared to others. Therefore, if he does not wish people to dig into his personal business, he should not have decided to become a public figure.

I for one would also like for <BatangaLista> to periodically update the list of so-called anti-Wesley posters in order to facilitate also banning them from posting on the Chessgames Home Page page. Then I can create a spreadsheet to show how the number of banned posters increases over time. I'm sure that <> would also be interested in this information.

Premium Chessgames Member
  tpstar: <AylerKupp> It doesn't matter what I think. It matters what Wesley thinks, and he has always been touchy about his player page. Anyone who has read that page from the top understands this - Dot Dot Dot - and anyone who remembers when he left this site before understands this - Dot Dot Dot.

Since moving to the U.S., Wesley had a rocky split with Webster, a rocky split with his parents, and a rocky split with the Barangay Wesley. Without dredging it all back up, he said some mean things about Norlito, and vice versa, then we all picked sides. Except you can't spot Norlito, and you can't spot Joselito, and you can't spot Francis, and you can't spot Glenn. They just now tried to sneak back in using the former handle of a dead person, which rightfully got banished during the latest scuffle. For the fourth time, this "fanatical contingent" you keep referencing is anti-Wesley. Blaming pro-Wesley supporters for anti-Wesley antics is a false narrative.

Read Tata Steel (2018) from the top and then try to tell me, "There is NO anti-So contingent" here. This event ended two weeks ago and has fully degenerated into a pity party bitch session by sore loser crybabies who cannot post on the Wesley So page because they are repeat offenders. Moreover, even a child could notice the blatant antagonism toward Wesley and his pro-Wesley supporters. When you burst onto his page in August 2017 and declared your opinion two years after the Great Banishment of May 2015, well, you couldn't have been more wrong. You also couldn't have been more wrong here:

<This "ever present malice" is not very real and it is mostly a figment of yours and very few others' imagination who think that anyone that is not (fanatically) pro-So is automatically (fanatically) anti-So.>

Perhaps you saw your intervention as enlightened verbosity, but apparently Team Wesley perceived it as quarreling.

I warned you to drop it, and I was right. The next person will take the hint. Meanwhile, good luck getting unbanished.

Premium Chessgames Member
  AylerKupp: <tpstar> You still don't get it. You keep trying to lump together the anti-So antagonisms and the anti-So<bot> antagonisms and they are not the same. There is no substantial anti-So antagonism; although there is beginning to be a substantial anti-So-contingent antagonism and it is increasing as a result of your silly and extreme antics.

As far as there being NO anti-So contingent here: there IS an anti-<So>bot antagonism but, since you are not willing to accept that they are not the same, you think that any antagonism is a reflection on So himself, and that is just not the case.

And as far as blaming pro-Wesley supporters for anti-Wesley antics I have done no such thing. I have said that <some> of these pro-Wesley "supporters" reflect poorly on him, although they delude themselves into thinking that their extreme annoying behavior actually serves him. They could not be any more wrong and have no one but themselves to blame.

I did not see any supposed "intervention" on my part as enlightened in any way, just verbose as usual. If Team Wesley perceived it as quarreling they couldn't be more wrong nor could I care less.

As far as my getting unbanished from the Wesley So page I also couldn't care less. Even if I were unbanished I wouldn't bother posting anything there. Why bother doing so when you all have a closed mind and are unwilling to tolerate any other views but your own, even if they are expressed in an objective and non-derogatory way? You extreme Wesley So fanatics just want to live in your fictional world and tolerate no discussion, just fawning and worshiping. Who in their right mind wants to be a part of that?

Premium Chessgames Member
  tuttifrutty: You forgot to say " You tell me" at the end of your question. :-)
Premium Chessgames Member
  AylerKupp: <<tuttifrutty> You forgot to say " You tell me" at the end of your question. :-)>

You're absolutely right, sorry about that. In fact, in my ever futile attempts to be less verbose, I think that I'll start abbreviating that as "YTM". Unless, of course, you register the phrase as a trademark. I think that some day this acronym will be as famous and ubiquitous as LOL, IMO, and BTW. And don't worry, if anyone ever asks me what YTM stands for, I'll give you full credit. I already have many things to be infamous for, and I'm more than willing to share.

Premium Chessgames Member
  AylerKupp: <2018 Candidates Tournament Simulation> (part 1 of 4)

<Lambda> Below is my concept of how a tournament simulation might be implemented in Excel and how it could be used to determine each player's tournament winning probabilities. I would greatly appreciate if you could take a look at it, tell me if it seems like a reasonable approach, and let me know if it's in any way similar to what you've done.


a. Assumed Draw percentage = 4/7 or p(Draw) ~ 0.571429

b. Assumed White advantage = 35 Elo points

c. White p(Win or Draw) with no Elo rating point advantage = 0.500000

d. White p(Win or Draw) with 35 Elo rating point advantage = 0.549241

e. White's p(Win or Draw) advantage = 0.549241 - 0.500000 = 0.049241

Note: I display 6 digits (rounded) in the p(Win or Draw) calculations because that's the number of digits needed to ensure that each p(Win or Draw) is unique for each 1 point Elo rating difference. After rounding to 6 decimal places this allows me to do a table lookup in Excel to get the p(Win or Draw) based on the rating differential, which is faster than a search. But all the calculations are made using Excel's internal 15-digit precision.
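The lookup idea above can be sketched in a few lines. The posts don't say which rating-expectancy curve the table is built from; the sketch below assumes the standard logistic Elo curve, which gives ~0.550 (rather than 0.549241) at +35 points, so the actual table evidently uses a slightly different curve:

```python
def p_win_or_draw(rdiff):
    """White's expected score for a given Elo rating difference,
    rounded to 6 digits. Assumes the standard logistic Elo curve;
    the table described above evidently differs slightly (it gives
    0.549241 at +35 where this curve gives ~0.550)."""
    return round(1 / (1 + 10 ** (-rdiff / 400)), 6)

# Rounding to 6 digits keeps each 1-point rating difference distinct,
# so the rounded value can key a reverse lookup (the Excel-style table):
lookup = {p_win_or_draw(d): d for d in range(-1000, 1001)}
```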

And, while I hopefully have your attention, perhaps you could answer a few more questions:

1. What language / package did you use to implement your simulation?

2. How long does it take to run 1 million simulations?

3. Did you ever determine how many simulations needed to be run in order for the results to be statistically valid? I typically use the criteria that a result is statistically valid if there is less than a 0.05 probability that the result was due to chance.

Premium Chessgames Member
  AylerKupp: <2018 Candidates Tournament Simulation> (part 2 of 4)


<1. Calculate p(Win), p(Loss) >

For each player / player and White / Black combination, calculate the p(Win) and p(Loss) for the White player based on their rating differences per the Mar-2018 FIDE rating list. For an 8-player double round robin there will be 2 * 8 * 7 = 112 combinations.

<Example> Mamedyarov (highest rated player in the tournament) vs. Karjakin (lowest rated player). First let's consider Mamedyarov playing White:

Mamedyarov's Pre-tournament rating = 2809

Karjakin's pre-tournament rating = 2763

Rating difference (RDiff) = 46 Elo rating points

Ignoring any White advantage:

a. Mamedyarov's [ p(Win or Draw | RDiff = 46) ] = 0.564597

b. Mamedyarov's [ p(Loss or Draw | RDiff = 46) ] = 1 - 0.564597 = 0.435403

Since White's advantage is assumed to be 35 rating points and this corresponds to a 0.049241 difference in White's p(Win or Draw), I add 1/2 this amount to White's p(Win or Draw) and subtract 1/2 this amount from White's p(Loss or Draw) to get:

a. Mamedyarov's p(Win or Draw | RDiff = 46 | Mamedyarov playing White) = 0.589217

b. Mamedyarov's p(Loss or Draw | RDiff = 46 | Mamedyarov playing White) = 0.410783

Since p(Draw) = 0.571429, subtract 1/2 of this amount from Mamedyarov's p(Win or Draw) and p(Loss or Draw) so:

a. Mamedyarov's [ p(Win) | RDiff = 46 | Mamedyarov playing White ] = 0.589217 - 0.571429 / 2 = 0.303503

b. Mamedyarov's [ p(Loss) | RDiff = 46 | Mamedyarov playing White ] = 0.410783 - 0.571429 / 2 = 0.125069

As a check, p(Win) + p(Draw) + p(Loss) must = 1. Therefore, given that Mamedyarov is playing White and his rating advantage over Karjakin is 46 rating points:

Mamedyarov's p(Win) + p(Draw) + p(Loss) = 0.303503 + 0.571429 + 0.125069 = 1.000000

Now, let's assume that Karjakin is playing White. Then RDiff = -46 Elo rating points. Ignoring any White advantage:

a. Karjakin's [ p(Win or Draw | RDiff = -46) ] = 0.435403

b. Karjakin's [ p(Loss or Draw | RDiff = -46) ] = 1 - 0.435403 = 0.564597


Again adding half of White's 0.049241 p(Win or Draw) advantage and subtracting half of it from p(Loss or Draw):

a. Karjakin's p(Win or Draw | RDiff = -46 | Karjakin playing White) = 0.460024

b. Karjakin's p(Loss or Draw | RDiff = -46 | Karjakin playing White) = 0.539976

c. Karjakin's [ p(Win) | RDiff = -46 | Karjakin playing White ] = 0.460024 - 0.571429 / 2 = 0.1743095

d. Karjakin's [ p(Loss) | RDiff = -46 | Karjakin playing White ] = 0.539976 - 0.571429 / 2 = 0.2542615

Again, as a check, given that Karjakin is playing White and his rating disadvantage relative to Mamedyarov is 46 rating points:

Karjakin's p(Win) + p(Draw) + p(Loss) = 0.174310 + 0.571429 + 0.254262 = 1.000000
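The arithmetic in the examples above can be checked with a short helper; the 0.564597 input, the 0.049241 White bonus, and the 4/7 draw rate are all taken from the posts above:

```python
def split_probs(p_win_or_draw, white_adv=0.049241, p_draw=4/7):
    """Split White's p(Win or Draw) into p(Win), p(Draw), p(Loss),
    following the recipe above: add half the White advantage to
    p(Win or Draw), then carve out half the draw probability."""
    pwd = p_win_or_draw + white_adv / 2   # White's boosted p(Win or Draw)
    pld = 1 - pwd                         # White's p(Loss or Draw)
    p_win = pwd - p_draw / 2
    p_loss = pld - p_draw / 2
    return p_win, p_draw, p_loss

# Mamedyarov (White) vs. Karjakin, RDiff = +46 -> p(Win or Draw) = 0.564597:
pw, pd, pl = split_probs(0.564597)   # pw ~ 0.303503, pl ~ 0.125069
```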

Premium Chessgames Member
  AylerKupp: <2018 Candidates Tournament Simulation> (part 3 of 4)

<2. Calculate Win, Draw, and Loss ranges>

For each player / player, White / Black combination also calculate 3 sets of values:

a. Win Range = [ 0 , p(Win) ]

b. Draw Range = [ p(Win) , p(Win) + p(Draw) ]

c. Loss Range = [ p(Win) + p(Draw) , 1 ]

<Example> For Mamedyarov vs. Karjakin, Mamedyarov playing White:

a. Win Range = [ 0.000000 , 0.303503 ]

b. Draw Range = [ 0.303503 , 0.874932 ]

c. Loss Range = [ 0.874932 , 1.000000 ]

And for Mamedyarov vs. Karjakin, Karjakin playing White:

a. Win Range = [ 0.000000 , 0.174310 ]

b. Draw Range = [ 0.174310 , 0.745739 ]

c. Loss Range = [ 0.745739 , 1.000000 ]

Premium Chessgames Member
  AylerKupp: <2018 Candidates Tournament Simulation> (part 4 of 4)

<3. Determine the winner of one tournament by simulation>

For each player / player, White / Black combination, calculate a random number (RN) between 0 and 1. Score each game as follows:

a. White player wins if RN <= Win Range(2)

b. White player draws if RN > Draw Range(1) and RN <= Draw Range(2)

c. White player loses if RN > Loss Range(1) (i.e. otherwise)

<Example> For Mamedyarov vs. Karjakin, Mamedyarov playing White:

a. A win for Mamedyarov if the value is <= 0.303503

b. A draw for Mamedyarov if the value is > 0.303503 and <= 0.874932

c. A loss for Mamedyarov if the value is > 0.874932

And for Mamedyarov vs. Karjakin, Karjakin playing White:

a. A win for Karjakin if the value is <= 0.174310

b. A draw for Karjakin if the value is > 0.174310 and <= 0.745739

c. A loss for Karjakin if the value is > 0.745739

The winner of the tournament, of course, is the player with the highest score. If two or more players have the same score, consider that each player won the tournament.
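Steps 2 and 3 together amount to inverse-transform sampling from the cumulative ranges; a minimal sketch, using the Mamedyarov-as-White figures from the example above:

```python
import random

def play_game(p_win, p_draw):
    """Score one game from White's perspective by inverse-transform
    sampling: 1.0 = White win, 0.5 = draw, 0.0 = White loss."""
    rn = random.random()
    if rn <= p_win:               # Win Range  = [0, p_win]
        return 1.0
    if rn <= p_win + p_draw:      # Draw Range = (p_win, p_win + p_draw]
        return 0.5
    return 0.0                    # Loss Range = (p_win + p_draw, 1]

# Mamedyarov as White vs. Karjakin, using the probabilities from above:
result = play_game(0.303503, 0.571429)
```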

<4. Determine the tournament win probabilities>

Run a simulation of as many tournaments as desired, or as required to establish statistical significance. Determine for each player their p(Tournament Win) as the ratio between the number of tournament wins by that player and the total number of tournament wins by all players. Note that the total number of tournament wins by all players will likely be greater than the number of simulations run because more than one player might tie for first place in any simulated tournament.
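Putting steps 1-4 together, here is a compact sketch of the whole simulation. Two assumptions to flag: the logistic Elo curve stands in for the lookup table described earlier, and "PlayerC" with rating 2780 is a hypothetical placeholder (only the Mamedyarov and Karjakin ratings come from the posts above):

```python
import random
from itertools import permutations

P_DRAW = 4 / 7          # assumed draw rate, as above
WHITE_ADV = 0.049241    # White's p(Win or Draw) bonus, as above

def p_win_loss(r_white, r_black):
    """White's p(Win) and p(Loss), using the logistic Elo curve
    as an assumed stand-in for the lookup table described above."""
    pwd = 1 / (1 + 10 ** (-(r_white - r_black) / 400)) + WHITE_ADV / 2
    return pwd - P_DRAW / 2, (1 - pwd) - P_DRAW / 2

def simulate(ratings, n_sims=10_000):
    """Estimate each player's tournament-win probability for a double
    round robin; a tie counts as a win for every tied player."""
    wins = dict.fromkeys(ratings, 0)
    games = list(permutations(ratings, 2))  # each ordered (White, Black) pair
    for _ in range(n_sims):
        score = dict.fromkeys(ratings, 0.0)
        for white, black in games:
            p_win, _ = p_win_loss(ratings[white], ratings[black])
            rn = random.random()
            if rn <= p_win:
                score[white] += 1.0
            elif rn <= p_win + P_DRAW:
                score[white] += 0.5
                score[black] += 0.5
            else:
                score[black] += 1.0
        top = max(score.values())
        for p in ratings:
            if score[p] == top:
                wins[p] += 1
    total = sum(wins.values())  # >= n_sims because of ties
    return {p: w / total for p, w in wins.items()}

probs = simulate({"Mamedyarov": 2809, "Karjakin": 2763, "PlayerC": 2780})
```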

Hopefully all this makes sense to you.

Premium Chessgames Member
  AylerKupp: <Lambda> I don't know how you arrived at your assumptions of a draw percentage = 4/7 ~ 57.14% or your assumed White advantage = 35 Elo points. But in case you're interested here's some data that validates your assumptions using the ChessTempo database.

The ChessTempo database currently contains over 1.7 million chess games of all types (Classic time control, Blitz, Rapid, etc.). One thing that makes it useful is that you can easily filter it to consider only games where both players were rated 2200+, 2300+, ..., 2700+. The latter is particularly useful for determining % Win, % Draw, and % Loss for the White player in a super strong tournament like the 2018 Candidates.

The last time I looked at the database in detail was in May-2017. At that time it had 14,502 games of all types where both players were rated 2700+. I filtered the database to exclude games played in events that had Blitz, Rapid, Exhib(ition), Blind(fold), and Simul(taneous) in their title, and the remaining 8,879 games I assumed to have been played at Classic time controls. Not perfect, but probably close.

Of these 8,879 games all were played since 2000 so they are probably all relevant. White won 26.23% of the games, lost 15.73% of the games, and 58.04% of the games were drawn. Clearly the 58.04% is very close to your 57.14% assumption for a draw percentage.

The percentage of White wins plus the percentage of Black wins was 41.96%. Calculating the White advantage as White Win % - 1/2 * (White + Black Win %) = 26.23% - 20.98% = 5.25%, or a White expected score of 0.5525. This corresponds to a rating differential of +37 Elo rating points. Again, very close to your assumed +35 Elo rating points.
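The step from a 5.25% edge to +37 Elo points can be reproduced by inverting the expectancy formula. A quick check, assuming the standard logistic Elo curve (any similar curve gives nearly the same answer at this scale):

```python
import math

def elo_diff(expected_score):
    """Rating difference that yields the given expected score,
    inverting the standard logistic Elo curve (an assumption)."""
    return -400 * math.log10(1 / expected_score - 1)

# White's expected score of 0.5 + 0.0525 = 0.5525 in the 2700+ games:
d = elo_diff(0.5525)   # about +37 Elo points
```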

Just thought that you might be interested to know.

Premium Chessgames Member
  Lambda: My simulation tool is written in Python, and it takes slightly over a minute to run a million simulations. (Less time once the tournament starts and some of the game results are already determined.)

I haven't attempted to define "statistical validity", but the results from a million trials don't tend to change from run to run by more than 0.1%.
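That run-to-run stability is consistent with the binomial standard error of an estimated proportion, sqrt(p(1-p)/n). A quick check, with the ~15% win probability chosen purely as an illustrative value:

```python
import math

def std_error(p, n):
    """Standard error of a probability p estimated from n independent
    simulated tournaments (binomial approximation)."""
    return math.sqrt(p * (1 - p) / n)

# For a ~15% tournament-win probability estimated from 1 million runs,
# the standard error is about 0.00036, i.e. well under 0.1%:
se = std_error(0.15, 1_000_000)
```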

I have no insights about what a good way to use Excel to do this because I've never used Excel in my life, and indeed I've never willingly used any tool from any "office suite" in my life. Markup languages for text formatting, and programming languages for data processing is my attitude.

But other than what you need to do to work around the limitations of your tool, that sounds about right to me. I haven't checked your details, but in approach, the only obvious differences I have are that:

My "draw area" is the first four sevenths of my random number, so I can generate the random number, immediately check whether it's a draw, and if it is, I don't have to do any further calculations for that game, for efficiency, and

At the end, I check for ties and try applying all the tie-breaks to my cross table to get the one winner, and if they're all tied too, call a tie a ninth result.

Premium Chessgames Member
  AylerKupp: <Lambda> Thanks for responding. I use Excel mainly for entering the parameters (players, ratings, draw %, etc.) so that they are easily visible and changeable. The bulk of the work I do using Visual Basic for Applications (VBA), mainly because I'm familiar with it, and I start the simulation by invoking a macro. I've been trying to learn Python off and on for years because I find it elegant, but I've never gotten it to work on my computer for unknown reasons. Then again, I haven't tried very hard.

I'm encouraged that not only do you think (at least at first glance) that my concept is reasonable (or, at least, not unreasonable) but also that it only takes a little over a minute to run a million simulations. Since both Python and VBA are interpreted, VBA should be able to run in the same ballpark, although I'm sure that Python is much more efficient. After all, VBA is a Microsoft product.

The reason I mentioned statistical significance testing was that, if 1 million simulations took a long time to run, then the number of simulations probably could have been reduced to save time. But, if it only takes slightly over a minute (or two, or five), then it's not an issue.

Yes, for efficiency, I was going to check for a draw first since that's the statistically most likely result; it's just second nature to me after many years of writing real-time software. But since that was not relevant to the concept, I didn't bother to mention it. At any rate, given the small amount of time that it takes to run 1 million simulations, this type of efficiency is probably not significant.

I had not considered using the tie-breaks to get the one winner; thanks for the tip. It makes more sense. But, reviewing the tie-break rules for this tournament, after the first 3 tie-breakers the players need to first play 2 rapid games, then up to 4 blitz games, and then a sudden death game. To be more "accurate", I would think that additional simulations would have to be run with the players' rapid and blitz ratings used, and I doubt that there are any ratings for sudden death games. So that's more work. Then again, the likelihood that there would be a tie after using Sonneborn-Berger is miniscule, so bothering to implement rapid, blitz, and sudden death tie-break simulations is probably not worth the effort.

FWIW, I like the tie-break sequence in the Candidates and I wish more tournaments would use it. It encourages trying for wins just in case and it's based primarily on the results obtained by the players in the actual tournament based on games at classic time controls. So all the arguments for and against using rapid and blitz time controls to determine the results of a tournament played at classic time controls can be avoided for the most part.

Mar-31-18  qqdos: <Dear AK> would you like to take a quick look at the invitation at Bobby's flawed Gem vs Geller [B89]? Kind regards.
May-11-18  yskid: I've just posted on the "Naiditch game" site: the 12.a3 line played in the correspondence championship game.
May-18-18  djvanscoy: <AylerKupp> "It made me think of a recent book I was reading about linear algebra where they were characterizing sparse matrices as to whether they had block regions, regions of the matrix which had a lot of non-zero elements in a few localized and adjacent rows and columns but typically only non-zero elements in the rest of the rows and columns."

I'm guessing you meant to say, "...typically only zero entries in the rest of the rows and columns"? In other words, some block is dense but the rest of the matrix is sparse?

"But I have no doubt that if the top players from other eras; the Capablancas, Alekhines, Fischers, Spasskys, etc. were somehow transported into the current time and given adequate time and exposure to current chess analysis tools that they would be able to hold their own against today's best players."

I agree with you, and indeed I couldn't help but think that Carlsen's rook-and-pawn endgame blunder on move 54 of his game against Caruana in the first round of the 2018 GRENKE tournament (Caruana vs Carlsen, 2018) would not have been made by Capablanca. But maybe in this case I'm afflicted with a bit of hero-worship.

Premium Chessgames Member
  AylerKupp: <<FSR> You're right - 13-12! I can't imagine that there are many tournaments, at whatever time control, where Black wins more often than White.>

Thanks for the link to your fine article. It's good to see that others recognize that the percentage of draws increases as the rating of the players increases. Which is not surprising given that it's generally (not unanimously) accepted that in order for one player to win a game the other player must make at least one mistake or a series of inaccuracies. So, since the higher rated the player the better he generally is, it's not surprising that the higher rated the players the less the likelihood that one of them will make a mistake. Hence, the greater the percentage of draws.

One good way to see this is to look at the ChessTempo database. In addition to listing the win/lose/draw result percentages for all the games in its database, it gives you the ability to filter the games according to the ratings of the 2 players; 2200+ (both players rated higher than 2200), 2300+ (both players rated higher than 2300), etc.

So here is the current (today's) database snapshot from White's perspective:

Rating  # Games    Win %  Draw %  Loss %

All     3,459,235  38.4%  31.4%   30.2%

2200+   1,712,350  35.1%  39.5%   25.5%

2300+   1,176,981  33.5%  43.0%   23.4%

2400+     692,046  31.9%  46.8%   21.3%

2500+     266,553  30.0%  50.9%   19.1%

2600+      73,269  29.4%  51.9%   18.6%

2700+      16,510  28.7%  52.2%   19.1%

Clearly the percentage of draws increases as the ratings of the players increase. The percentages, however, are somewhat "contaminated" since the database includes games at Classic, Rapid, and Blitz time controls as well as blindfold, exhibition, etc. And it's not easy to filter the various categories other than by looking at the names of the events, and those are not always sufficiently descriptive.
