Chessgames.com User Profile Chessforum

AylerKupp
Member since Dec-31-08 · Last seen Nov-23-17
About Me (in case you care):

Old timer from the Fischer, Reshevsky, Spassky, Petrosian, etc. era. Active while in high school and early college, but not much since. Never rated above the low 1800s and highly erratic; I would occasionally beat much higher rated players and equally often lose to much lower rated players. Highly entertaining combinational style; everybody liked to play me since they were never sure what I was going to do (neither did I!). When facing a stronger player, many try to even their chances by steering toward simple positions where they can see what is going on. My philosophy in those situations was to try to even the chances by complicating the game to the extent that neither I nor the stronger player would be able to see what was going on! Alas, this approach no longer works in the computer age. And, needless to say, my favorite all-time player is Tal.

I also have a computer background and have been following with interest the developments in computer chess since the days when computers couldn't always recognize illegal moves and a patzer like me could beat them with ease. Now it's me who can't always recognize illegal moves, and any chess program can beat me with ease.

But after about 4 years (a lifetime in computer-related activities) of playing computer-assisted chess, I think I have learned a thing or two about the subject. I have conceitedly defined "AylerKupp's corollary to Murphy's Law" (AKC2ML) as follows:

"If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N."

I'm also a food and wine enthusiast. Some of my favorites are German wines (along with French, Italian, US, New Zealand, Australian, Argentine, Spanish, ... well, you probably get the idea). Among my early favorites were wines from the Ayler Kupp vineyard in the Saar region, hence my user name. Here is a link to a picture of the village of Ayl with a portion of the Kupp vineyard on the left: http://en.wikipedia.org/wiki/File:A...

You can send me an e-mail whenever you'd like at aylerkupp gmail.com.

And check out a picture of me with my "partner", Rybka (Aylerkupp / Rybka) from the CG.com Masters - Machines Invitational (2011). No, I won't tell you which one is me.

-------------------

Analysis Tree Spreadsheet (ATSS).

The ATSS is a spreadsheet developed to track the analyses posted by team members in various on-line games (XXXX vs. The World, Team White vs. Team Black, etc.). It is a poor man's database which provides some tools to help organize and find analyses.

I'm in the process of developing a series of tutorials on how to use it and related information. The tutorials are spread all over this forum, so here's a list of the tutorials developed to date and links to them:

Overview: AylerKupp chessforum (kibitz #843)

Minimax algorithm: AylerKupp chessforum (kibitz #861)

Principal Variation: AylerKupp chessforum (kibitz #862)

Finding desired moves: AylerKupp chessforum (kibitz #863)

Average Move Evaluation Calculator (AMEC): AylerKupp chessforum (kibitz #876)

-------------------

ATSS Analysis Viewer

I added a capability to the Analysis Tree Spreadsheet (ATSS) to display each analysis in PGN-viewer style. You can read a brief summary of its capabilities here: AylerKupp chessforum (kibitz #1044), and download a beta version for evaluation.

-------------------

Ratings Inflation

I have recently become interested in the increase in top player ratings since the mid-1980s and whether this represents a true increase in player strength (and if so, why) or if it is simply a consequence of a larger chess population from which ratings are derived. So I've opened up my forum for discussions on this subject.

I have updated the list that I initially completed in Mar-2013 with the FIDE rating lists through 2016 (published in Jan-2017), and you can download the complete data from http://www.mediafire.com/file/zbrlx.... It is quite large (158 MB), and to open it you will need Excel 2007 or a later version, or a compatible spreadsheet program, since several of the later tabs contain more than 65,536 rows.

The spreadsheet also contains several charts and summary information. If you are only interested in that and not the actual rating lists, you can download a much smaller (813 KB) spreadsheet containing the charts and summary information from http://www.mediafire.com/file/k9i67.... You can open this file with a pre-Excel 2007 version or a compatible spreadsheet.

FWIW, after looking at the data I think that ratings inflation, which I define to be the unwarranted increase in ratings not necessarily accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis and it goes something like this:

1. Initially (late 1960s and 1970s) the ratings for the strongest players were fairly constant.

2. In the 1980s the number of rated players began to increase exponentially, and they entered the FIDE-rated chess playing population mostly at the lower rating levels. The ratings of the stronger of these players increased as a result of playing weaker players, but their ratings were not sufficiently high to play in tournaments, other than open tournaments, where they would meet middle and high rated players.

3. Eventually they did. The ratings of the middle rated players then increased as a result of beating the lower rated players, and the ratings of the lower rated players then leveled out and even started to decline. You can see this effect in the 'Inflation Charts' tab, "Rating Inflation: Nth Player" chart, for the 1500th to 5000th rated player.

4. Once the middle rated players increased their ratings sufficiently, they began to meet the strongest players. And the cycle repeated itself. The ratings of the middle players began to level out and might now be ready to start a decrease. You can see this effect in the same chart for the 100th to 1000th rated player.

5. The ratings of the strongest players, long stable, began to increase as a result of beating the middle rated players. And, because they are at the top of the food chain, their ratings, at least so far, continue to climb. I think that they will eventually level out but if this hypothesis is true there is no force to drive them down so they will stay relatively constant like the pre-1986 10th rated player and the pre-1981 50th rated player. When this leveling out will take place, if it does, and at what level, I have no idea. But a look at the 2016 ratings data indicates that, indeed, it may have already started.

You can see in the chart that the rating increase, leveling off, and decline first appear among the lowest ranked players, then move through the middle ranked players, and finally reach the top ranked players. It's not precise, it's not 100% consistent, but it certainly seems evident. And the process takes decades, so it's not easy to see unless you look at all the years and many ranked levels.

Of course, this is just a hypothesis and the chart may look very different 20 years from now. But, at least on the surface, it doesn't sound unreasonable to me.

But looking at the data through 2016, it is even more evident that the era of ratings inflation appears to be over. The previous year's trends have either continued or accelerated; the rating for every ranking category, except possibly the 10th ranked player (where the trend is unclear), has either flattened out or started to decline, as evidenced by the trendlines.
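
If you want to reproduce the "Rating Inflation: Nth Player" series in the charts from raw yearly rating lists, here is a minimal sketch in Python (the input file layout and the "year"/"rating" column names are assumptions, not the actual format of the spreadsheet):

import pandas as pd

RANKS = [10, 50, 100, 1000, 5000]        # Nth ranked player to track

def nth_player_series(csv_path):
    # One row per year, one column per tracked rank (rating of the Nth ranked player).
    lists = pd.read_csv(csv_path)        # assumed columns: year, rating
    rows = {}
    for year, grp in lists.groupby("year"):
        ratings = grp["rating"].sort_values(ascending=False).reset_index(drop=True)
        rows[year] = {f"rank_{n}": float(ratings.iloc[n - 1]) if len(ratings) >= n else None
                      for n in RANKS}
    return pd.DataFrame.from_dict(rows, orient="index").sort_index()

# Hypothetical usage (file name is a placeholder):
# print(nth_player_series("fide_rating_lists_1966_2016.csv"))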

-------------------

Chess Engine Non-Determinism

I've discussed chess engine non-determinism many times. If you run an analysis of a position multiple times with the same engine, on the same computer, and to the same search depth (using more than one thread), you will get different results. Not MAY, WILL. Guaranteed. Similar results have been reported by others.

I had a chance to run a slightly more rigorous test and described the results starting here: US Championship (2017) (kibitz #633). I had 3 different engines (Houdini 4, Komodo 10, and Stockfish 8) analyze the position in W So vs Onischuk, 2017 after 13...Bxd4, a highly complex tactical position. I made 12 runs with each engine; 3 each with threads=1, 2, 3, and 4 on my 32-bit 4-core computer with 4 GB RAM and MultiPV=3. The results were consistent for each engine:

(a) With threads=1 (using a single core) all 3 engines behaved deterministically. In every run each engine selected the same top 3 moves, with the same evaluations and, obviously, the same move rankings.

(b) With threads=2, 3, and 4 (using 2, 3, and 4 cores) none of the engines behaved deterministically. From run to run the same engine occasionally selected different moves, with different evaluations and different move rankings.

I've read that the technical reason for the non-deterministic behavior is the high sensitivity of the alpha-beta search used by all the top engines to the move ordering in the search tree, and the variation in this move ordering under multi-threaded operation, where each thread can be interrupted by higher-priority system processes at unpredictable times. I have not had the chance to verify this, but there is no disputing the results.

What's the big deal? Well, if the same engine gives different results each time it runs, how can you determine what the real "best" move is? Never mind that different engines of relatively equal strength (as determined by their ratings) give different evaluations and move rankings for their top 3 moves, and that the evaluations may differ as a function of the search depth.

Since I believe in the need to analyze a given position with more than one engine and then aggregate the results to reach a more accurate assessment, I used to run sequential analyses of the same position using 4 threads and a 1,024 MB hash table. But since I typically run 3 engines, I found it more efficient to run all 3 engines concurrently, each with a single thread and a 256 MB hash table (to prevent swapping to disk). Yes, running with a single thread is about half the speed of running with 4 threads, but running the 3 engines sequentially takes about 3X the time of a single analysis while running them concurrently takes only about 2X, roughly a one-third reduction in the time to run all 3 analyses to the same depth, and it resolves the non-determinism issue as well.

So, if you typically run analyses of the same position with 3 engines, consider running them concurrently with threads=1 rather than sequentially with threads=4. You'll get deterministic results in less total time.
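
Here is a minimal sketch of this kind of setup using the python-chess library (just an illustration; the engine paths and the position below are placeholders):

import chess
import chess.engine
from concurrent.futures import ThreadPoolExecutor

FEN = chess.STARTING_FEN                                   # substitute the position you want to analyze
ENGINE_PATHS = ["./houdini", "./komodo", "./stockfish"]    # placeholder engine paths

def analyse(path, depth=24):
    # One engine process, single-threaded (Threads=1) so its output is reproducible,
    # with a 256 MB hash table and the top 3 lines (MultiPV=3).
    engine = chess.engine.SimpleEngine.popen_uci(path)
    try:
        engine.configure({"Threads": 1, "Hash": 256})
        infos = engine.analyse(chess.Board(FEN), chess.engine.Limit(depth=depth), multipv=3)
        return path, [(str(info["score"].white()), info["pv"][0].uci()) for info in infos]
    finally:
        engine.quit()

# The three single-threaded engines run side by side instead of one after another.
with ThreadPoolExecutor(max_workers=len(ENGINE_PATHS)) as pool:
    for path, lines in pool.map(analyse, ENGINE_PATHS):
        print(path, lines)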

-------------------

Any comments, suggestions, criticisms, etc. are both welcomed and encouraged.

-------------------

Chessgames.com Full Member

   AylerKupp has kibitzed 10451 times to chessgames   [more...]
   Nov-23-17 Team White vs Team Black, 2017 (replies)
   Nov-09-17 Korchnoi vs Polugaevsky, 1980 (replies)
 
AylerKupp: <Howard> Informant 30 is very old (1981). According to the 7-piece Lomonosov tablebases, after 53.h4 Rxe2 the position is a draw.
 
   Nov-08-17 Short vs Vaganian, 1989 (replies)
 
AylerKupp: Very simple, maybe too simple for a Wednesday, but I didn't see it. I looked at 51.Qc3+ K(any) 52.Qxg2+ Kxg2 53.a5(!) and I figured that would win with a pawn advantage in spite of the BOC. But, of course, the game continuation is much, much better.
 
   Nov-07-17 chessgames.com chessforum (replies)
 
AylerKupp: <chessgames.com> How can I contact <Administrator> for Team Black concerning his clarification on computer use? I'm copying the pertinent portion since I don't want to post a link to the Team Black page: <Because of the integration with Stockfish and the Opening ...
 
   Nov-05-17 Anand vs Beliavsky, 1991 (replies)
 
AylerKupp: <beenthere240> Not to mention the Nf3 after either 45.Qxc5 Nxc5 46.bxc5 (or 46.dxc5) 46...Rxf3 or 45.bxc5 (or 45.dxc5) 45...Nxc3 46.Kxc3 Rxf3+.
 
   Nov-02-17 Friedrich vs P Leisebein, 1987
 
AylerKupp: <BadTemper> I didn't see it either. But per Stockfish 8 Black is threatening mate in 6 by 15...Qxg3+ 16.Rf2 Nxf3+ 17.Qxf3 Bxf2+ 18.Kf1 Qxf3 19.d3 Bg3+ 20.Kg1 Qf1#, so White needs to take action. At d=27 Stockfish 8 indicates that Black mates after any White move except 14.d3; ...
 
   Oct-31-17 Thematic Challenge chessforum (replies)
 
AylerKupp: Well, if the voting holds up, I'll have to flush all my preparations to play the White side of the Evans Gambit. Which is too bad since I actually have a book on it which <claims> that indeed it's White who has the advantage. I had bought the book some time ago and it's ...
 
   Oct-27-17 Wei Yi vs Ivanchuk, 2017 (replies)
 
AylerKupp: <<et1> Real champions and Ivanchuk most than everybody they do not calculate odds of winning or drawing and adjust openings accordingly. They just play. They know they may win whatever they play.> Well, I can't say for sure since I'm obviously not at the top level or ...
 
   Oct-27-17 Fischer - Spassky World Championship Match (1972) (replies)
 
AylerKupp: <diceman> I'm not sure what you mean by "Since this is six years before Spassky it makes me believe Fischer even more." Do you think that sensitivity to noise increases, decreases, or stays approximately the same as one gets older? At any rate, I hope that you are selective
 
   Oct-26-17 Mephisto vs NN, 1770 (replies)
 
AylerKupp: <vonKrolock> If you attribute the game to Mephisto and change the date from 1770 to 1883 that would make NN 113 years older. So you must admit that Black played very well for a player of such an advanced age!
 
(replies) indicates a reply to the comment.

De Gustibus Non Disputandum Est (there is no disputing about taste)

Kibitzer's Corner
Nov-11-16
Premium Chessgames Member
  Tiggler: Got the message that you are interested, and I do remember my offer to put together the explanation.

Just now I'm pondering how well to fulfill that commitment. It deserves a well-crafted dissertation. A few hints, with an invitation to ask further questions, is the minimum default.

The longer you have to wait for my response, the more you are entitled to a big effort.

Nov-11-16
Premium Chessgames Member
  Tiggler: Here is the first essential link: https://en.wikipedia.org/wiki/Marko...

The Elo approach assumes:

(1) that there is a Markov process at work: there is a current state and a set of transition probabilities to the next state that depend only on the current state.

(2) that the current state is fully described by a set of numbers that are the rating of each player.

(3) that the transition probabilities are dependent only on the differences between ratings of the two players in each game. The probabilities do not depend on the location parameter: https://en.wikipedia.org/wiki/Locat...
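
As a quick illustration of assumption (3), here is a sketch using the standard Elo expected-score formula (the numbers are arbitrary): only the 100-point gap matters, not where on the scale the pair sits.

def expected_score(r_a, r_b):
    # Standard Elo (logistic) expected score for player A against player B.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

print(expected_score(2300, 2200))   # about 0.64
print(expected_score(2800, 2700))   # identical: same 100-point difference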

Let's pause while I decide what is the next step and you digest that.

Nov-12-16
Premium Chessgames Member
AylerKupp: <Tiggler> Don't trouble yourself, don't feel bound by your offer, and, above all, don't let it be a burden to yourself. I am interested but I am in no rush to find out the information. After all, the current rating system will likely be with us for a long time.

So, yes, let's definitely pause and you can get back to it whenever you have both the time and the inclination. In the meantime, I'll look at the link you provided and refresh my knowledge of Markov chains.

Nov-12-16
Premium Chessgames Member
  Tiggler: I need to insert some discussion of items (1), (2) and (3) of the previous post. Are these assumptions, approximations, or what?

I chose to view them as axioms. The rating scale is generated from them. For example, concerning (3): why should we believe it is the case that the transition probabilities associated with a game between a 2200 and a 2300 player are the same as those associated with a game between a 2700 and a 2800 player? We don't have to believe it, because we have asserted it as an axiom. Thus we can say that the rating interval between a pair of ratings is defined as the interval that corresponds to a given set of transition probabilities. The scale is generated from this assumption, just as the Celsius temperature scale was defined by the requirement that the change in resistance of a platinum thermometer between the ice point and the boiling point of water corresponds to 100 degrees, and therefore 200 Celsius is defined as the temperature that produces this same change again above the boiling point of water.

One snag, however, is that we do not have two fixed points! So instead we choose some arbitrary value of the expected game score between players of a given rating difference, which in turn defines the state transition probabilities of our Markov process.

Pause for thought ...

Nov-12-16
Premium Chessgames Member
  Tiggler: At this point it is time to mention two crucial examples of the Markov process: Wiener process and Ornstein-Uhlenbeck process. They were explained to me in 2012 by <Gypsy> on this page: Hans Arild Runde

Wiki's explanations are here:
https://en.wikipedia.org/wiki/Wiene... and https://en.wikipedia.org/wiki/Ornst...

Nov-12-16
Premium Chessgames Member
  Tiggler: And next we need to know about Martingales: https://en.wikipedia.org/wiki/Marti...

The feature of Martingales that makes them relevant to our discussion is this:

Consider the universe of rated games of chess as contests for rating points. If I want to bet on the result, I might use the "expected scores" that are the ones "predicted" by the rating procedure. If the actual <expected value> https://en.wikipedia.org/wiki/Expec... is equal to the one used in the rating procedure, then the contest for rating points has the expectation that the gain/loss of points by each player is, on average, zero. The contest is a fair game, and therefore the process is a Martingale.

Nov-14-16
Premium Chessgames Member
  Tiggler: Almost there:

If the stochastic process defined by chess games, chess ratings, and FIDE rating regulations is a Martingale, then we can invoke the Martingale Central Limit Theorem:

https://en.wikipedia.org/wiki/Marti...

This says that as the number of steps (games, tournaments, whatever) increases, the change in ratings from their initial values tends to a Gaussian distribution with zero mean and a variance that is proportional to the number of steps.

This proves my statement made half a page above, AylerKupp chessforum

"If all players perform on average according to their current rating with random variation in accordance with the distribution used to generate their new ratings, then the population rating distribution must necessarily diverge. The population distribution will assume a gaussian shape and will keep a constant mean, but the standard deviation of the population ratings diverges and will NECESSARILY increase without limit."

Nov-14-16
Premium Chessgames Member
  Tiggler: There are many interesting corollaries, concerning, for example, rating floors; 400-point rule etc.

Also, I have not proved that ratings actually perform this way, because the proof depends on the assumption that players actually perform according to their ratings. If not, then the process is not a Martingale.

Before discussing an alternative, the Ornstein-Uhlenbeck process https://en.wikipedia.org/wiki/Ornst..., I'd like some feedback.

Nov-26-16
Premium Chessgames Member
  AylerKupp: <Tiggler> Sorry, but I've been busy with several personal obligations and I haven't had much time to devote to chess. And what little time I've had has been devoted to following the Carlsen - Karjakin match.

One obvious comment: players don't necessarily perform according to their ratings. In every tournament there are players who perform better than expected and players who perform worse than expected. If that wasn't the case then there wouldn't be a point in having tournaments or matches; the winners would be known beforehand.

So any "proof" must be probabilistically based on the spread of player's performance, and I'm not sure if that is possible or meaningful.

Nov-26-16
Premium Chessgames Member
  Tiggler: <AylerKupp>

It is obvious of course that players cannot perform exactly according to their rating "expected score", except in a statistical sense. What I said before assumes this:

<If the actual <expected value> is equal to the one used in the rating procedure, then the contest for rating points has the expectation that the gain/loss of points by each player is, on average, zero. The contest is a fair game, and therefore the process is a Martingale.>

Dec-24-16
Premium Chessgames Member
  Golden Executive: Merry Christmas and a Happy New Year 2017 to you and yours <AylerKupp>!
Dec-24-16
Premium Chessgames Member
  WinKing: Merry Christmas to you <AK>! :)
Jan-17-17
Premium Chessgames Member
  Golden Executive: Happy Birthday <AylerKupp>!
May-17-17
Premium Chessgames Member
zanzibar: <AK> - idle curiosity, but is your avatar an image of a wine label?

Or what?

.

Jun-07-17
Premium Chessgames Member
AylerKupp: <zanzibar> Yes, my avatar is a copy of a wine label from the Ayler Kupp vineyard in the Saar Valley, Germany. German wines are some of my favorites, and the wines from the Ayler Kupp vineyard were among the first German wines from the Saar region that I tried and liked very much.

Alas, the label did not come out too clearly. As <morfishine> remarked, it looks like a washed-up diploma. And he is right. I tried to make it sharper by changing all the gray pixels to black pixels but that was very time consuming because it was a pixel-by-pixel operation with the archaic software that I have. I never finished it and now, as a result of my oft-mentioned disk crash and data loss, I lost what little I had done. Maybe some day (doubtful) I will be sufficiently motivated to start over. Perhaps if I drink enough wine ...

Sep-01-17
Premium Chessgames Member
  cro777: <AylerKupp> If you have an idea what else can be mined from this data set:

http://blog.scottlogic.com/2017/09/...

Sep-20-17
Premium Chessgames Member
AylerKupp: <cro777> Thanks for the link. I just saw this post since I don't check my forum very often. It's a lot to digest and it will take me a while since I have several other things to attend to, but I'll get back to you, hopefully soon.
Sep-23-17
Premium Chessgames Member
  visayanbraindoctor: <Why do you believe that Ding Liren is underrated?>

I decided to answer in your forum in order to avoid rating-robots (kibitzers who are obsessed with ratings) butting in with repetitive strawmen and red herring arguments. In brief, Ding has played a lot of games in recent years in Chinese tournaments, which have more lower rated players. I expect he doesn't gain as many points (and occasionally might even lose points) as he would have had he always been playing in Europe, while playing chess of the same quality. I believe that if he had been playing purely in Europe for the past few years, his rating would be higher than So's.

My opinion on Elo ratings and their use in the World Championship cycle is summarized in my profile, entry #8.

<8. On ratings:

Elo ratings reflect relative and not absolute chess strength.

Chessplayers are naturally arranged in populations partitioned by geopolitical regions & time periods that have infrequent contacts with one another. Within such a population, players get to play each other more frequently, thus forming a quasi-equilibrium group wherein individual ratings would tend to equilibrate quickly; but not with outside groups. With caveats & in the proper context, FIDE/Elo ratings are simply fallible descriptors & predictors of an active player's near-past & near-future performances against other rated players, & only within the same quasi-equilibrium group.

As corollaries: the best way to evaluate a player's strength is to analyze his games & not his ratings; one cannot use ratings to accurately compare the quality of play of players from the past and present, or even the same player say a decade ago and today; & care should be taken in the use of ratings as a criterion in choosing which players to seed into the upper levels of the WC cycle. All the above often entail comparisons between players from different quasi-equilibrium groups separated by space and/or time.>

Sep-23-17
Premium Chessgames Member
  visayanbraindoctor: Nice profile!

<after looking at the data I think that ratings inflation, which I define to be the unwarranted increase in ratings not necessarily accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis>

Thanks for this fascinating hypothesis.

For myself, I believe that the top players of the past, such as <Fischer, Reshevsky, Spassky, Petrosian> whom you mentioned, are every bit as good as the top players today. Fischer IMO is even better than Carlsen. I base my opinion on a gut-level feeling after studying their games. Fischer, for example, plays better chess than anyone else today.

Yet Fischer's highest rating was only 2789, only good for #18. This by itself IMO constitutes unambiguous evidence that rating inflation exists.

I grew up in the Karpov era. I've followed his games over the years. I am convinced he was a stronger player in the 1970s and early 1980s. It would have been impossible for him to have a higher rating in the 1990s if rating inflation did not exist.

I actually believe that Capablanca in his prime would beat even Fischer and Karpov. After studying many of Capa's games, I concluded that I would never believe that a human being could play that many nearly errorless complicated games in real time, if they were not fully documented to have been played by a human being. (For example see my post in Jonathan Sarfati chessforum, and my analysis in many of Capablanca's game pages.) He had no Elo rating.

Yet many of today's rating-robots would ignore these top players of the past just because they never hit 2800.

Sep-24-17
Premium Chessgames Member
AylerKupp: <visayanbraindoctor> Thanks for taking the time to answer and for the nice words about my profile. I'm not that familiar with Ding Liren's games and I wasn't even sure that he had his own page, but I'm glad to see that he does. Although, with only 19 pages of kibitzing, it's clearly not that well known or popular.

I pretty much agree with (and knew) most of your observations about ratings. One thing I had not thought of in the past is the localization of players and how it influences their ratings. It made me think of a recent book I was reading about linear algebra, which characterized sparse matrices by whether they had block regions: regions of the matrix with many non-zero elements in a few localized and adjacent rows and columns, but few non-zero elements in the rest of the rows and columns. Apparently there are special algorithms that can more efficiently solve systems of linear equations whose coefficient matrices have this structure.

The problem of not being able to compare the relative strengths of players from different eras, at least with the Elo rating system, because they belong to different populations, is well known. It's too bad that Jeff Sonas abandoned his calculation of Chessmetrics rankings in 2005, although I'm still not sure of its validity for comparing the relative strengths (i.e. ratings) of players from different eras and populations.

I'm not sure about the feasibility of determining players' strengths by looking at their games, at least for a lot of players. In my case the obvious problem is my personal inability (because of my low playing strength) to determine the true quality of their play. Many attempts have been made to base this evaluation on comparing these top players' moves with the moves suggested by various computer engines but, in my opinion, all these attempts have been seriously flawed and it's not worth attaching much value to their conclusions.

It's also important to realize that today's top players have more tools at their disposal, mostly computer engines, databases, and tablebases, than earlier players did. And they also have more opportunities to play in tournaments against other top players, and that can't help but improve their game. So they may indeed be better, in the sense of making fewer mistakes, than players from older eras, and their higher ratings may just reflect that. I just don't know. But I have no doubt that if the top players from other eras (the Capablancas, Alekhines, Fischers, Spasskys, etc.) were somehow transported into the current time and given adequate time and exposure to current chess analysis tools, they would be able to hold their own against today's best players.

You might be interested in downloading the summary spreadsheet from the link in my forum header. It has a lot of charts comparing the ratings of players at different rating levels since 1966. I update it once a year, so I will be doing that in Jan-2018 and it will be interesting to see if ratings inflation has indeed plateaued and is heading downwards for all rating levels. I had predicted, based on trends, that Carlsen's rating would fall below 2800 this year but, although it seemed headed in that direction, it looks like that was premature.

Sep-30-17
Premium Chessgames Member
visayanbraindoctor: <it will be interesting to see if ratings inflation has indeed plateaued and is heading downwards for all rating levels>

This will be interesting indeed. However, I'm not a mathematician, and so I will take your word for it when <ratings inflation has indeed plateaued>.

I do know that if the top players confine themselves to playing mostly each other, then they will form a quasi-equilibrium group that will maintain their current high ratings, regardless of the quality of their games or chess strength. But you surely will know how to factor this in your calculations.

<So they may indeed be better, in the sense of making fewer mistakes, than players from older eras, and their higher ratings just reflect that.>

This is the crux of the issue. I used to think this way too. Then Bridgeburner and I made a detailed study of the Lasker - Schlechter World Championship Match (1910).

Since we had to go through their games move by move, as though they were playing in real time, every brilliancy and error of theirs hit us with as much impact as seeing modern GM games being played live on the internet. I soon came to subjectively realize that Lasker and Schlechter were playing their middlegames and endgames more or less as well as modern champions, as objectively confirmed by a computer.

I believe that it's their openings that are objectively worse (in the sense of being less accurate and more dubious or <more mistakes> as you say) compared to today's. However, once they got out of the book and into the middlegame, they were every bit as good as today's best players.

<the top players from other eras; the Capablancas, Alekhines, Fischers, Spasskys, etc. were somehow transported into the current time and given adequate time and exposure to current chess analysis tools that they would be able to hold their own against today's best players.>

A quick-game genius like Capablanca, I believe, would win the world blitz and rapid championships more than 50% of the time, even without modern opening preparation, simply by deploying quiet openings such as the QGD, Spanish, or Italian, and then out-blitzing his opponent in a more or less equal middlegame.

Carlsen (and Karpov in his heyday) does exactly this stuff even in classical time controls. It's ironic but it seems to be Carlsen fans that keep on claiming that Carlsen plays better because he was born in the computer age, when among top players, he is the one most likely to play 1920s 'classical' openings and eschew sharp computer opening lines. (He rarely plays Indian openings and asymmetrical openings such as the Sicilian. Faced with the Sicilian himself, he opts to steer it into 'closed' variations, as he did in his matches with Anand. Carlsen is extremely 'classical' in his approach to openings, preferring to directly occupy the center with his pawns, rather than control it indirectly by fianchettoes or by counterpunching asymmetrical openings. The way he plays his openings is similar to a 1920s master.)

Regarding Capablanca, if he were transported to the modern era, he would probably play his openings exactly like Carlsen's: get right into a 'safe' semi-closed or closed middlegame, and then hope to outplay his opponent. Capa would probably be just as successful too, and perhaps more so, as I have reason to believe that he was a better tactician than Carlsen.

Alekhine on the other hand would prepare the sharpest of openings, and he would be overjoyed to have computers assist him. Alekhine from all accounts had an eidetic chess memory. It would be no problem for him to update himself on the sharpest opening variations in short order from a laptop. The chess world would soon see him blasting his opponents off the board with non-stop sacs and brilliancies, exactly like Kasparov, live on the internet.

There is another thing that I've noticed. The stronger individual kibitzers are, the more they tend to think that rating inflation exists (but not all of course). It's mostly the (pardon the expression) patzers in CG.com that tend to think that ratings reflect absolute chess strength and totally deny any form of rating inflation. They can go through Carlsen vs Bu Xiangzhi, 2017 and Capablanca vs Marshall, 1918, and fail to realize that Capablanca was defending the position and handling its tactics (in a similar situation) better than Carlsen, on the assumption that since Capablanca had no Elo rating, he would not be able to do such a thing better than Carlsen.

It's like for them, chess has been reduced to ratings. When they see two chess players play a game, they see only how the players' respective ratings can change and fail to see the game itself.

Oct-17-17
Premium Chessgames Member
  Octavia: < The more you reply to him, the more trash he'll post.

I'm not sure about that. I stopped responding to him for a while (i.e. taking the bait) and it didn't seem to reduce his posting volume.> Of course, he'll keep on posting hoping for some others to answer him. If nobody answered he'd stop eventually.

You don't need to worry about others believing him. What does it matter?

Nov-01-17
Premium Chessgames Member
takchess: AK, your notes on computing reminded me of Complexity Theory:

In the 1950s and 1960s, American meteorologist Edward Lorenz found that small rounding errors in his computer data (which had a limited number of significant figures) lead to large non-linear instabilities that expand exponentially in time and make long-term prediction impossible. This is the famous "butterfly wings in Beijing" effect discovered in weather prediction.

The above was found at this link: http://www.informationphilosopher.c...

Nov-02-17
Premium Chessgames Member
  Boomie: <I'm not sure about the feasibility of determining players' strengths by looking at their games.>

Computers can measure the tactical strengths of players only. They are oblivious to psychology, aesthetics, and other human factors that raise the game to the level of an art.

One measure which hasn't been mentioned here is the opinion of world champions and other strong players. For example, Capa, who was not effusive in his praise of other players, said he was flattered to be considered as talented as Morphy. Fischer worshipped Morphy and Botvinnik praised him. I suggest that their opinions carry more weight than pages of computer screed. They all knew that Morphy would be a formidable opponent in their times. Plus I'd wager that they would all love the opportunity to play him.

Nov-02-17
Premium Chessgames Member
  takchess: Aagaard in his Attacking Manual 1 and 2 has some interesting views on chess computer analysis. Worth checking out.