Chessgames.com User Profile Chessforum
AylerKupp
Member since Dec-31-08 · Last seen Apr-25-15
About Me (in case you care):

Old timer from the Fischer, Reshevsky, Spassky, Petrosian, etc. era. Active while in high school and early college, but not much since. Never rated above the low 1800s and highly erratic; I would occasionally beat much higher rated players and equally often lose to much lower rated players. Highly entertaining combinatorial style; everybody liked to play me since they were never sure what I was going to do (neither did I!). When facing a stronger player, many try to even the chances by steering towards simple positions where they can see what is going on. My philosophy in those situations was to try to even the chances by complicating the game to the extent that neither I nor the stronger player would be able to see what was going on! Alas, this approach no longer works in the computer age. And, needless to say, my favorite all-time player is Tal.

I also have a computer background and have been following with interest the developments in computer chess since the days when computers couldn't always recognize illegal moves and a patzer like me could beat them with ease. Now it's me who can't always recognize illegal moves, and any chess program can beat me with ease.

But after about 4 years (a lifetime in computer-related activities) of playing computer-assisted chess, I think I have learned a thing or two about the subject. I have conceitedly defined "AylerKupp's corollary to Murphy's Law" (AKC2ML) as follows:

"If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N."

I'm also a food and wine enthusiast. Some of my favorites are German wines (along with French, Italian, US, New Zealand, Australian, Argentine, Spanish, ... well, you probably get the idea). One of my early favorites was the wine from the Ayler Kupp vineyard in the Saar region, hence my user name. Here is a link to a picture of the village of Ayl with a portion of the Kupp vineyard on the left: http://en.wikipedia.org/wiki/File:A...

You can send me an e-mail at aylerkupp(at)gmail.com whenever you'd like.

And check out a picture of me with my "partner", Rybka (Aylerkupp / Rybka) from the CG.com Masters - Machines Invitational (2011). No, I won't tell you which one is me.

-------------------

Analysis Tree Spreadsheet (ATSS).

The ATSS is a spreadsheet developed to track the analyses posted by team members in various on-line games (XXXX vs. The World, Team White vs. Team Black, etc.). It is a poor man's database which provides some tools to help organize and find analyses.

I'm in the process of developing a series of tutorials on how to use it and related information. The tutorials are spread all over this forum, so here's a list of the tutorials developed to date and links to them:

Overview: AylerKupp chessforum (kibitz #843)

Minimax algorithm: AylerKupp chessforum (kibitz #861)

Principal Variation: AylerKupp chessforum (kibitz #862)

Finding desired moves: AylerKupp chessforum (kibitz #863)

Average Move Evaluation Calculator (AMEC): AylerKupp chessforum (kibitz #876)

-------------------

ATSS Analysis Viewer

I added a capability to the Analysis Tree Spreadsheet (ATSS) to display each analysis in PGN-viewer style. You can read a brief summary of its capabilities here: AylerKupp chessforum (kibitz #1044), and download a beta version for evaluation.

-------------------

Chess Engine Evaluation Project

Some time ago I started but then dropped a project whose goal was to evaluate different engines' performance in solving the "insane" Sunday puzzles. I'm planning to restart the project with the following goals:

(1) Determine whether various engines were capable of solving the Sunday puzzles within a reasonable amount of time, how long it took them to do so, and what search depth they required.

(2) Classify the puzzles as Easy, Medium, or Hard from the perspective of how many engines successfully solved the puzzle, and determine whether any particular engines excelled at the Hard problems.

(3) Classify the puzzle positions as Open, Semi-Open, or Closed and determine whether any engine excelled at one type of position where other engines did not.

(4) Classify the puzzle positions as characteristic of the opening, middle game, or end game and determine which engines excelled at one phase of the game vs. another.

(5) Compare the evals of the various engines to see whether one engine tends to generate higher or lower evals than other engines for the same position.

If anybody is interested in participating in the restarted project, either post a response in this forum or send me an email. Any comments, suggestions, etc. are very welcome.
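As a starting point for goal (1), here is a minimal harness sketch, assuming the python-chess package and a UCI engine binary (e.g. "stockfish") on the PATH; the puzzle list is a placeholder to be filled with the actual Sunday puzzle positions:

import chess
import chess.engine

# Placeholder list of (FEN, expected first move in UCI notation) pairs;
# the mate-in-one below is just a smoke test, not a Sunday puzzle.
PUZZLES = [
    ("k7/8/1K6/8/8/8/8/7R w - - 0 1", "h1h8"),
]

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    for fen, expected in PUZZLES:
        board = chess.Board(fen)
        info = engine.analyse(board, chess.engine.Limit(time=60))  # cap the think time
        best = info["pv"][0]
        print("solved" if best.uci() == expected else "missed",
              "| depth:", info.get("depth"),
              "| eval:", info["score"].white())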

-------------------

Ratings Inflation

I have recently become interested in the increase in top player ratings since the mid-1980s and whether this represents a true increase in player strength (and if so, why) or if it is simply a consequence of a larger chess population from which ratings are derived. So I've opened up my forum for discussions on this subject.

I have updated the list that I initially completed in Mar-2013 with the FIDE rating list through 2014 (published in Jan-2015), and you can download the complete data from http://www.mediafire.com/view/4yin1...(complete).xlsx. It is quite large (116 MB) and to open it you will need Excel 2007 or later (or a compatible spreadsheet program), since several of the later tabs contain more than 65,536 rows.

The spreadsheet also contains several charts and summary information. If you are only interested in that and not the actual rating lists, you can download a much smaller (709 KB) spreadsheet containing the charts and summary information from here: http://www.mediafire.com/view/m5z97...(summary).xls. You can open this file with a pre-Excel 2007 version or a compatible spreadsheet.

FWIW, after looking at the data I think that ratings inflation, which I define to be the unwarranted increase in ratings not necessarily accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis and it goes something like this:

1. Initially (late 1960s and 1970s) the ratings for the strongest players were fairly constant.

2. In the 1980s the number of rated players began to increase exponentially, and they entered the FIDE-rated chess playing population mostly at the lower rating levels. The ratings of the stronger of these players increased as a result of playing weaker players, but their ratings were not sufficiently high to play in tournaments, other than open tournaments, where they would meet middle and high rated players.

3. Eventually they did. The ratings of the middle rated players then increased as a result of beating the lower rated players, and the ratings of the lower rated players then leveled out and even started to decline. You can see this effect in the 'Inflation Charts' tab, "Rating Inflation: Nth Player" chart, for the 1500th to 5000th rated player.

4. Once the middle rated players increased their ratings sufficiently, they began to meet the strongest players. And the cycle repeated itself. The ratings of the middle players began to level out and might now be ready to start a decrease. You can see this effect in the same chart for the 100th to 1000th rated player.

5. The ratings of the strongest players, long stable, began to increase as a result of beating the middle rated players. And, because they are at the top of the food chain, their ratings, at least so far, continue to climb. I think that they will eventually level out, but if this hypothesis is true there is no force to drive them down, so they will stay relatively constant like the pre-1986 10th rated player and the pre-1981 50th rated player. When this leveling out will take place, if it does, and at what level, I have no idea. But a look at the 2013 ratings data indicates that, indeed, it may have already started.

You can see in the chart that the rating increase, leveling off, and decline first starts with the lowest ranking players, then through the middle ranking players, and finally affects the top ranked players. It's not precise, it's not 100% consistent, but it certainly seems evident. And the process takes decades so it's not easy to see unless you look at all the years and many ranked levels.

Of course, this is just a hypothesis and the chart may look very different 20 years from now. But, at least on the surface, it doesn't sound unreasonable to me.

But looking at the data through 2014 it is even more evident that the era of ratings inflation appears to be over. The previous year's trends have either continued or accelerated; the rating for every ranking category, except possibly the 10th ranked player (where the trend is unclear), has either flattened out or started to decline.

Any comments, suggestions, criticisms, etc. are both welcomed and encouraged.

-------------------

Chessgames.com Full Member

   AylerKupp has kibitzed 7732 times to chessgames
   Apr-25-15 Shirov vs Judit Polgar, 1994 (replies)
 
AylerKupp: An even better game title would have been Shirov Me, Shirov Me Not.
 
   Apr-24-15 Gashimov Memorial (2015) (replies)
 
AylerKupp: <<dumgai> Yeah, when's the last time Carlsen had a minus score in a classical tournament?> If you look at this site, http://www.mark-weeks.com/aboutcom/... and if I didn't make a mistake while glancing at it, it looks like the last time was at the Oct-2010 Grand Slam ...
 
   Apr-22-15 Wesley So (replies)
 
AylerKupp: <<Jim Bartle> So now not just <balolog> but <kalog> has been created to get around chessgame's two-post-a-day sanctions.> It's a good idea, if only it were so. I suspect that there are several posters who wish that it were so in my case. Unfortunately that may ...
 
   Apr-21-15 W So vs Akobian, 2015 (replies)
 
AylerKupp: <RookFile> Yes, So has class. And he moved on in the best possible way, scoring 5.5 out of his next 6 games. I hope that he and Akobian can become friends again.
 
   Apr-21-15 Robert James Fischer (replies)
 
AylerKupp: <Jim Bartle>, <Zonszein> Agreed, that's why I think that it will be quite a while before we have a Fischer Memorial tournament.
 
   Apr-19-15 Yehuda Malinarski
 
AylerKupp: <waustad> I think that I will change my user name to Ayler Brilliance Kupp so that I can legitimately say that "Brilliance is my middle name." That would definitely be a case of blohardism.
 
   Apr-18-15 AVRO (1938) (replies)
 
AylerKupp: <<ughaibu> Ah, but it's always Kotov who wins. If that doesn't prove collusion, nothing does.> I don't know if it proves collusion, but it probably proves that Kotov was a pretty good player. :-) Or maybe he was the designated ...
 
   Apr-15-15 US Championships (2015) (replies)
 
AylerKupp: <<Gypsy> The arbiter acted within his prerogatives. I am yet to be convinced about that <fairness> bit.> What do you consider would have been a <fair> penalty after repeated violations of the rules even though he was informed of the consequences of ...
 
   Apr-10-15 Natalia Pogonina (replies)
 
AylerKupp: <<MagnusVerMagnus> EVERYONE here that knows and has played chess at a high level hopefully understands what Grandmaster GM means> Obviously you don't. Is that your way of saying that those that have those "other" grandmaster titles did not work their @ss off for those ...
 
   Apr-08-15 The World vs Naiditsch, 2014 (replies)
 
...
 
(replies) indicates a reply to the comment.

De Gustibus Non Disputandum Est

Kibitzer's Corner
Feb-28-15
Premium Chessgames Member
  Tiggler: < I would think that if there is a 700-point rating differential between 2 players and the stronger player wins (which would be the likely case), then the fact that the rating differential is considered to be 400 points means that the winner will gain 0.91 rating points and if the full 700 point ratings differential is used the winner will gain only 0.17 points.>

I cannot follow your argument, because the correct number of rating points in the case you describe is +0.8, instead of, I guess, <0.1. My opinions are based on simulations using, guess what, Excel. These show that if all players get, on average, results that reflect their ratings, then the distribution of ratings will spread over time, with the highest ratings climbing and the lowest falling. This is just due to the statistical random walk. In order to stabilize this effect, one has to assume that the higher rated player will underperform the rating difference by about 2%. This is sufficient to stabilize the distribution. The mechanism can be referred to as a mean-regression effect.

This is the simulation result if there is no ratings floor and no 400-point rule. Introduce the 400-point rule and the simulation blows up: the highest ratings march away and reach 4000 very fast, after a few thousand iterations.
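For illustration, here is a minimal Python sketch of the kind of simulation described above, under assumed simplifications (K = 10 for everyone, logistic expected scores, uniformly random pairings, no draws). It shows the mechanism, not the actual Excel model:

import numpy as np

K = 10
rng = np.random.default_rng(0)

def expected(d):
    """Logistic expected score for the player who is d points stronger."""
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

def simulate(n_players=500, n_games=200_000, cap=None, underperform=0.0):
    """Everyone starts at 2000; pairings are uniformly random; no draws.
    cap=400 applies the 400-point rule to the rating update only;
    underperform shaves that fraction off the stronger player's true
    score to model the ~2% mean-regression effect described above."""
    r = np.full(n_players, 2000.0)
    for _ in range(n_games):
        i, j = rng.choice(n_players, size=2, replace=False)
        d = r[i] - r[j]
        d_rated = max(min(d, cap), -cap) if cap is not None else d
        e = expected(d_rated)                         # expectation used for the update
        p = expected(d) - underperform * np.sign(d)   # "true" win probability for i
        s = float(rng.random() < p)
        r[i] += K * (s - e)
        r[j] -= K * (s - e)                           # zero-sum update
    return r

for cap in (None, 400):
    r = simulate(cap=cap)
    print("cap =", cap, "-> spread (sd) =", round(float(r.std()), 1))

Without the cap the spread just creeps outward (the random walk); with the cap the stronger player keeps gaining more than the true differential warrants and the top ratings run away. The simulate(underperform=0.02) knob is the stabilizing mean-regression effect.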

Mar-04-15
Premium Chessgames Member
AylerKupp: <Tiggler> Well, I obviously didn't make myself very clear, so let me try it again. You said earlier that there are other effects besides lowering the ratings floor that are responsible for inflating the ratings, and that the most important of these is the 400-point rule. That surprised me a little bit since I didn't think that games between players with a greater than 400-point rating differential were all that common, particularly among the top players. So I decided to see the difference in rating points gained between 2 players with a (picked at random) 700-point differential.

I first tried using FIDE's on-line rating point calculator (http://ratings.fide.com/calculator_...) but, of course, that didn't help since it enforced the 400-point rule. I then used an old spreadsheet that I had lying around that (supposedly) calculated rating changes but that had an old formula and generated incorrect rating increases for the stronger player of +0.91 rating points if the 400-point rule was used and +0.17 rating points if it was not. The correct numbers (after updating my spreadsheet) are, of course, +0.8 rating points with the 400-point rule and +0.1 rating points without it. I didn't think that the 0.7 rating point difference would be significant in encouraging ratings inflation, particularly given my expectation that games with a rating differential greater than 400 points between the 2 players would be relatively rare.
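A sketch of both calculations with K = 10 (an assumption; K varies by player). The continuous logistic curve reproduces the old spreadsheet's +0.91/+0.17; the two expected-score values in the abridged lookup below (0.92 at a 400-point gap, 0.99 at a 700-point gap) are the ones implied by the corrected +0.8/+0.1 figures, consistent with FIDE's stepped expected-score table:

K = 10

def expected_logistic(d):
    """Continuous logistic curve (the 'old formula')."""
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

def fide_table(d):
    """Abridged expected-score lookup: only the two bands needed here."""
    return 0.92 if d <= 400 else 0.99

def gain_on_win(d, cap=None, table=None):
    """Rating points the stronger player gains for a win at differential d."""
    d_used = min(d, cap) if cap is not None else d
    e = table(d_used) if table is not None else expected_logistic(d_used)
    return K * (1.0 - e)

print(round(gain_on_win(700, cap=400), 2))                    # old formula, capped:   0.91
print(round(gain_on_win(700), 2))                             # old formula, uncapped: 0.17
print(round(gain_on_win(700, cap=400, table=fide_table), 2))  # table, capped:         0.8
print(round(gain_on_win(700, table=fide_table), 2))           # table, uncapped:       0.1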

But this was just a feeling, I had no data to back it up. And I don't have access to any databases that would allow me to query them for this information. The best I could do was to download the games of Carlsen, Anand, and Kasparov from the ChessTempo database (over 2.9M games) and see if my "feeling" was true for these 3 players. I discounted blitz, rapid, blindfold, and simultaneous exhibition games since these would not get included in their ratings. Here is what I found:

Carlsen: 1,630 total games, 1,103 classical games (67.7% of total), 10 classical games with a rating differential > 400 points (0.9% of classical games).

Anand: 2,918 total games, 2,038 classical games (69.8% of total), 16 classical games with a rating differential > 400 points (0.8% of classical games)

Kasparov: 1,928 total games, 1,531 classical games (79.4% of total), 26 classical games with a rating differential > 400 points (1.7% of classical games)

Of course, 3 players don't prove anything, but at least for these 3 players the % of classical games with a rating differential > 400 points is not high. But that still leaves unanswered the question as to whether this percentage, no matter how small, significantly contributes to ratings inflation.

Your simulation sounds interesting. You didn't say what your initial player rating distribution was but, if you started all the players with the same rating, I would think that if you ran it for several iterations their ratings would tend to diverge, since after the first iteration their ratings would no longer be the same and so their scoring probabilities would be different. And I can understand why introducing the 400-point rule would cause the simulation to blow up; once the rating differential exceeded 400 points, the rating points won by the higher-rated player would be more than deserved by the true rating differential, and the greater the differential the more undeserved rating points the higher-rated player would get. This certainly sounds like a recipe for instability.

I don't know what the motivation is for having the 400-point rule. One source I found indicated that it has only been in effect since 2009 and that prior to that a 350-point rule was in effect from 2006-2008, and that it was proposed by Anand. But here, http://en.chessbase.com/post/elo-od..., John Nunn is quoted as saying that he believes it originated in the 1980s as a result of a phenomenon whereby large rating differentials, combined with calculating ratings based on the average rating of the players in the tournament, caused players to lose rating points even by winning games. But it seems to me that revising the ratings after each game is played, without waiting until the end of the tournament, solves that problem, and that is easily done with the ready availability of computers.

Mar-06-15
Premium Chessgames Member
Tiggler: It does not surprise me that games with 400+ rating differences are not often played by the top players. Probably in Olympiads they play such games occasionally. Most such games are played in open Swiss tournaments, which are the bread and butter for GMs in the 2400-2600 range. I would guess it is not uncommon for up to half their games in such competitions to be against weak opponents. Some might play 100 games like that in a year, and that means that the 400-point rule could contribute substantially to their rating. Then they play a few games against 2700+ opponents and pass the bounty on.
Mar-06-15
Premium Chessgames Member
AylerKupp: The more I think about it the more I think that there is no reason for having a 400-point (or any other point) rule. The original reason for having an XXX-point rule was presumably to prevent a player from losing rating points even while winning games against lower rated opponents. This could presumably happen if a player's new rating was calculated at the end of a tournament using the average rating of his opponents at the tournament.

However, the reason for only calculating a player's new rating at the end of a tournament was the limited availability of computing resources. Now that computing resources are readily available and a player's new rating is calculated after every game, it is not possible for a player to lose rating points if he wins. The worst that could happen, if a player is rated 800 points or more above his opponent, is that the higher-rated player will not gain any points, because the P(Win) for a rating differential of 800 points or larger is 1.0.
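In symbols, the standard per-game update is ΔR = K × (S − E), where S is the game score and E the expected score. For a win S = 1 and E ≤ 1 by definition, so ΔR = K × (1 − E) ≥ 0: with per-game updates a win can never cost points, and the gain only drops to zero when E reaches 1.0 at the extreme differentials.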

GM Bartlomiej Macieja's ACP Report for the 2008 FIDE congress (http://www.fide.com/component/conte...) said exactly that:

"According to the present rules, a difference in rating of more than 350 points shall be counted for rating purposes as though it were a difference of 350 points. The rule played a very important role when rating changes were calculated based on the average rating of opponents. Nowadays, when all games are counted separately, it has lost its statistical value. There was a discussion if to leave it as it is or to abolish it completely, eventually a kind of compromise has been agreed and the value of 350 has been substituted by 400."

So there must be another reason why FIDE wants to keep an XXX-point rule. All I can think of is along the lines of <Appaz>'s "alien inspired" (Magnus Carlsen (kibitz #80387)) changes in the Elo system. Perhaps FIDE has decided that higher ratings will increase interest in the game and if an XXX-point rule encourages ratings inflation, then it should be retained.

But the 350-point rule was not instituted by FIDE until April 1, 2006 and the 400-point rule was not instituted until July 1, 2009. And my historical ratings data shows that ratings inflation for the 10th ranked player started in 1986 and the ratings inflation for lower-ranked players started even earlier, so the XXX-point rule could not have had any effect on ratings inflation at these earlier dates. Furthermore, the rate of inflation for the 10th through 200th ranked players did not change from 2006 through 2011, and the rate of inflation for the 500th through 5000th ranked players (except for the 1000th-ranked player) had already reversed itself prior to 2006. So I can only conclude that the XXX-point rule has not had a significant effect in contributing to ratings inflation.

Maybe FIDE should look at its own data.

Mar-06-15
Premium Chessgames Member
  Tiggler: The reason that I read somewhere was explicitly stated to be a political one: to encourage strong players to participate in open events.
Mar-10-15
Premium Chessgames Member
AylerKupp: That makes sense. With the 350/400 rule in effect the strongest player stands to gain more rating points by beating weaker players (and lose fewer points if he draws or loses) since, for rating points won/lost calculations, the rating difference is capped at 350/400 points.

Still, I'm not sure how significant this is. For example, Nakamura won the Masters Section of the recent Tradewise Gibraltar tournament held in Jan-2015 that was open to all with the fine score of 8.5/10. Per the FIDE Jan-2015 rating list two of his opponents were rated more than 400 points below Nakamura. Per my simplified calculations (I used a constant rating for each player throughout the tournament rather than recalculating each player's rating after each game) and with the 400-point rule in effect, Nakamura gained 12.6 rating points, raising his rating from 2776 at the start of the tournament to 2789 at the end. But if I had not used the 400-point rule Nakamura would have gained 11.6 rating points, only one rating point less.

Of course this is just one tournament but, if it is somewhat characteristic of open tournaments where top players participate, then I don't know how much incentive a potentially higher rating increase of this magnitude would provide for top players to compete in more open tournaments, at least when compared with the incentive (in this case) of a first place prize of 20,000 euros.

Since 1986 there has been a fairly linear average rating increase of 5.24 rating points/year for the top level (10th-ranked) player. Prior to the implementation of the 350-point rule (1986-2005) the fairly linear average increase was 5.65 rating points/year for the 10th-ranked player. Following the implementation of the 350-point rule (2006-2008), it was 9.33 rating points/year. And since the implementation of the 400-point rule (2009-2014) there has been a fairly linear average increase of 6.00 rating points/year, fairly close to the non-350/400-point rule situation of 1986-2005. So, if Tradewise Gibraltar 2015 was representative of the number of points gained by the tournament winner, I don't think that the 400-point rule has been all that significant in terms of ratings inflation at the top level.
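To make the order of magnitude concrete, here is a sketch of that kind of tournament calculation with K = 10 and the logistic curve, over a made-up 10-round field. The opponent ratings and results below are hypothetical, NOT Nakamura's actual opponents; like the simplified calculation above, each rating is held constant throughout:

# Hypothetical field: two opponents more than 400 points below the
# player's 2776, total score 8.5/10.
results = [(2300, 1), (2350, 1), (2450, 1), (2500, 0.5), (2550, 1),
           (2600, 1), (2650, 0.5), (2700, 1), (2720, 0.5), (2750, 1)]
R, K = 2776, 10

def expected(d):
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

def tournament_gain(cap=None):
    total = 0.0
    for opp, score in results:
        d = R - opp
        if cap is not None:
            d = max(min(d, cap), -cap)
        total += K * (score - expected(d))
    return total

# The two results differ by well under a point, the same flavor as the
# 12.6 vs 11.6 comparison above.
print(round(tournament_gain(cap=400), 1), round(tournament_gain(), 1))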

Mar-13-15
Premium Chessgames Member
  Tiggler: I wrote my simulation in order to try to settle such speculations. The main problem with it is that one has to assume some distribution of the ratings differences that will occur in the games. Obviously this also depends on the starting rating of the players. All I did last time I ran it was to assume random pairing: each player's opponent is chosen at random from the entire population. Obviously that's not realistic.
Mar-13-15
Premium Chessgames Member
AylerKupp: I just updated my ratings spreadsheets to include the number of players in each rating interval from 1000 to 2800 in 100-point intervals (previously I had only included the number of players in each rating interval from 2200 to 2800). You can download the summary version (that's all you need unless you want the complete version, which includes every yearly FIDE rating list since 1966) from the links in my forum's header. This would enable you (if you are properly motivated) to seed your simulation with the actual rating distributions. You can probably still assume a random pairing for each game, although that is probably not realistic either, since 2600-level players don't typically play 1600-level players except in Swiss tournaments, and even there it is not random, since after the first few rounds the top-level players (who likely have the top scores at that point) play the other top-level and higher-ranked players in the later rounds. But maybe you can model this type of pairing also.
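A sketch of that seeding step (Python/numpy). The percentages below are illustrative placeholders; the real 100-point-interval counts would come from the spreadsheet described above:

import numpy as np

rng = np.random.default_rng(1)

bin_lows = np.arange(1000, 2800, 100)   # 1000-1099, 1100-1199, ..., 2700-2799
weights  = np.array([3, 6, 9, 11, 12, 12, 11, 10, 8, 6, 4, 3, 2,
                     1.2, 0.6, 0.3, 0.1, 0.05])  # % of the pool per bin (made up)

def seed_ratings(n_players):
    """Draw starting ratings from the binned counts instead of a flat start."""
    p = weights / weights.sum()
    bins = rng.choice(len(bin_lows), size=n_players, p=p)
    return bin_lows[bins] + rng.uniform(0, 100, size=n_players)  # uniform within a bin

ratings = seed_ratings(10_000)
print(round(float(ratings.mean())), round(float(ratings.std())))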
Mar-14-15
Premium Chessgames Member
Tiggler: I did download your spreadsheet file a couple of years ago, and looked at the distribution. For 2012, the last year I had, the distribution is fit very well by a Gaussian with mean 1892 and standard deviation 262.

Before the floor was dropped, particularly before 2003 when the floor was 1800, it had a severely distorting effect, and probably drove inflation that continued to affect the highest ratings until recently. The floor has almost no effect now, because only a very small percentage of ratings are close to it.

Mar-15-15
Premium Chessgames Member
  Tiggler: The pairings distribution would look something like a covariance matrix showing the probability of a game pairing between any two specified ratings. This could be constructed from the FIDE records of rated games for each year. A lot of work to construct it.

If one had such a matrix, it might be possible to fit some model to it, so as to avoid having to look up each set of values when generating the pairings in a simulation. That would allow a powerful simulation that ought to answer most questions about inflation.
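A sketch of what building that matrix might look like (Python/numpy). The game list is a placeholder for FIDE's actual rated-game records, with each player's rating at the time of the game:

import numpy as np

BIN, LO, HI = 100, 1000, 2900

def pairing_matrix(games):
    """Count games between 100-point rating bands, normalized to a joint
    probability: entry (i, j) is P(a game is between band i and band j)."""
    n = (HI - LO) // BIN
    m = np.zeros((n, n))
    for ra, rb in games:
        i = min(max((int(ra) - LO) // BIN, 0), n - 1)
        j = min(max((int(rb) - LO) // BIN, 0), n - 1)
        m[i, j] += 1
        m[j, i] += 1                 # symmetric: the pairing matters, not the color
    return m / m.sum()

demo = [(2750, 2320), (2705, 2680), (1850, 1790), (2420, 2050)]  # made-up games
print(pairing_matrix(demo).sum())    # 1.0: it is a probability distribution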

I'd do it for a fat fee: FIDE might even pay for it, but only if a high-profile commentator like Jeff Sonas were to propose it.

Mar-17-15
Premium Chessgames Member
  AylerKupp: Yeah, I know what you mean. Back in the 1990s the company I worked for was in competition for a software-intensive contract with Lockheed, prior to the Lockheed / Martin Marietta merger. I had worked on the software cost and schedule estimates using Barry Boehm's COCOMO model, but my management didn't think that the estimates would have any credibility coming from me. So they hired Barry Boehm himself to come up with the estimates. Barry Boehm, of course, turned over the exercise to one of his graduate students who, surprise, surprise, came up with the same numbers that I did. But perception is everything.

My latest spreadsheet looks pretty much the same as the one you downloaded a couple of years ago, with the ratings range extended downwards to 1000 and 2 additional years (2013, 2014) of data added. I had not looked at the ratings distribution but for 2014 it's a well-shaped bell curve with a peak of 12.5% at the 1800-1899 range, and only 3.3% of the players in the lowest 1000-1099 ratings range. This surprised me; I had not expected a normal distribution and thought that it would be an exponential or Gamma-like distribution with many more players rated 1000-1800 than 1900-2800. And at first I didn't know what to make of it.

But after a brief thought I think that this is because a ratings floor of 1000 has only been in effect for 3 years (2012-2014), and it takes time for a group of players to become rated at the levels near the ratings floor. So in 2012, the first year of the 1000-point rating floor, the peak was 14.9%, centered in the 1900-1999 ratings range, one range higher than two years later, but with only 1.1% of the players in the lowest 1000-1099 ratings range.

I saw a similar effect in the years 2005-2008, the years when there was a common (men's and women's) rating floor of 1400. All peaks were centered in the 2000-2099 ratings range. In 2005 the peak was 25.2% with only 0.03% of the players in the lowest 1400-1499 ratings range; in 2006 the peak was 23.9% but with 0.15% in the lowest range; in 2007 the peak was 22.1% with 0.3%; and in 2008 the peak was 17.0% but with 0.7%. In other words, each successive year the distribution was skewed a bit more towards the ratings floor than the year before. So, just as I originally thought, if the trend continues the distribution will eventually look like an exponential or Gamma distribution. But every time the ratings floor is lowered the process starts all over.

This is clearly seen if we look at the common (i.e. men's and women's) ratings distributions for 1993-2001 (common rating floor of 2000) and the men's-only ratings distributions for the years 1970-1990. In the former the leftward skewing effect is clearly seen, and in the latter the distributions for the years 1980-1990 (in 5-year increments) clearly show an exponential-like distribution. So that makes sense; as more and more players enter the ratings pool, the preponderance of players will be at the lower rating ranges. What is important to remember is the amount of time that it takes for this to happen, a minimum of 10 years or so. So, when it comes to ratings inflation, it's important to take the long view.

Mar-17-15
Premium Chessgames Member
AylerKupp: As far as generating the covariance matrix, I wouldn't know how to access FIDE's records of rated games for each year. I would hypothesize that at the higher rating levels the players mostly play in round robin tournaments, with only occasional participation in Swiss-type tournaments. As the players' ratings drop, their participation in round robin tournaments quickly drops, so that by the time we look at the mid- and lower-rated players their involvement is pretty much exclusively in Swiss tournaments. I suppose this could be modeled: say a 95%/5% round robin/Swiss tournament involvement at the 2600 level and above, 80%/20% at the 2400-2600 level, down to 0%/100% at the medium and lower rating levels. Then you could have round-robin and Swiss tournament models, populate them with the types of rating distributions expected for each type at the various levels, and calculate the expected results.
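That assumed mix as a simple lookup; the percentages are just the guesses stated above, nothing more:

def round_robin_share(rating):
    """Assumed fraction of a player's games that come from round robins;
    the remainder are taken to be Swiss-tournament games."""
    if rating >= 2600:
        return 0.95
    if rating >= 2400:
        return 0.80
    return 0.0

for r in (2700, 2500, 2200):
    print(r, round_robin_share(r))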

Like you said, a lot of work, and I am not sure that even a fat fee would tempt me to try to do it.

Mar-17-15
Premium Chessgames Member
  Tiggler: Actually, on reflection, I realize that a fat fee would turn it into drudgery, whereas as an amateur I might do it for fun!
Mar-21-15
Premium Chessgames Member
  cro777: Did Capablanca mis-assess this rook endgame?


[diagram: White to move]

Capablanca: "White here had a simple win by Rc7+, but played instead f6. Black now has a way to draw."

Orrin Frink: "Most of Capablanca's analysis is quite wrong, due to the fact that he does not realise that White cannot win with a BP and RP if Black's king gets over to the kingside."

Edward Winter: "Are the analytical and theoretical observations of Orrin Frink correct?"

What would FinalGen say?

http://www.chesshistory.com/winter/...

(Chess Notes by Edward Winter, 9177. Marshall v Rosenthal)

Mar-26-15
Premium Chessgames Member
AylerKupp: <cro777> FinalGen indicates that, from the position you listed, White wins after either 1.Rb7 or 1.h4. White only has a draw after 1.Rc7+, so Capablanca was wrong on this one.

Unfortunately the FinalGen user interface is not very forgiving; I pressed one wrong button and lost all the analysis. But it took "only" about 6 hours to calculate it, so I will re-run the analysis tonight and report on the winning lines tomorrow. Assuming, of course, that I don't make another mistake and lose the analysis again.

Mar-26-15
Premium Chessgames Member
  cro777: <AylerKupp> It will be interesting to compare the resulting lines with Capablanca's and Frink's analysis. Looks like a relatively simple endgame, but it was a hard nut to crack even for Capablanca.
Mar-26-15
Premium Chessgames Member
  cro777: Capablanca, Chess Fundamentals: Endings with One Rook and Pawns

"Endings of one Rook and pawns are about the most common sort of endings arising on the chess board. Yet though they do occur so often, few have mastered them thoroughly. They are often of a very difficult nature, and sometimes while apparently very simple they are in reality extremely intricate. Here is an example from a game between Marshall and Rosenthal in the Manhattan Chess Club Championship Tournament of 1909-1910.


[diagram: White to move]

In this position Marshall had a simple win by 1.Rc7+, but played 1.f6, and thereby gave Black a chance to draw. Luckily for Marshall, Black did not see the drawing move, played poorly, and lost. Had Black been up to the situation he would have drawn by playing 1...Rd6."

Mar-26-15
Premium Chessgames Member
  cro777: < FinalGen indicates that White wins after either 1.Rb7 or 1.h4. White only has a draw after 1.Rc7+, so Capablanca was wrong on this one.>

Marshall - Rosenthal, Manhattan Chess Club Championship Tournament 1909-1910. (Marshall won this Championship).

[diagram: White to move]

Analysis by Stockfish 6: d=52

1. (1.87): 1.Rb7 Rd5 2.f6 Rg5+ 3.Kf1 Rf5 4.f7 Kd6 5.h4 Ke6 6.Ke2 Re5+ 7.Kf3 Rf5+ 8.Kg3 Rxf7 9.Rxb4

2. (1.69): 1.Re7 Rd5 2.f6 Rf5 3.f7 Kd6 4.Rb7 Ke6 5.h4 Rxf7 6.Rb6+ Kf5 7.Rxb4 Rc7

3. (1.51): 1.Rc7+ Kd6 2.Rb7 Ke5 3.Rxb4 Kxf5 4.h4 Rc3 5.f3 Rc6 6.Kg3 Kf6 7.Kg4 Rc1

4. = (0.06): 1.h4 Rd6 2.Kg3 Rb6 3.Rc7+ Kd6 4.Rc1 b3 5.Kf4 b2 6.Rb1 Rb4+ 7.Kg5 Ke7

Mar-27-15
Premium Chessgames Member
  AylerKupp: <cro777> After re-running the analysis overnight I still couldn't get FinalGen to list the move sequences after 1.Rb7 or 1.h4. So I re-read the documentation ("When all else fails ...", etc.). It seems that there is a subtlety that I missed. The actual description of both 1.Rb7 and 1.h4 is "White wins or Draw". Since I don't really use FinalGen all that much, I foolishly assumed that this meant that in some lines White wins in spite of best play by Black and in some other lines (with less than best play by White), Black can hold the draw. This would not really be surprising.

But, noooooo. According to a response that I received from FinalGen's developer on a similar question some time ago regarding the use of Search for Draw mode, "The result is always the theoretical one, that is, considering the best moves from both sides 'White wins or draw' just means that, with a perfect play from both sides, Black cannot win the game, but the exact result cannot be determined by FinalGen. So you should consider that result as the same as 'UNSOLVED'".

There are many possible reasons, I suppose, for the position to be considered Unsolved. One likely possibility in this case is a pawn promotion by one or both sides. FinalGen does not support analyses of more than one piece by either side so, even if the pawn was immediately captured, FinalGen would just throw up its hands (figuratively, of course) and indicate that it can't solve the position. Which is a shame since, if one side were to queen and the other side could be prevented from queening, the likely result would be a win by the first side, and it seems that FinalGen could make that determination.

So, unfortunately, the answer to your original question, "What would FinalGen say?" is disappointing, and we still don't have a definitive answer whether White has a win from this position or not. Oh well, no use criticizing your tools, particularly the ones you can get for free.

FWIW, this is FinalGen's full evaluation of the initial position. Too bad that I can't include a screen shot of the results.

Rb7 White wins or Draw
h4 White wins or Draw
Rh8 Draw
Ra7 Draw
Rc7+ Draw
Re7 Draw
Rf7 Draw
Rg7 Draw
f6 Draw
h3 Draw
Kf1 Draw
f4 Draw
f3 Draw
Rh4 Black wins or Draw
Rh5 Black wins or Draw
Rh6 Black wins or Draw
Kh1 Black wins or Draw
Kg1 Black wins or Draw

Mar-27-15
Premium Chessgames Member
cro777: <AylerKupp> FinalGen's results are not all that disappointing. At least we have a partial answer: 1.Rc7+ (Capablanca) is a draw. We also know that 1.Rb7 is White's best try (1.h4 is met by 1...Rd6).

We might be closer to a complete answer by further analysing (backsolving) the following line:

1.Rb7 Rd5 2.f6 Rg5+ 3.Kf1 Rf5 4.f7 Kd6 5.h4 Ke6 6.Ke2 Re5+ 7.Kf3 Rf5+ 8.Kg3 Rxf7 9.Rxb4 Draw


[diagram: position after 9.Rxb4]

Mar-27-15
Premium Chessgames Member
Tiggler: <cro777> That is a good suggestion, and perhaps the best way to make progress at the moment. It is somehow pleasing that such a seemingly simple position remains unsolved after 100 years. Human effort, assisted by machines, is still required.
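One way to chip away at that backsolving suggestion: the position after 9.Rxb4 is down to six men, which is within range of the Syzygy tablebases. A sketch using the python-chess package; the FEN below is a reconstruction of the position after 9.Rxb4 from the line above (the diagram is not reproduced here), so it should be verified against the actual position, and the path pointed at a local tablebase set:

import chess
import chess.syzygy

# Reconstructed position after 9.Rxb4, Black to move -- verify against the
# diagram before trusting the verdict.
FEN = "8/5r2/4k3/8/1R5P/6K1/5P2/8 b - - 0 9"

board = chess.Board(FEN)
with chess.syzygy.open_tablebase("/path/to/syzygy") as tb:
    wdl = tb.probe_wdl(board)   # >0: side to move wins, 0: draw, <0: side to move loses
    dtz = tb.probe_dtz(board)   # distance to the next zeroing (pawn or capture) move
    print("WDL:", wdl, "DTZ:", dtz)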
Mar-31-15
Premium Chessgames Member
  AylerKupp: <chessmoron> Hopefully you will read this. It's been almost 4 years since the last Masters vs. Machines tournament and I hope that there is some interest among the previous participants or new participants about a next tournament. I would certainly be interested in participating.

There have been a lot of improvements in both hardware and software in the last 4 years. I would be interested in again participating since I still have (and use!) the same 32-bit computer that I had 4 years ago, as well as the same version of Rybka. It would be interesting as a benchmark to see how those two old warhorses (my 32-bit computer running Rybka 4.1) do against 64-bit, 8-core, 16 (or more) GByte RAM machines running much more modern engines like Stockfish 6, Houdini 4, and Komodo 8.

So, if you're interested in organizing a similar event in the near future, count me in and let me know. We will have to find a different venue since the Yahoo Chess Portal has been shut down, but I am sure that there are other equally good sites that we can use.

Apr-15-15
Premium Chessgames Member
  Tiggler: <AylerKupp> How come you want to make this personal?

W So vs Akobian, 2015

<Have you read the FIDE Laws of Chess? If you haven't I suggest that you do so.>

If you want to post insults again, do it on my forum.

Apr-20-15
Premium Chessgames Member
centralfiles: Hello <aylerkupp>, I'm using my forum for opening analysis; I started with the Ruy Lopez Riga variation sidelines. I would like to hear your opinion (I know of you from "The World" games) if you're interested. Thanks.
Apr-20-15
Premium Chessgames Member
centralfiles: I know of only a few reasons for rating inflation:

1. FIDE arbitrarily adding points to certain players (I don't know the details).

2. Rating floors, though this is not a reason for continued inflation once the floors have been in place for a while.

3. The 400-point rule, whose effects I just read about in Tiggler's post; again, this should not be a reason for continued inflation.

4. Simply the fact that there are more players: in a simple Gaussian curve the extremes will deviate more as the sample grows. I find this last point the most interesting, as it is hard to figure out a way to correct for it. (I'm sure it's possible, though, using some equation that takes the sample size into account.)

Are there a bunch of other reasons I'm not aware of?

