chessgames.com

 
Chessgames.com User Profile Chessforum
AylerKupp
Member since Dec-31-08 · Last seen Sep-02-15
About Me (in case you care):

Old timer from the Fischer, Reshevsky, Spassky, Petrosian, etc. era. Active while in high school and early college, but not much since. Never rated above the low 1800s and highly erratic; I would occasionally beat much higher rated players and equally often lose to much lower rated players. Highly entertaining combinatorial style; everybody liked to play me since they were never sure what I was going to do (neither did I!). When facing a stronger player, many try to even the chances by steering toward simple positions where they can see what is going on. My philosophy in those situations was to try to even the chances by complicating the game to the extent that neither I nor the stronger player would be able to see what was going on! Alas, this approach no longer works in the computer age. And, needless to say, my favorite all-time player is Tal.

I also have a computer background and have been following with interest the developments in computer chess since the days when computers couldn't always recognize illegal moves and a patzer like me could beat them with ease. Now it's me that can't always recognize illegal moves, and any chess program can beat me with ease.

But after about 4 years (a lifetime in computer-related activities) of playing computer-assisted chess, I think I have learned a thing or two about the subject. I have conceitedly defined "AylerKupp's corollary to Murphy's Law" (AKC2ML) as follows:

"If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N."

I'm also a food and wine enthusiast. Some of my favorites are German wines (along with French, Italian, US, New Zealand, Australian, Argentine, Spanish, ... well, you probably get the idea). One of my early favorites was wine from the Ayler Kupp vineyard in the Saar region, hence my user name. Here is a link to a picture of the village of Ayl with a portion of the Kupp vineyard on the left: http://en.wikipedia.org/wiki/File:A...

You can send me an e-mail at aylerkupp(at)gmail.com whenever you'd like.

And check out a picture of me with my "partner", Rybka (Aylerkupp / Rybka) from the CG.com Masters - Machines Invitational (2011). No, I won't tell you which one is me.

-------------------

Analysis Tree Spreadsheet (ATSS).

The ATSS is a spreadsheet developed to track the analyses posted by team members in various on-line games (XXXX vs. The World, Team White vs. Team Black, etc.). It is a poor man's database which provides some tools to help organize and find analyses.

I'm in the process of developing a series of tutorials on how to use it and related information. The tutorials are spread all over this forum, so here's a list of the tutorials developed to date and links to them:

Overview: AylerKupp chessforum (kibitz #843)

Minimax algorithm: AylerKupp chessforum (kibitz #861)

Principal Variation: AylerKupp chessforum (kibitz #862)

Finding desired moves: AylerKupp chessforum (kibitz #863)

Average Move Evaluation Calculator (AMEC): AylerKupp chessforum (kibitz #876)
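Since two of the tutorials above cover the minimax algorithm and the principal variation, here is a minimal, self-contained sketch of both ideas (my own illustration, not code from the ATSS or the tutorials):

```python
# Minimal minimax over a toy game tree, returning both the evaluation
# and the principal variation (the sequence of best moves for both sides).
# Illustrative sketch only; not ATSS code.

def minimax(node, maximizing):
    """node: either a number (leaf evaluation, from White's viewpoint)
    or a dict mapping move names to child nodes."""
    if not isinstance(node, dict):          # leaf: return its static eval
        return node, []
    best_score, best_pv = None, []
    for move, child in node.items():
        score, pv = minimax(child, not maximizing)
        better = (best_score is None or
                  (score > best_score if maximizing else score < best_score))
        if better:
            best_score, best_pv = score, [move] + pv
    return best_score, best_pv

# Toy tree: White to move; leaf evaluations in pawns from White's viewpoint.
tree = {"e4": {"e5": 0.3, "c5": 0.2},
        "d4": {"d5": 0.4, "Nf6": -0.1}}

score, pv = minimax(tree, maximizing=True)
print(score, pv)   # 0.2 ['e4', 'c5'] -- White picks e4, Black answers c5
```

White prefers e4 because Black's best reply to d4 (Nf6, -0.1) is worse for White than Black's best reply to e4 (c5, +0.2); the returned move list is the principal variation.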

-------------------

ATSS Analysis Viewer

I added a capability to the Analysis Tree Spreadsheet (ATSS) to display each analysis in PGN-viewer style. You can read a brief summary of its capabilities here AylerKupp chessforum (kibitz #1044) and download a beta version for evaluation.

-------------------

Chess Engine Evaluation Project

Some time ago I started but then dropped a project whose goal was to evaluate different engines' performance in solving the "insane" Sunday puzzles. I'm planning to restart the project with the following goals:

(1) Determine whether various engines were capable of solving the Sunday puzzles within a reasonable amount of time, how long it took them to do so, and what search depth they required.

(2) Classify the puzzles as Easy, Medium, or Hard based on how many engines successfully solved each puzzle, and determine whether any particular engine excelled at the Hard problems.

(3) Classify the puzzle positions as Open, Semi-Open, or Closed and determine whether any engine excelled at one type of position where other engines did not.

(4) Classify the puzzle positions as characteristic of the opening, middle game, or end game and determine which engines excelled at one phase of the game vs. another.

(5) Compare the evals of the various engines to see whether one engine tends to generate higher or lower evals than other engines for the same position.

If anybody is interested in participating in the restarted project, either post a response in this forum or send me an email. Any comments, suggestions, etc. very welcome.
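As an illustration of how goal (2) might be tallied, here is a small sketch; the Easy/Medium/Hard thresholds (two-thirds and one-third of engines solving) are my own arbitrary assumptions, not part of the project definition:

```python
# Sketch for goal (2): classify a puzzle as Easy, Medium, or Hard by the
# fraction of engines that solved it. Thresholds are arbitrary assumptions.

def classify_puzzle(results):
    """results: dict mapping engine name -> True/False (solved or not)."""
    solved = sum(results.values()) / len(results)
    if solved >= 2/3:
        return "Easy"
    if solved >= 1/3:
        return "Medium"
    return "Hard"

# Hypothetical results for one Sunday puzzle (not real data):
sample = {"Stockfish 6": True, "Komodo 9": True, "Houdini 4": False}
print(classify_puzzle(sample))   # 2 of 3 solved -> Easy
```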

-------------------

Ratings Inflation

I have recently become interested in the increase in top player ratings since the mid-1980s and whether this represents a true increase in player strength (and if so, why) or if it is simply a consequence of a larger chess population from which ratings are derived. So I've opened up my forum for discussions on this subject.

I have updated the list that I initially completed in Mar-2013 with the FIDE rating list through 2014 (published in Jan-2015), and you can download the complete data from http://www.mediafire.com/view/cmc4x...(complete).xlsx. It is quite large (116 MB) and to open it you will need Excel 2007 or later version or a compatible spreadsheet since several of the later tabs contain more than 65,536 rows.

The spreadsheet also contains several charts and summary information. If you are only interested in that and not the actual rating lists, you can download a much smaller (596 KB) spreadsheet containing the charts and summary information from http://www.mediafire.com/view/2b3id...(summary).xls. You can open this file with a pre-Excel 2007 version or a compatible spreadsheet.

FWIW, after looking at the data I think that ratings inflation, which I define to be the unwarranted increase in ratings not necessarily accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis and it goes something like this:

1. Initially (late 1960s and 1970s) the ratings for the strongest players were fairly constant.

2. In the 1980s the number of rated players began to increase exponentially, and they entered the FIDE-rated chess playing population mostly at the lower rating levels. The ratings of the stronger of these players increased as a result of playing weaker players, but their ratings were not sufficiently high to play in tournaments, other than open tournaments, where they would meet middle and high rated players.

3. Eventually they did. The ratings of the middle rated players then increased as a result of beating the lower rated players, and the ratings of the lower rated players then leveled out and even started to decline. You can see this effect in the 'Inflation Charts' tab, "Rating Inflation: Nth Player" chart, for the 1500th to 5000th rated player.

4. Once the middle rated players increased their ratings sufficiently, they began to meet the strongest players. And the cycle repeated itself. The ratings of the middle players began to level out and might now be ready to start a decrease. You can see this effect in the same chart for the 100th to 1000th rated player.

5. The ratings of the strongest players, long stable, began to increase as a result of beating the middle rated players. And, because they are at the top of the food chain, their ratings, at least so far, continue to climb. I think that they will eventually level out but if this hypothesis is true there is no force to drive them down so they will stay relatively constant like the pre-1986 10th rated player and the pre-1981 50th rated player. When this leveling out will take place, if it does, and at what level, I have no idea. But a look at the 2013 ratings data indicates that, indeed, it may have already started.

You can see in the chart that the rating increase, leveling off, and decline first starts with the lowest ranking players, then through the middle ranking players, and finally affects the top ranked players. It's not precise, it's not 100% consistent, but it certainly seems evident. And the process takes decades so it's not easy to see unless you look at all the years and many ranked levels.

Of course, this is just a hypothesis and the chart may look very different 20 years from now. But, at least on the surface, it doesn't sound unreasonable to me.

But looking at the data through 2014 it is even more evident that the era of ratings inflation appears to be over. The previous year's trends have either continued or accelerated; the rating for every ranking category, except for possibly the 10th ranked player (a possible trend is unclear), has either flattened out or has started to decline.

Any comments, suggestions, criticisms, etc. are both welcomed and encouraged.
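For readers who want to see the mechanics behind the "Bottom Feeder" hypothesis in steps 1-5, here is a toy sketch using the standard Elo expected-score and update formulas; the numbers are arbitrary assumptions for illustration, not FIDE data:

```python
# Toy sketch of the "Bottom Feeder" dynamic: an underrated entrant farms
# rating points from the bottom rung, points he later carries upward when
# he meets the next, higher-rated rung of the pool.

def expected(r_a, r_b):
    """Elo expected score of a player rated r_a against one rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(rating, opponent, score, k=20):
    """Rating after scoring `score` (between 0 and 1) against `opponent`."""
    return rating + k * (score - expected(rating, opponent))

# Underrated entrant: true strength ~1800 but initially rated 1500.
# Against accurately rated 1500 players he scores like an 1800 player
# (~0.85 per game), so his rating climbs step by step.
rating = 1500.0
true_score = expected(1800, 1500)        # ~0.85 per game vs a 1500
for _ in range(30):
    rating = update(rating, 1500, true_score)
print(round(rating))   # well above 1500 after 30 games
```

Repeating this at each rung is the "cream rises" effect of steps 3-5: ratings gained at the bottom propagate upward with a lag, while the bottom rung's ratings level off or decline.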

-------------------

Chessgames.com Full Member

   AylerKupp has kibitzed 8233 times to chessgames
   Sep-02-15 Robert James Fischer (replies)
 
AylerKupp: <TheFocus> Actually, Fischer's first thought would probably be "What's an LOL?"
 
   Sep-02-15 Team White vs Team Black, 2015
   Sep-01-15 Carlsen vs Nakamura, 2015 (replies)
 
AylerKupp: <<Petrosianic> But there was still one holdout. Finally, as it got closer to the 50 move rule, the eval suddenly dropped from 3.1 to about .67, and the last engine slave in the room shouted "Carlsen blundered!" Classic.> Yes, that was a classic. It still makes me ...
 
   Sep-01-15 Golden Executive chessforum (replies)
 
AylerKupp: Aaargh! I forgot to post my predictions for Round 9! Not that they are of interest to anyone, myself included.
 
   Sep-01-15 Sinquefield Cup (2015) (replies)
 
AylerKupp: <<devere> During the Norway tournament they were posting about some guy called "Lemon Erronian".> If Aronian draws today then I doubt that anyone will be saying that anymore, or at least not until the next major tournament. Isn't it nice to have the last laugh?
 
   Sep-01-15 AVRO (1938) (replies)
 
AylerKupp: <<offramp> Am I alone in thinking that Kasparov is a better politician than he is a chess player?> You probably aren't. But you and others who share the same opinion should consider that if Kasparov was a better politician than he is a chess player he would be the ...
 
   Aug-30-15 Stockfish (Computer) (replies)
 
AylerKupp: <<NeverAgain> d=40 seems to yield good results indeed, but at 30min to one hour per move its practical usefulness is limited.> Oh, I forgot to mention what to me was so second nature that I forgot. I don't use Stockfish or any other engine to play games at classical ...
 
   Aug-29-15 Aronian vs Carlsen, 2015 (replies)
 
AylerKupp: So looks lost after 27.Rd1 Rxf3.
 
   Aug-28-15 Robert E Byrne vs Fischer, 1963 (replies)
 
AylerKupp: <<maxi> Basically I don't trust any analysis over 14 or so plys.> A lot depends on what you mean by "trust". Will the game proceed along the lines that the engine indicated as being best by both sides? No, or at least not likely, even if the same version of the engine ...
 
   Aug-25-15 Carlsen vs Topalov, 2015 (replies)
 
AylerKupp: <Nonnus> I can speak from recent personal experience that responding to a post without reading it and understanding it properly results in a much more spirited follow-up discussion.
 
(replies) indicates a reply to the comment.

De Gustibus Non Disputandum Est

Kibitzer's Corner
Apr-28-15
Premium Chessgames Member
  AylerKupp: <<Tiggler> The problem with this is that one ought to look at a certain percentile rating, say 99th percentile, not the rating for a given numerical rank.>

I graphed both the Percentiles and Quartiles earlier and the 90th percentile player's rating decreased steadily until 1992 (the 10th ranked player's rating was roughly constant through 1986) and then began a steady increase until 2007 when it took a steep jump and then flattened out through 2014. And this jump occurred for <all> percentiles, with a smaller jump the lower the percentile (don't ask me why). There was no ratings jump for the 10th ranked player.

Per your suggestion I plotted the rating of the 99th percentile player and it behaved similarly to the 90th percentile player, except more accentuated. For the 99th percentile player the ratings decline lasted until 1984, when it began a steady increase until it again took a steep jump in 2007 and a smaller jump in 2008; it then began a slight but steady increase from 2009 through 2014. This is not too dissimilar to the line for the 10th rated player, whose rating increase began to slow down and perhaps even start a downturn in 2013.

I've updated the spreadsheets in MediaFire and the links to them in this forum's header, so you can download the summary version if you wish and compare the ratings changes of the 99th percentile player and the 10th ranked player in the 'Inflation Charts' tab. Although the MediaFire site seems to be down for the moment.

And, yes, I would also expect that the rating extremes would change if the population increases, except that's not what happened. From 1970 through 1986 the rating of the 10th player was relatively constant even though the number of rated players increased from 547 to 5,200, with the ratings floor constant. Then the rating of the 10th player began to increase at a roughly constant rate even though the ratings floor remained at 2200 until 1993. And the rate of increase remained roughly constant in spite of successive rating floor reductions. So I think that there are more complicated factors at work.

Apr-28-15
Premium Chessgames Member
  AylerKupp: <centralfiles> I think that I used the term "quantity" in a misleading way. I should have said "item" so that there was no implication that the reason that the value of the item changed was because there was a change in the quantities of the item. Cars are an example. Their prices keep going up but there have been many improvements in them; better mileage, higher reliability and safety, more features (although not all of them necessarily good!), etc. So even if you are paying more for this item, you are getting more value for your money. Whether you are getting enough value for the price increase is a different issue.

As I mentioned to <Tiggler> above, we could assume that inflation is principally the result of more players, but then I would think that the ratings increase would be roughly proportional to the number of players, and that's not what the data shows. And since the <total> number of active players continues to increase seemingly exponentially, I would suspect that the ratings would also continue to increase, even if not exponentially. But, again, that's not what the data shows; the ratings increase for the Nth player has apparently stabilized (at least for the 10th – 200th rated player) or even started to decline (for the 500th – 5000th rated player). Again, not what I would expect if there were a simple relationship between the ratings and the size of the population.

And while an inaccurate initial rating for a large sample of players might be unlikely statistically, that is exactly what has been happening as a result of repeated lowering of the ratings floor, from 2200 in 1992 to 1000 in 2012. That's why the active rated player population is increasing exponentially; it's not that the number of players playing chess is increasing exponentially, it's that the number of <rated> players is probably increasing exponentially. There are many more players rated in the 1000's than players rated in the 2200's!

And whenever new players enter the rated population their initial ratings are going to be inaccurate. So as more and more players enter the rated population we will get a large number of initially inaccurately rated players. Hopefully FIDE will not lower the rating floor beyond the current 1000, so the number of inaccurately rated players entering the population each year should be relatively small compared to the influx that resulted from lowering the rating floor, and therefore fairly stable. Then we'll see.

As far as the long lag time, what can I say? That's what the data shows; whenever the ratings floor is unchanged the ratings distribution and its mean become more and more skewed to the left (the lower rated region), but it takes time for the effect to show itself. Most players don't play that many rated games in one year so maybe that's the reason for the unexpected large lag.

Apr-28-15
Premium Chessgames Member
  Tiggler: <AylerKupp> Interesting results that you reported above. It will take a little while to digest them ...
Apr-28-15
Premium Chessgames Member
  centralfiles: Inaccurate ratings for new players is understandable, but why would they all be overrated shouldn't they average each other out?

Thanks for all the data, I can't add much.

Apr-29-15
Premium Chessgames Member
  Tiggler: <centralfiles: ...
Thanks for all the data, I can't add much.>

I might be able to add a bit, but no subtraction. As for multiplication and derision, nothing to say.

Apr-29-15
Premium Chessgames Member
  AylerKupp: <<centralfiles> Inaccurate ratings for new players is understandable, but why would they all be overrated shouldn't they average each other out?>

I don't think that I said overrated but, just to make sure, I used the Search Kibitzing feature and the last time that I used "overrated" in a post was on Jun-04-14. :-) But perhaps I implied it (I haven't been too good with words recently, as <Tiggler> can attest). But I do believe that the following scenario is possible:

1. The ratings floor is lowered and "a lot" of players enter the lowest rung of the rating pool.

2. Because initial ratings are not necessarily accurate, some of these players (roughly half, if a sufficiently large number of players is involved, since the expected statistical distribution then becomes more likely) will be overrated and the remainder will be underrated.

3. As these players play each other, the approximate half that is underrated will mostly beat those that are overrated, gain rating points, and stay in the rating population; the other approximate half that is overrated will lose rating points <and fall below the ratings floor>. Therefore, after some time, the distribution at the lowest rung of the rated pool will not be symmetrical between the number of underrated players and the number of overrated players; there will be a majority of initially underrated players that have gained rating points (perhaps too many).

4. Those that were originally highly underrated will gain enough rating points to rise to the next rung of the rated population and encounter properly rated players (think Swiss tournament pairings). A number of those will continue to become stronger and see their ratings increase, but I suspect that the majority will not, and their ratings will decline once they start playing above their heads. And, because of the reduction in the K factor as a player plays more games, the decline in their ratings will take longer (approximately twice as long) as their initial increase, since the K factor drops from 40 to 20 once a player has played 30 games, unless they are under 18 and their rating is less than 2300.

This is what the data shows at the lowest rating levels; an initial increase in the ratings followed by a decline in the ratings. And as the cream rises to the top, the process is repeated at each higher level of rated players, but it happens seemingly slowly; there is a lag of several years before the rise and subsequent fall is apparent. But we may be seeing that effect already even at the highest levels.

Now, I realize that just because this scenario fits the data, it doesn't mean that it is THE scenario. Correlation does not imply causality. I'm sure that many other scenarios can be described that will also fit the data. But I do think that a hypothesis that fits the data has more chance of being correct than a hypothesis that doesn't fit the data!
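The K-factor asymmetry mentioned in step 4 (K drops from 40 to 20 after 30 games, so a rating gained quickly unwinds at roughly half the speed) can be sketched as follows; this is a simplification of the FIDE rule that ignores the under-18/under-2300 exception:

```python
# Sketch of the K-factor asymmetry in step 4. Simplified FIDE-style rule:
# K=40 for a player's first 30 games, K=20 afterward (the under-18 /
# under-2300 exception is deliberately ignored here).

def k_factor(games_played):
    return 40 if games_played < 30 else 20

def elo_delta(r_player, r_opponent, score, games_played):
    """Rating change for a result (score = 1, 0.5, or 0)."""
    exp = 1 / (1 + 10 ** ((r_opponent - r_player) / 400))
    return k_factor(games_played) * (score - exp)

# A new player (10 games played) beats an equal-rated opponent at K=40...
gain = elo_delta(1500, 1500, 1.0, games_played=10)
# ...but once established (50 games, K=20), losing the mirror-image game
# gives back only half as many points, so the unwind takes ~twice as long.
loss = elo_delta(1500, 1500, 0.0, games_played=50)
print(gain, loss)   # 20.0 -10.0
```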

Apr-29-15
Premium Chessgames Member
  AylerKupp: <<Tiggler> As for multiplication and derision, nothing to say.>

I don't know if you meant to say "division" instead of "derision" but, if the former, a great Freudian slip!

Apr-29-15
Premium Chessgames Member
  Tiggler: <AylerKupp> It was not original. I was referring to the Classics:

The Mock Turtle, in Alice in Wonderland, Chapter 9, describes his schooling in <the different branches of Arithmetic- Ambition, Distraction, Uglification and Derision>.

Apr-29-15
Premium Chessgames Member
  Tiggler: <AylerKupp> A brief response to your comment on FIDE Laws, that clarifies my POV, is on my forum. Peace.
May-11-15
Premium Chessgames Member
  cro777: <AylerKupp> Correspondence play and engines. What techniques can be used against the opponents who are simply relaying the best move suggested by engines?

Mark Weeks' series on chess engines in correspondence chess:

http://chessforallages.blogspot.com...

A summary of Mark's ideas with your comments would be useful for our next challenge.

May-12-15
Premium Chessgames Member
  cro777: "The influence of technological tools over the game of chess is controversial. Some think that chess players become robotic, lose all creativity and avoid taking any risk. The inevitable outcome is a lot of uninteresting games ending in a draw.

Others think that technological advances have made a huge amount of information available to chess players. Thus they can solve, within a short time, problems which were hitherto considered too complex. Today’s players have more resources to look for new creative ideas, and those emerge in abundance.

Computer-aided home analyses of top chess players leads to a reassessment of all old axioms, principles and evaluations. Hence one can easily understand why work with computers adds a new creative layer to the game."

https://www.everymanchess.com/downl...

May-12-15
Premium Chessgames Member
  AylerKupp: <cro777> Thanks for the links, I think. I say "I think" because there is a lot of information there and it will take me quite a bit of time to digest it and comment on it! Hopefully I can do so before the next correspondence game so that everyone will have ample time to ignore my thoughts and comments. :-)
May-12-15
Premium Chessgames Member
  kwid: Hello,
I am trying to find out if there is any interest in used chess books which served as my personal references. I still have a great variety of chess magazines as well. The oldest book, in perfect condition, dates back to 1337/1438/1505, by Jacobus de Cessolis, translated into German in 1843/1870 and reprinted in 1956. But most books are recent issues and are written in English. I started collecting chess books in 1950; the early ones are in German, from P. Keres' Theorie der Schacheröffnungen to A. Suetin's Lehrbuch der Schachtheorie, with works by I. Boleslavsky as further examples. But since 1958 all books are in English and cover the period up to 2012.
May-12-15
Premium Chessgames Member
  AylerKupp: <cro777> With regards to "The influence of technological tools over the game of chess is controversial", that is certainly true. And I think that a lot of it has to do with players' and authors' ignorance of how computers work in general and how chess engines work in particular, as well as a failure to understand certain basic mathematical and engineering principles.

Take for example "A Chess Engine is NOT Your Friend!" (http://www.chess.com/article/view/c...) by IM Jeremy Silman. The second and third paragraphs say:

"A zillion people were using a zillion chess engines and they were all raving about how the grandmaster was in trouble."

"I found this funny, since the computer only had White a tiny bit ahead. (Why is the grandmaster in trouble if the number shows only a small plus?)" (The engine apparently evaluated the position at [+0.21])

I'm clearly biased, but I think that whenever an author uses the word "zillion" he is automatically exaggerating to make a point, and his impartiality becomes immediately questionable. And to consider an evaluation of [+0.21] to be significant just shows that the people being referred to don't understand that such an evaluation is in the noise and effectively means that the position is even. They also don't understand the concept of significant digits, and it might be better if engines reported evaluations with a resolution of 0.5. Should computers in general and chess engines in particular be blamed because their users don't grasp the lack of significance of an evaluation like [+0.21]? That's like saying that ¾ of the world's population should be considered illiterate because they might not be able to read and write English.
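As a sketch of the suggestion above that engines report evaluations at a resolution of 0.5 pawn (my own illustration of the idea, not anything an engine actually implements), coarsening collapses noise-level differences like [+0.21] vs. [+0.00] into the same value:

```python
# Round engine evaluations to a coarser resolution (nearest 0.5 pawn by
# default) so that noise-level differences disappear. Illustration only.

def coarse_eval(pawns, resolution=0.5):
    return round(pawns / resolution) * resolution

print(coarse_eval(0.21))   # 0.0  -> effectively even
print(coarse_eval(0.60))   # 0.5  -> a real but small edge
print(coarse_eval(-1.30))  # -1.5 -> clearly worse
```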

Mr. Silman later goes on to say that "CHESS ENGINES ARE OFTEN DETRIMENTAL TO THE CHESS HEALTH OF NON-MASTERS", further saying that "once the engine's alarm bells go off, the innocent reader often views the concept as false, and all that he might have learned gets thrown out the window." Is that the fault of the engine or the fault of the "innocent reader"?

Later on he discusses Fischer vs Reshevsky, 1966 which Fischer won partly (or perhaps largely) because Reshevsky didn't find 39...Rh8!! (found by Houdini). He does not consider the game any less beautiful and artistic in spite of Reshevsky not finding the drawing line, and I agree with him. But then he goes on to say that "When amateurs look at master games, the point is NOT to find errors, but to learn enough to appreciate the game’s beauty, and learn the lessons that eventually will allow you to create that same beauty in your games." I guess that I don't consider those two objectives to be mutually exclusive; one can learn enough to BOTH appreciate a game's beauty and learn the lessons that will eventually allow you to create the same beauty. But how can you do the latter if you don't find that your "beautiful" concept was in error?

His opinion and bias can be summarized in his last sentence: "As for Mr. Weski, please do yourself a favor and turn off the engine. You'll be a better player if you do." Needless to say, I disagree with ALL his recommendations. He just doesn't know how to use chess engines to improve one's chess, and has a built-in bias against them because of his ignorance.

May-12-15
Premium Chessgames Member
   AylerKupp: <kwid> Thanks for thinking of me but, while interested, I probably wouldn't be able to make good use of them. I also have a significant chess book collection (though I suspect that it is nowhere near as good as yours) and I don't have the time or ability to make much use of it. It reminds me of a restaurant I visited recently when, after a big multi-course dinner, the server asked me if I was interested in dessert. My answer: "Interested, yes; capable, no."
May-12-15
Premium Chessgames Member
  cro777: Argumentum ad ignorantiam.

<I think that a lot of it has to do with players' and authors' ignorance of how computers work in general and how chess engines work in particular, as well as a failure to understand certain basic mathematical and engineering principles.>

The other component is the knowledge of the characteristics and processes of creativity in general, and of creativity in chess in particular.

The widening of one's horizon is a necessary step to "understand why work with computers adds a new creative layer to the game."

May-13-15
Premium Chessgames Member
  AylerKupp: <<cro777> The widening of one's horizon is a necessary step to "understand why work with computers adds a new creative layer to the game.">

I agree, but unfortunately, as far as this step is concerned, we are all Luddites to a certain degree. Many apparently see computers as dehumanizing and tend to downplay any accomplishments made with their help, as though getting inferior results without computer assistance is somehow more "noble". I suppose that whenever a radical new technology that impacts our lives is introduced there is a time when that attitude prevails in many people, particularly those that do not fully understand it or, as Heinlein would say, "grok" it.

If these people would only realize that using computers in different and more effective ways is the same as exhibiting creativity in that particular domain and does not demean the final result, we will have moved forward a step or two.

May-13-15
Premium Chessgames Member
  morfishine: <AylerKupp> I wanted to clear something up. I always enjoy non-engine 'Brain Games'. I think those are the most rewarding. As for engine-assisted White vs Black games, those are fine too since these are fairly balanced.

The point I was trying to make is that WT games vs ANY individual do not interest me anymore since there is nothing to prove. IMO, there just isn't any challenge when hundreds of engines are lined up against any single GM.

So, I look forward to the next Brain Game and hope we end up on the same team!

*****

May-14-15
Premium Chessgames Member
  AylerKupp: <morfishine> I like both engine and non-engine games since they are very different in many ways and yet similar in so many others. They exercise different brain cells, at least for me.

One aspect of these team games that is not mentioned as much as I think it should be is the interaction with the various players and the effort necessary to support your move preferences, particularly if you feel strongly about a particular move. The latter is both funny and sad: to see the emotional state that some players work themselves into in support of or antagonism against a particular move. But, after all, it's only a game, and I don't think that anyone's livelihood depends on whether their team wins, so the emphasis should be on having fun and enjoying yourself. Unfortunately some players seem to get carried away and spoil the game to some extent for many of us.

I agree with you that there isn't much to prove anymore when playing against a single GM. In the original Chessgames Challenge the question was: "Can a group of chess amateurs team up to beat a grandmaster?" The answer is clearly YES, at least with engine assistance. Any grandmaster, regardless of talent and experience, with a busy schedule and limited computer resources, is going to have a hard time against a group of amateurs with lots of time and dedication, supported by an array of computers and the latest chess engines. So I've occasionally mentioned that the question should be reversed into something like "Can a grandmaster beat a group of dedicated amateurs armed with the latest chess technology?" Still, it would be great fun to play against, say, Carlsen or Nakamura, regardless of computer assistance, don't you think?

And, yes, I'm also looking forward to the next Battle of the Brains game. As far as being on the same team, be careful what you ask for. You might need to read all my verbose posts just in the rare case that I say something useful. :-)

Jun-07-15
Premium Chessgames Member
  AylerKupp: <Investigation of search tree pruning improvements in the last several years> (part 1 of 2)

In Kasparov vs Deep Blue, 1997 (kibitz #123), in response to a question by <Alan Vera> as to which computer system would win in a match, Komodo 9 or Deep Blue, I stated my opinion that Komodo 9 would win easily because of its ability to prune its search tree much more effectively, thus compensating for its relatively much slower position evaluation speed given Deep Blue's special hardware assistance. I therefore ran an analysis of this critical position from The World vs Naiditsch, 2014 after 13...d5, when the World Team needed to decide between 14.e5 and 14.exd5:


[diagram: position after 13...d5]

The World Team narrowly chose 14.e5 (135 votes to 127 for 14.exd5), and the character of the game would have been very different had 14.exd5 been chosen.

At any rate, I decided to see how quickly earlier versions of Komodo, Houdini, and Stockfish reached the various search depths compared to the latest versions on the same computer, and thus how much search tree pruning efficiency had improved in the last few years. Of course, pruning wouldn't be the entire story, since the evaluation function and the overall code might have been improved as well.

The earliest versions of these engines I have are Komodo 1.3 (Jan-2011), Houdini 1.03 (Oct-2010), and Stockfish 1.91 (Nov-2010). The latest versions I have (which I believe are close to the latest available) are Komodo 9.01 (May-2015), Houdini 4 (Jun-2014), and Stockfish 6 (Jan-2015). Komodo was a little more complicated since up to version 5 it was a single-processor (SP) engine; only with version 5.1 did it become a multi-processor (MP) engine, and I didn't think comparing time-to-depth between SP and MP engines would prove much. So I ran and compared analyses between Komodo 1.3 and Komodo 5, and between Komodo 5.1 and Komodo 9.01.

And, since I had some time on my hands, I also ran analyses of intermediate versions of Komodo and Stockfish: Komodo 3 (SP, Aug-2011), Komodo 7 (MP, Jun-2014), and Stockfish 3 (May-2013). I chose these intermediate versions so that their release dates would be roughly halfway between the earliest and latest engine versions, just to see the expected improvement. Unfortunately, I was too cheap to buy any intermediate Houdini versions (Houdini went commercial starting with version 2.0), but at least Houdini 1.03 was also MP.
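The time-to-depth measurement described above can be sketched in a few lines. This is a minimal illustration of the bookkeeping, not the actual test harness I used: it assumes you already have a stream of (depth, elapsed-seconds) pairs, such as you would get by timestamping a UCI engine's "info depth N ..." output (e.g. via the python-chess library), and it records the first time each depth is reached.

```python
# Minimal sketch of time-to-depth bookkeeping (assumed setup, not the
# actual harness used for these tests).  A UCI engine emits
# "info depth N ..." lines while it searches; pairing each line with an
# elapsed-time stamp gives a stream of (depth, seconds) tuples.  Keeping
# only the first occurrence of each depth yields the time-to-depth.

def time_to_depth(info_stream):
    """Map each search depth to the elapsed time at which it was first reached.

    info_stream: iterable of (depth, elapsed_seconds) tuples, in the order
    the engine reported them.
    """
    first_seen = {}
    for depth, elapsed in info_stream:
        if depth not in first_seen:  # later reports at the same depth are ignored
            first_seen[depth] = elapsed
    return first_seen

# Example with made-up numbers: depth 20 first reached after 3.1 s.
reports = [(19, 1.2), (20, 3.1), (20, 3.4), (21, 7.8)]
print(time_to_depth(reports))  # {19: 1.2, 20: 3.1, 21: 7.8}
```

Running the same position through each engine version and comparing these dictionaries depth-by-depth is all the comparison below amounts to.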

Jun-07-15
Premium Chessgames Member
  AylerKupp: <Investigation of search tree pruning improvements in the last several years> (part 2 of 2)

Summary of Results:

    Engine              Nodes/Sec   Max Depth
    Komodo 1.03 (SP)      750,800       26
    Komodo 3 (SP)         449,064       26
    Komodo 5 (SP)         428,054       26
    Komodo 5.1 (MP)     1,638,104       32
    Komodo 7 (MP)       1,917,337       27
    Komodo 9.01 (MP)    1,768,387       24
    Houdini 1.03        4,229,000       30
    Houdini 4           4,885,000       33
    Stockfish 1.91      3,202,904       31
    Stockfish 3         2,816,137       36
    Stockfish 6         2,928,442       38

Observations:

1. As expected, the more recent the version of Houdini, Stockfish, and Komodo MP, the more quickly it reached a given search depth. But this did not apply to Komodo SP: the latest SP version (Komodo 5) took longer to reach a given depth than either of the two earlier SP versions, Komodo 1.03 and Komodo 3.

2. Komodo 9.01 seems to be a substantial improvement over Komodo 7 in terms of reaching deeper depths more quickly. So Komodo's claim seems accurate, although its improvement over Komodo 8 (which I didn't run) is probably smaller than its improvement over Komodo 7. Still, no challenge to Stockfish.

3. The fastest Nodes/sec rate does not necessarily correspond to the latest engine version. The reduction in Nodes/sec might indicate the additional time spent per node in pruning the search tree (time well spent).

4. As expected, the Nodes/sec figure improved dramatically between the Komodo SP and MP versions, almost 4X between the latest SP version and the first MP version. However, the changes in Nodes/sec across each engine's versions were not strictly monotonic.

5. The evaluation trends for all the engines seem consistent except for Stockfish 3 which had substantially different evaluations from d=25 to d=33 for its top line.

6. All engines evaluated the two moves, 14.exd5 and 14.e5, quite closely, with not much to choose between them. Later versions of Komodo seemed to evaluate 14.e5 somewhat higher than 14.exd5, but not enough to be significant.
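As a sanity check on the "almost 4X" figure in observation 4 and the non-monotonic speeds in observation 3, the ratios fall straight out of the Nodes/sec column (figures copied from the summary table above; this is just arithmetic on the reported numbers, not new data):

```python
# Speedup ratios computed from the Nodes/sec column of the summary table.
nps = {
    "Komodo 5 (SP)":   428_054,
    "Komodo 5.1 (MP)": 1_638_104,
    "Stockfish 1.91":  3_202_904,
    "Stockfish 6":     2_928_442,
}

# Latest SP version vs. first MP version of Komodo.
sp_to_mp = nps["Komodo 5.1 (MP)"] / nps["Komodo 5 (SP)"]
print(f"Komodo SP -> MP: {sp_to_mp:.2f}x")       # 3.83x, i.e. "almost 4X"

# Stockfish's raw speed actually dropped between 1.91 and 6.
sf_change = nps["Stockfish 6"] / nps["Stockfish 1.91"]
print(f"Stockfish 1.91 -> 6: {sf_change:.2f}x")  # below 1.0x
```

That Stockfish 6 searches fewer nodes per second than Stockfish 1.91 yet reaches greater depths (38 vs. 31) is exactly the pruning-efficiency point: the extra time spent per node is buying a much smaller tree.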

Those interested in seeing the ply-by-ply changes and charts of the time-to-depth and evaluation change trends can download an Excel spreadsheet from https://www.mediafire.com/?9v79cm9r....

Jun-07-15
Premium Chessgames Member
  morfishine: <AylerKupp> Thanks for the reply. On some of your comments:

<One aspect of these team games that is not mentioned as much as I think it should be are the interactions with the various players and the efforts necessary to support your move preferences, particularly if you feel strongly about a particular move. The latter is both funny and sad; to see the emotional state that some players work themselves into in support for or antagonism against a particular move.>

I'm not sure where you are going with this. I really don't know if interactions with other members need to be mentioned. However, the social aspect is powerful. I've made many friends here at <CG> through such interactions. One learns who is and who isn't a friend pretty quickly. Also, one must proceed with caution since there's a separation through digital communication: for example, what may sound sarcastic was actually not intended to be, or vice versa.

<I agree with you that there isn't much to prove anymore when playing against a single GM...Still, it would be great fun to play against, say, Carlsen or Nakamura, regardless of computer assistance don't you think?>

Well, that depends on your definition of 'fun'. I don't think any single GM has any chance at all of winning...zero. So regardless of whether it's Carlsen or Nakamura, where is the challenge? IMO, if there's no challenge, there's no fun. And the other thing that bugs me is that no GM has expressed the slightest interest in any meaningful post-mortem. It's invariably "Thanks for the game" and that's it. There's no discussion with the GM, which I think most people desire.

<And, yes, I'm also looking forward to the next Battle of the Brains game>

Me too. These are not identical to regular OTB conditions of any time control. We have access to opening and ending books and can set up the pieces or use a PGN viewer to work out lines. Still, the lines are purely our own creation without any engine input.

*****

Jun-08-15
Premium Chessgames Member
  AylerKupp: <morfishine> I wasn't consciously going anywhere with my statement about needing to support your move preferences if you feel strongly about a particular move. Just mentioning what I think is a fact, some players feel strongly about particular moves and, if they do, they sometimes go to what I think are extremes to either campaign in favor of a particular move or campaign against other moves. That's just human nature, I guess. I think that <RandomVisitor>'s approach is best; provide the data and let others do with it what they wish.

As far as my definition of "fun", I think that having bragging rights is "fun". If we were able to beat Carlsen or Kasparov in a team game with engine assistance, then being able to brag about it would be "fun", even if the outcome might have been predictable. But I don't think that the outcome of playing against a <motivated> current or former OTB world chess champion, with the <resources> to have a staff performing engine analyses, and one who is willing to devote <adequate time> to the game is necessarily predictable. As a team we would probably still have the advantage of numbers (unless our opponent was <really> serious about defeating us and was willing and able to hire a staff of 100 players to perform engine analyses), and our opponent would have the advantage of superior chess knowledge, experience, and judgment.

I share your disappointment about not having our GM opponents participate in our post-mortems. I think that Arno Nickel both in his rematch game and to a lesser extent in the first game was the only one who participated extensively in post mortem discussions. Unfortunately, that was before I began to participate in these games.

But in a way I can't blame them for not participating. They are typically very busy people and if they lost the game that was probably one of the reasons, their inability to devote a sufficient amount of time and effort to the game. Besides, it's probably not much fun losing to a bunch of amateurs, whether supported by immense computer power or not, and then having to relive the experience.

Hopefully I'll see you during the next Battle of the Brains.

Aug-18-15
Premium Chessgames Member
  juan31: <AylerKupp> You are invited to the <♔ Game Prediction Contest for Sinquefield Cup 2015 ♔> in Golden Executive's forum. Line-up:
Carlsen, Caruana, Nakamura, Anand, Topalov, So, Grischuk, Giri, Aronian, Vachier-Lagrave. <We hope you can attend>
Sep-02-15
Premium Chessgames Member
  Golden Executive: Hi <AylerKupp>. I am so glad you visited my forum to participate in the prediction contest. I feel honored. Thanks.

I want to share with you my 'secret weapon' to make the predictions when I have no idea what to do. I bought it about 30 years ago in Radio Shack: https://www.youtube.com/watch?v=YVX...
