Member since Dec-31-08 · Last seen Aug-29-16
About Me (in case you care):

Old timer from the Fischer, Reshevsky, Spassky, Petrosian, etc. era. Active while in high school and early college, but not much since. Never rated above the low 1800s and highly erratic; I would occasionally beat much higher rated players and equally often lose to much lower rated players. Highly entertaining combinatorial style; everybody liked to play me since they were never sure what I was going to do (neither did I!). When facing a stronger player, many try to even the chances by steering toward simple positions where they can see what is going on. My philosophy in those situations was to try to even the chances by complicating the game to the extent that neither I nor the stronger player would be able to see what was going on! Alas, this approach no longer works in the computer age. And, needless to say, my favorite all-time player is Tal.

I also have a computer background and have been following with interest the developments in computer chess since the days when computers couldn't always recognize illegal moves and a patzer like me could beat them with ease. Now it’s me that can’t always recognize illegal moves, and any chess program can beat me with ease.

But after about 4 years (a lifetime in computer-related activities) of playing computer-assisted chess, I think I have learned a thing or two about the subject. I have conceitedly defined "AylerKupp's corollary to Murphy's Law" (AKC2ML) as follows:

"If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N."

I’m also a food and wine enthusiast. Some of my favorites are German wines (along with French, Italian, US, New Zealand, Australia, Argentina, Spain, ... well, you probably get the idea). One of my early favorites was the wine from the Ayler Kupp vineyard in the Saar region, hence my user name. Here is a link to a picture of the village of Ayl with a portion of the Kupp vineyard on the left:

You can send me an e-mail whenever you'd like to aylerkupp(at)

And check out a picture of me with my "partner", Rybka (Aylerkupp / Rybka) from the Masters - Machines Invitational (2011). No, I won't tell you which one is me.


Analysis Tree Spreadsheet (ATSS).

The ATSS is a spreadsheet developed to track the analyses posted by team members in various on-line games (XXXX vs. The World, Team White vs. Team Black, etc.). It is a poor man's database which provides some tools to help organize and find analyses.

I'm in the process of developing a series of tutorials on how to use it and related information. The tutorials are spread all over this forum, so here's a list of the tutorials developed to date and links to them:

Overview: AylerKupp chessforum (kibitz #843)

Minimax algorithm: AylerKupp chessforum (kibitz #861)

Principal Variation: AylerKupp chessforum (kibitz #862)

Finding desired moves: AylerKupp chessforum (kibitz #863)

Average Move Evaluation Calculator (AMEC): AylerKupp chessforum (kibitz #876)


ATSS Analysis Viewer

I added a capability to the Analysis Tree Spreadsheet (ATSS) to display each analysis in PGN-viewer style. You can read a brief summary of its capabilities here AylerKupp chessforum (kibitz #1044) and download a beta version for evaluation.


Chess Engine Evaluation Project

The Chess Engine Evaluation Project was an attempt to evaluate different engines’ performance in solving the “insane” Sunday puzzles with the following goals:

(1) Determining whether various engines were capable of solving the Sunday puzzles within a reasonable amount of time, how long it took them to do so, and what search depth they required.

(2) Classifying the puzzles as Easy, Medium, or Hard based on how many engines successfully solved them, and determining whether any particular engine excelled at the Hard problems.

(3) Classifying the puzzle positions as Open, Semi-Open, or Closed and determining whether any engine excelled at one type of position where other engines did not.

(4) Classifying the puzzle positions as characteristic of the opening, middle game, or end game and determining which engines excelled at one phase of the game vs. another.

(5) Comparing the evals of the various engines to see whether one engine tends to generate higher or lower evals than other engines for the same position. If anybody is interested in participating in a restarted project, please post here in my forum.

Unfortunately I had to stop work on the project. It simply took more time than I had available to run analyses on the many test positions for each of the engines. And it seems that each time I had reasonably categorized an engine, a new version was released, making the results obtained with the previous version obsolete. Oh well.


Ratings Inflation

I have recently become interested in the increase in top player ratings since the mid-1980s and whether this represents a true increase in player strength (and if so, why) or if it is simply a consequence of a larger chess population from which ratings are derived. So I've opened up my forum for discussions on this subject.

I have updated the list that I initially completed in Mar-2013 with the FIDE rating lists through 2014 (published in Jan-2015), and you can download the complete data. It is quite large (135 MB), and to open it you will need Excel 2007 or later, or a compatible spreadsheet program, since several of the later tabs contain more than 65,536 rows.

The spreadsheet also contains several charts and summary information. If you are only interested in those and not the actual rating lists, you can download a much smaller (813 KB) spreadsheet containing just the charts and summary information. You can open that file with a pre-Excel 2007 version or a compatible spreadsheet program.

FWIW, after looking at the data I think that ratings inflation, which I define to be the unwarranted increase in ratings not necessarily accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis and it goes something like this:

1. Initially (late 1960s and 1970s) the ratings for the strongest players were fairly constant.

2. In the 1980s the number of rated players began to increase exponentially, and they entered the FIDE-rated chess playing population mostly at the lower rating levels. The ratings of the stronger of these players increased as a result of playing weaker players, but their ratings were not sufficiently high to play in tournaments, other than open tournaments, where they would meet middle and high rated players.

3. Eventually they did. The ratings of the middle rated players then increased as a result of beating the lower rated players, and the ratings of the lower rated players then leveled out and even started to decline. You can see this effect in the 'Inflation Charts' tab, "Rating Inflation: Nth Player" chart, for the 1500th to 5000th rated player.

4. Once the middle rated players increased their ratings sufficiently, they began to meet the strongest players. And the cycle repeated itself. The ratings of the middle players began to level out and might now be ready to start a decrease. You can see this effect in the same chart for the 100th to 1000th rated player.

5. The ratings of the strongest players, long stable, began to increase as a result of beating the middle rated players. And, because they are at the top of the food chain, their ratings, at least so far, continue to climb. I think that they will eventually level out but if this hypothesis is true there is no force to drive them down so they will stay relatively constant like the pre-1986 10th rated player and the pre-1981 50th rated player. When this leveling out will take place, if it does, and at what level, I have no idea. But a look at the 2013 ratings data indicates that, indeed, it may have already started.

You can see in the chart that the rating increase, leveling off, and decline first starts with the lowest ranking players, then through the middle ranking players, and finally affects the top ranked players. It's not precise, it's not 100% consistent, but it certainly seems evident. And the process takes decades so it's not easy to see unless you look at all the years and many ranked levels.

Of course, this is just a hypothesis and the chart may look very different 20 years from now. But, at least on the surface, it doesn't sound unreasonable to me.

But looking at the data through 2015 it is even more evident that the era of ratings inflation appears to be over. The previous year's trends have either continued or accelerated; the rating for every ranking category, except for possibly the 10th ranked player (a possible trend is unclear), has either flattened out or has started to decline as evidenced by the trendlines.

Any comments, suggestions, criticisms, etc. are both welcomed and encouraged.

------------------- Full Member

   AylerKupp has kibitzed 9387 times to chessgames   [more...]
   Aug-29-16 Team White vs Team Black, 2015
AylerKupp: <victim2> I would say that Team White played an excellent game, but maybe I'm biased. ;-)
   Aug-29-16 Chess Olympiad (2016) (replies)
AylerKupp: <perfidious> Are you suggesting that Carlsen is ranked #1 just because he wins a lot of games?
   Aug-27-16 AylerKupp chessforum (replies)
AylerKupp: <zanzibar> Thanks for responding. To answer your questions first, I did write a PGN parser (actually two) using Visual Basic for Applications (VBA). The first one was for my Analysis Tree Spreadsheet (ATSS, see my header above) and converts a *.pgn file into a *.csv file ...
   Aug-26-16 Fischer vs Petrosian, 1966 (replies)
AylerKupp: They were probably pretty close in playing strength. Although FIDE did not start to formally publish its rating list until 1970, Dr. Elo created several unofficial rating lists from 1967 to 1998. In his first list from June 1967 ( ), he ...
   Aug-22-16 Sinquefield Cup (2016) (replies)
AylerKupp: <<Sally Simpson> j'adoube> Great, just great. Too bad that non-chessplayers would not get it.
   Aug-19-16 Rubinstein vs Gruenfeld, 1929 (replies)
AylerKupp: Regardless of whether Gruenfeld had theoretical chances for a draw at the later stages of the game, from a practical perspective, entering into an endgame against Rubinstein with an inferior pawn structure was ill advised. Even with BOC.
   Aug-18-16 Stockfish (Computer) (replies)
AylerKupp: <<zanzibar> the question remains why are the evals so radically different for the same position, as shown in the blog?> I don't really know but I don't think that it's a bug in Stockfish 7. I thought that perhaps d=22 was too low since Stockfish usually needs to be run to ...
   Aug-16-16 Robert James Fischer (replies)
AylerKupp: <<Sally Simpson> I vote Alerkupp take up the task, he seems to have a lot of time on his hands. I think he is in prison.> Thanks for the vote of confidence. To update my status, yes, I am currently in prison with a lot of time on my hands. I was released from a mental ...
   Aug-16-16 Computer (replies)
AylerKupp: <Who was the greatest chess player of all time?> (part 2 of 2) 2. The authors used only one engine at any one time. I again know from personal experience that different engines produce different move evaluations and rankings, so who’s to say that one engine's top ranked move
   Aug-14-16 Topalov vs Aronian, 2016 (replies)
AylerKupp: After 52.Rxc3 White mates in 43 moves according to the Lomonosov 7-piece tablebases. And as late as 56...Ra7 White mates in 40 moves. The move that threw away the win is 57.Rd4+. If instead 57.Kb4 White continues to be able to mate in 39 moves. Of course, there is no reason to ...
(replies) indicates a reply to the comment.

De Gustibus Non Disputandum Est

Kibitzer's Corner
Premium Chessgames Member
  NeverAgain: Hey, AK!

Seeing how often you refer to your setup as "antiquated" and you being one of the handful of regulars whose posts are always worth reading, I thought I'd share the results of the upgrade I undertook last week.

My main box is even older than your laptop, as you no doubt saw in my posts in the Stockfish thread. Even after overclocking that Core 2 Duo to 3.6 GHz I don't get more than about 2,000 kN/s with the official SF6 32-bit release.

Last week, after several hours of research, I decided to break down and spring for a dual Xeon X5670, which I reckoned to be the best deal for chess analysis. In your earlier post you mentioned reaching d=29 in under 3 minutes on your setup. Well, how 'bout 9 *seconds* with the Sep 18 x64 POPCNT (aka "modern"/"sse4") build?

Lenovo ThinkStation D20 Workstation:

benchmark in Arena (The World vs Naiditsch, 2014 position above):

The moves/lines have been edited out for better readability. In short, you reach d=35 (our mutually agreed minimum trustworthiness level) in 3:05 and d=40 in 26:28, with the speed maxing out at just over 13.3 mN/s.

That's quite a complex position to analyze from scratch; you can expect faster speeds when you go through the whole game move by move with automatic analysis. That's why I advocate this particular workstation model: its generous 48 GB memory bank. A 32 GB hash really helps for up to 9,000 mN (about d=36 here); then things slow down somewhat. Also note that I leave one core out of 12 for the system, so you can do other stuff on it at the same time and keep the CPU temps down a bit.

With 5-pcs Syzygy TBs you can reach pretty ridiculous depths in endgames, I've seen d=60+ reached literally within a couple of minutes, with kN/s going over 20,000.

All in all, it's a great deal, IMO.
For comparison to other top systems benchmarking an August build of SF6, see Sedat's site:

As you can see, my box is in the 6th place, above a lineup of more recent i7s, each costing a couple of grand (and most of which are overclocked to boot, which you can't do with this system). Which is pretty damn amazing for an under-$800 box, shipped.

Of the top five entries, four are from the much more recent Xeon E5 crop. Those babies retail for $4000-6000 - a *chip*. I haven't had a good look at the complete workstation prices, they were too scary. ;) So if 1/2 the speed for 1/20th the price is your idea of a good deal, the D20 should be right up your alley.

A note about Sedat's benchmarks: the position and the engine setup he uses allows for some pretty inflated numbers, don't expect to see them in your everyday deep analyses. However, as long as everyone uses the same benchmark, I see no big problem: the numbers should reflect the ratios between various systems.

If that particular model sells out before you get there, or if you opt for a different config, here's a handy Amazon search link for "Dual Xeon X5670":

Note that most of these come with an OS (typically Win7 Pro x64), but some may come without, so read the small print in the descriptions carefully. And of course, they are all refurbished (the MSRP on this particular box was close to 10 grand) but as the seller has a nearly 100% feedback rating, I think it's a safe choice.

Please post *your* benches if you decide to go for it. ;)

Premium Chessgames Member
AylerKupp: <NeverAgain> Thanks for the information and links. I have been thinking about upgrading my system for the last 2 years but, just like my analog Sony TV that I was happy with for so many years, my current "oldish" system has been running reasonably well for almost everything I use it for, and it just grates on me to replace something that is still working reasonably well. But I finally gave up on my TV and replaced it, and the other analog TVs I had in the house, with recent LCD TVs.

In my experience in order to get a noticeable increase in performance you need to double your system's speed. In my case that would be a minimum of 8 processors. But that's not that much anymore, so I have set my sights on what I refer to as my "sweet 16" system; 16 cores, 16 GB RAM, 16 TB disk. But 16 GB of RAM is not that much anymore, and 16 TB disk is probably overkill and unnecessary unless I wanted to install 7-piece Lomonosov tablebases, and then it wouldn't be enough.

Christmas is coming up and so is my birthday (January). So maybe those will be the excuse and push I need to do it. And you can count on my posting benchmarks; that will probably be one of the first things I do when I get a new system to see how much faster the new system is compared to my old one.

One question, though. Why are you only running 5-piece Syzygy tablebases when 6-piece are relatively compact (about 160 GB)? You can get SSDs that will hold them easily and provide you fast access. But even that is probably not necessary.

Premium Chessgames Member
  NeverAgain: SSDs are a must for the 6-men Syzygy, as these tables feature a dozen files larger than 1 GB and two over 2 GB. Reading those repeatedly from a regular HDD will kill the NPS *and* the drive quickly, heh.

At the time of your post I already had the 6-men Syzygy downloaded, two torrents a little under 150 GB combined - thanks to the 64 Mbit cable, I shudder to think how long it would have taken to leech them with my old DSL, never mind dial-up! - but had trouble making them work on my main box, the 32-bit XP Core 2 Duo one. Both Komodo and SF kept throwing up an exception and dying as soon as TB were accessed. Seems like a 64-bit OS is another requirement, as they work fine on the Win 7 Pro x64 with the new 240 GB Samsung SSD (from Amazon as well).

You are right - it was all worth the hassle in the end, though. With Komodo 9.2 I usually get around 14-15 mN/s. With the 3+4+5 Syzygy it went up to 20+ in endgames. With the 6-men TB I have seen over 30!

BTW, Komodo has been going the SF way pruning-wise. Remember those earlier versions, v5 and v6 (now both freeware) - they were like Rybka and Houdini, starting in low-to-mid teens and slowly working their way over the 20-ply hump. Then v9.02 started reaching mid-20s pretty quickly, and now v9.2 will typically get you to d=30 in seconds!

I agree with you on HDD space, those terabytes are completely redundant for chess. What engines support the Lomonosov TB anyway? And is there even consumer-grade hardware capable of handling them? I think for now it's strictly super-comp stuff for scientists and enthusiasts with more disposable income than common sense (wish I were one of the latter ;).

As for memory, I urge you not to consider anything below 24 GB. This way you'll be able to have a 16 GB hash table, at least. Remember that its size is limited to powers of two (for the mainstream engines, at least). Additionally, Komodo will use some 10% less than the hash it is allocated. So if you go for a 16 GB box now, your max hash will be 8 GB. That's good for about 4,000 mN, give or take a couple hundred - not an exceedingly impressive number for someone who habitually does overnight analyses.
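FWIW, the rule-of-thumb arithmetic above is easy to script. The sketch below just takes <NeverAgain>'s two claims at face value (power-of-two hash sizes and Komodo using ~10% less than its allocation - his observations, not engine documentation), and `os_reserve_mb` is my own assumed parameter for what the OS and GUI need:

```python
def max_hash_mb(ram_mb, os_reserve_mb=2048, komodo_factor=0.9):
    """Largest power-of-two hash (in MB) that fits after reserving some
    RAM for the OS and GUI.  komodo_factor models the claim above that
    Komodo effectively uses ~10% less than the hash it is allocated."""
    budget = ram_mb - os_reserve_mb
    hash_mb = 1
    while hash_mb * 2 <= budget:
        hash_mb *= 2
    return hash_mb, int(hash_mb * komodo_factor)

# A 24 GB box leaves room for a 16 GB (16384 MB) hash;
# a 16 GB box tops out at an 8 GB hash, as noted above.
```

Under these assumptions the numbers match the post: 24 GB of RAM buys you a 16 GB hash, while a 16 GB box is stuck at 8 GB.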

I did some hash tests with SF, and it seems to perform progressively slower (500-1,000 kN/s) as you go over 1 gig. That's on individual positions. If you do the whole game at once - either automatically, or by sliding back or forth a ply at a time - the benefits of a large hash really come to the fore. In some middle-game positions it can take over 5 minutes to get to d=35, about 10 min to d=38. Then you slide a ply and - boom - you get d=33 almost instantly!

Go for at least 24, I say, and if you can find (and afford) 48 or 96 you won't regret it in the long run.

Premium Chessgames Member
  NeverAgain: <With the 3+4+5 Syzygy it went up to 20+ in endgames. With the 6-men TB I have seen over 30!>

Heh. The question is, if you put the Lomonosov TB on a today's Cray equivalent, will it be *over 9,000!* ?

Premium Chessgames Member
  AylerKupp: <NeverAgain> I had found out some time ago that 6-piece Syzygy tablebases don't work on 32-bit machines. I suspect that there are some 64-bit data structures that don't map well or were improperly mapped to 32-bit machines and that were not properly tested. After all, who bothers with 32-bit machines these days? But, as you have also properly found out, if you restrict the tablebases to contain only 5-pieces then they work just fine. So I run with 5-piece Syzygy databases on my 32-bit machines and with 6-piece databases in my slow 64-bit laptop, and neither has an SSD.

Yes, I know about Komodo entering the search tree pruning wars with Stockfish. This started with Komodo 9.01 and its Selectivity (minimum 10; maximum 200; default 80 for Komodo 9.01 and 9.02, 68 for Komodo 9.1, and 74 for Komodo 9.2; I guess they must still be calibrating it for best overall performance) and Reduction (minimum -1500, maximum 150, default 0) parameters. I have been curious for a while but haven't had the time (and keep forgetting) to try a race between Stockfish and Komodo with its Selectivity and Reduction parameters set at the maximum and see what happens. At least Komodo gives you the option to change them.

I don't think that there is any engine yet that supports the 7-piece Lomonosov tablebases, nor do I know whether the tablebase probing code has been released (a quick search did not turn up anything), because their size, approximately 140 TB, is pretty much out of reach for mere mortals like us. So, without a ready market capable of hosting the tablebases, there wouldn't likely be any motivation for the engine developers to support them, particularly if, in the absence of freely distributed tablebase probing code, they would have to invest their own resources to develop it.

And, yes, I will definitely get as much memory as I can afford when the time comes. I remember quite a few years ago when a friend told me that he had a spare memory board (yes, they were boards in those days) and asked whether I needed more memory in my computer. I of course told him that one always needs more memory than one has.

Premium Chessgames Member
  juan31: ♔ ANNOUNCEMENT ♔
<Game Prediction Contest> for the <Bilbao Masters 2015> tournament. In <Golden Executive Forum>

Round 1 will be played next Monday, October 26, 16:00 CET (UTC+1:00). Pairings will probably be posted sometime Sunday; at that point, the game prediction contest will be open.


Viswanathan Anand, Anish Giri, Liren Ding, Wesley So.

Rounds: 6 (double round robin).
Sofia and Bilbao rules apply.


Premium Chessgames Member
  juan31: ♔ ANNOUNCEMENT ♔ <Game Prediction Contest> for the <Bilbao Masters 2015> tournament. In <Golden Executive Forum>. < IS OPEN > < TODAY> Line-up: Viswanathan Anand, Anish Giri, Liren Ding, Wesley So.

Rounds: 6 (double round robin).
Sofia and Bilbao rules apply.


Premium Chessgames Member
  NeverAgain: AK: time to gather the stones ;)

Y Gusev vs E Auerbach, 1946

Premium Chessgames Member
  juan31: To < AylerKupp>

In < Golden Executive forum > <Game Prediction Contest> for the <London Chess Classic 2015> tournament. Round 1 will be played next Friday December 4 16:00 GMT(UTC +0:00).

Carlsen, Caruana, Nakamura, Anand, Topalov, Grischuk, Giri, Aronian, Vachier-Lagrave, Adams.

Rounds: 9

The full pairings are released; the game prediction contest is open right now.

Just post your predictions before each round begins.

Premium Chessgames Member
  Golden Executive: Merry Christmas and a Happy New Year 2016 to you and yours <AylerKupp>!
Premium Chessgames Member
  WinKing: Merry Christmas & Happy Holidays to you <AylerKupp>!
Premium Chessgames Member
  wordfunph: <AylerKupp> Merry Christmas!
Jan-02-16  Dionyseus: <AylerKupp> My computer analysis shows that Fischer could have drawn the first game in his 1972 match against Spassky with 31...Ke7. I posted my analysis at Spassky vs Fischer, 1972
Apr-12-16  thegoodanarchist: Anyone know the last time a human beat one of the top programs in a game?

I was looking at Karjakin vs Deep Junior, 2004 when the question popped into my head.

Apr-12-16  Karposian: <tga> I think it may have been this game:

Ponomariov vs Fritz, 2005

Ponomariov had a bit of luck, though. Fritz made a strange blunder: 39..Bc2?

Apr-12-16  Karposian: Hi, <AylerKupp>. I was just wondering if you have read this Chessbase article that came out a couple of days ago?

It's about the ELO Rating System, and the tendency for higher rated players to underperform relative to the theoretical ELO probability, and for lower-rated players to overperform:

I thought you perhaps would find it interesting, considering your work with your game prediction spreadsheet.

Premium Chessgames Member
AylerKupp: <thegoodanarchist> I'm assuming that you mean a "normal" game and not an odds game or a game with another engine assisting a top player, like the 2014 Nakamura – Stockfish match or this year's Nakamura – Komodo match. Here is a link to a list of recent Komodo vs. human handicap matches. And this might be the latest victory by a human: GM Petr Neuman vs. Komodo 9, 2015, at 2- and 1-pawn odds. Neuman won the match +3, =2, -1.

But, to try to answer your question, no, I don't know the last time that a human beat a computer in a non-odds match. I suspect that <Karposian> might be right and Ponomariov vs Fritz, 2005 might have been the last time. At least according to this article:

Quite a difference from today's GOTD: Bronstein vs M20, 1963 !

Premium Chessgames Member
  AylerKupp: <Karposian> No, I had not read that Chessbase article. Thanks for the link!

I had read Sonas' earlier article, which described a similar effect, although not as pronounced. And also another article which shows a similar effect but for a different reason, indicates the consequences, and suggests that this might be the reason for ratings inflation at the top level.

What makes Viswanath's article different is that he attributes the discrepancy between theoretical and actual performance to young players. But I don't know if the results he sees are literally due to age, or if by "young" he could mean new to the game, not necessarily young in chronological age. If the latter is the case then it could simply mean that the formulas for estimating a young/new player's rating need to be adjusted by, for example, having more K-factors than the 3 currently in use and using extrapolation to try to compensate for the lag between the young/new player's calculated rating and his effective rating due to his relatively fast rate of improvement. And his sample size is fairly small, 100 games, so I'm not sure if his results are statistically significant.

There are several errors in his article which make me wonder a little bit. For example, early on he says that "The original ELO formula proposes that a player who is 100 points higher rated should win a game with a 64% probability, and a 200 point difference gives approximately a 75% chance of winning for the higher rated opponent." This is not correct; a player who is 100 points higher rated has an <expected score> of 0.64 (counting a draw as half a point), not a win probability of 0.64, which is definitely not the same thing. And originally Dr. Elo used a normal distribution for his calculations; it wasn't until later that a logistic distribution was found to give a better fit between predicted and actual performance. But this is a minor point and only really significant when the rating differences are large.
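To make the distinction concrete, here is a small Python sketch of the two models. The logistic formula is the standard FIDE one; for the normal model I assume Elo's per-player standard deviation of 200 points, so the difference of two ratings has standard deviation 200*sqrt(2):

```python
import math

def expected_score_logistic(rating_diff):
    """Standard (logistic) expected score: 1 / (1 + 10^(-d/400)).
    Note this is an expected *score* (win = 1, draw = 1/2),
    not a win probability."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

def expected_score_normal(rating_diff, per_player_sd=200.0):
    """Elo's original normal-distribution model; the difference of two
    independent ratings has standard deviation per_player_sd * sqrt(2)."""
    z = rating_diff / (per_player_sd * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# expected_score_logistic(100) is about 0.64 and
# expected_score_logistic(200) about 0.76; the two models
# agree closely at small rating differences.
```

This also illustrates the point above that normal vs. logistic only matters much when the rating difference is large.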

Apr-13-16  thegoodanarchist: <AylerKupp: <thegoodanarchist> I'm assuming that you mean a "normal" game and not an odds game or a game with another engine assisting a top player like the 2014 Nakamura – Stockfish match>

Yes, in fact I almost typed "normal game" in my post, but figured you would assume it.

Apr-13-16  thegoodanarchist: <Karposian: <tga> I think it may have been this game:

Ponomariov vs Fritz, 2005 >

Cool, thank you!

Premium Chessgames Member
  WinKing: Only 2 more days!!!

♘Norway 2016♘ !!! Norway 2016 !!! ♗Norway 2016♗

This tournament will run from April 19th thru April 29th 2016.

Participants include: Magnus Carlsen, Vladimir Kramnik, Anish Giri, Li Chao, Maxime Vachier-Lagrave, Levon Aronian, Veselin Topalov, Pavel Eljanov, Pendyala Harikrishna & Nils Grandelius


<<> Norway 2016! <>>

< 3 Prediction Contests: (Win virtual medals - Gold, Silver & Bronze) >

User: lostemperor - Predict the order the players will finish. (3 categories to medal in)

User: Golden Executive - Predict the result 1-0, 1/2, or 0-1 (3 categories to medal in)

This year will be the 10th Anniversary for this contest! (from 2007 to 2016 - 10 years running)

User: OhioChessFan - Predict the result 1-0, 1/2, or 0-1 & the number of moves. (4 categories to medal in)

All three of the organizers <lostemperor>, <Golden Executive> & <chessmoron> have confirmed they will be running their contests for this event.


Also don't forget about <chessgames> ChessBookie game for this event. He can't wait to take some or all of your chessbucks. ;)

ChessBookie Game

Don't miss out on the fun for this Super Event!!!

Premium Chessgames Member
  zanzibar: Hi <AK>, just a note saying hello as a placeholder for any questions you might have re: filter program.


Premium Chessgames Member
  zanzibar: Just wondering, <AK>, if you plan on writing PGN parsing code in your steering program?

By this I mean, to the level of knowing if a move is legal or not, and to be able to map each move in an engine variation to a FEN yourself.

If you intend to write code at that level of detail, are you planning on using bitboards?

Premium Chessgames Member
  AylerKupp: <zanzibar> Thanks for responding. To answer your questions first, I did write a PGN parser (actually two) using Visual Basic for Applications (VBA). The first one was for my Analysis Tree Spreadsheet (ATSS, see my header above) and converts a *.pgn file into a *.csv file suitable for importing into Excel (I consider myself an Excel junkie). For each game certain fields in the game's header plus the game's moves fit into a single line of the *.csv file. This version of the PGN parser still does not handle comments since I've never been sufficiently motivated to fix it and something else always seems to be more important.

This PGN parser version optimistically assumes that the game moves in the *.pgn file are legal, so it does not make legality checks. But as part of the ATSS (which has a viewer) I did implement a sort of move-legality checker, since each time a move is made the viewer has to figure out the square that the moved piece came from. I think that I used a 12x12 square board (so that I didn't have to make special checks for a knight placed in a corner) and I indicated for each square whether it was empty or occupied by a piece or pawn.

I was proud that the move checker was table driven with a minimum of special coding. For each piece (and pawn) I had a table describing the piece's move capabilities, with special coding only for castling, pawn moves, captures (including en passant), and promotions. So that serves a similar purpose: if no source square can be found for the specified piece being moved, then it's an illegal move. It shouldn't be too difficult to convert the needed VBA code to Python code.
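The ATSS itself is VBA, but for anyone curious, a table-driven source-square finder along the lines described might look like this in Python. This is my own sketch, not AylerKupp's actual code: it uses the oversized "mailbox" board with off-board sentinel cells precisely so a knight offset from a corner never needs a bounds check, and it leaves out the special cases he mentions (castling, pawn moves, captures, promotions):

```python
# 12x12 "mailbox" board: a two-square border of OFF cells surrounds the
# real 8x8 board, so even a knight offset from a corner square lands on a
# valid list index instead of wrapping around.
OFF, EMPTY = -1, 0

def square(file, rank):                 # file, rank in 0..7 (a1 = 0, 0)
    return (rank + 2) * 12 + (file + 2)

def empty_board():
    board = [OFF] * 144
    for r in range(8):
        for f in range(8):
            board[square(f, r)] = EMPTY
    return board

# Table-driven move capabilities: (offsets, is_sliding).  Castling, pawn
# moves, captures (incl. en passant) and promotions would need the extra
# special-case code mentioned above.
MOVE_TABLE = {
    'N': ([-25, -23, -14, -10, 10, 14, 23, 25], False),
    'B': ([-13, -11, 11, 13], True),
    'R': ([-12, -1, 1, 12], True),
    'Q': ([-13, -12, -11, -1, 1, 11, 12, 13], True),
    'K': ([-13, -12, -11, -1, 1, 11, 12, 13], False),
}

def source_squares(board, piece, target):
    """Every square from which `piece` could have just moved to `target`.
    An empty result means the SAN move was illegal (or mis-parsed)."""
    offsets, sliding = MOVE_TABLE[piece]
    found = []
    for off in offsets:
        sq = target + off
        while sliding and board[sq] == EMPTY:
            sq += off
        if board[sq] == piece:
            found.append(sq)
    return found
```

For example, with a lone rook on a1, `source_squares(board, 'R', square(4, 0))` finds a1 as the only possible origin of Re1, and returns nothing once a piece blocks the first rank.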

The second PGN parser is a modification of the first which I'm using for the development of what I call the Single Game Predictor (SGP), an attempt to predict the results of individual games in major tournaments, a contest currently hosted by User: golden executive. It differs from the first PGN parser in that it looks for and saves different fields and skips all the game moves. And, since it skips the game moves, it doesn't care about legal or illegal moves.

As far as the SGP itself, it's proven to be more difficult and time consuming than I thought. I created a prototype that I used in 2 tournaments, and it successfully predicted 72% of the game outcomes in one and 69% of the game outcomes in the other. But I realized that the approach I used there was a dead end and I restarted with a different approach. And then my hard drive crashed and I didn't have any up-to-date backups (of course!), so I had to take my drive to a data recovery specialist who I hope will be able to recover most of the data. In the meantime, I'm writing a description of the SGP, what I have done and what I think I need to do, so that when I get my data back (you have to be optimistic about these things!) I can proceed at a faster rate. I hope to have it finished in time for the London Classic in early Dec-2016.

As far as your filter program goes, could you tell me in general terms what it does and how it's structured? I understand if you are hesitant to do so, since I don't know what plans, if any, you have for it, but I would be interested in and appreciate anything you are willing to share at this time. I was thinking of trying to write something like it to automate the data gathering for various tournaments so that I can validate the predictive ability of the SGP.

Premium Chessgames Member
  zanzibar: <AK> that's a lot to mull over...

We have different approaches, and toolsets, it seems. I don't use Excel so much, but rely on Octave for analysis (basically a Matlab clone).

None of the software I've been writing uses any board representation; I generally treated the PGN movelist as a string for the PGN module I use. It was a practical decision, since I mostly wanted to map each game into an object where I could manipulate the tags (e.g., for normalization).

I do have some specialized routines to strip out comments and variations in order to compare the movelist strings between games, looking for duplicates or divergences etc.
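That kind of stripping routine might look roughly like this. The code below is my guess at the general shape of such a routine, not the actual module: it removes brace comments, possibly-nested parenthesised variations, and NAGs, leaving a bare movelist string that can be compared between games.

```python
# Hedged sketch of stripping {comments}, (variations), and $n NAGs from a
# PGN movetext string so two games' movelists can be compared directly.
import re

def strip_annotations(movetext):
    # Brace comments cannot nest in standard PGN, so a regex suffices.
    s = re.sub(r"\{[^}]*\}", " ", movetext)
    # Variations CAN nest, so track parenthesis depth with a counter.
    out, depth = [], 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif depth == 0:
            out.append(ch)
    # Drop Numeric Annotation Glyphs like $14 and collapse whitespace.
    s = re.sub(r"\$\d+", "", "".join(out))
    return " ".join(s.split())
```

Two games are then duplicates when their stripped strings match, and a divergence shows up at the first differing token.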

This module has stood me in good stead and allowed me to work through a lot of Bistro stuff.

My recent engine-steering program was cobbled together quickly too, in about a week. Its design goal is twofold: to look for combinations, and to do "blunder" checks.

This is a work-in-progress, but I'll be glad to describe it in the rough. It's really very simple-minded. So far, the vast bulk of the work is just getting the framework set up and working.

The first thing I did was attempt to replicate ChessTempo (CT) or the Chess Tactics Server (CTS).

One idea is to scan games looking for those that end in a mate, or a mate-in-progress. The approach is to backtrack from the end of the game until there isn't a forced mate, then go forward one ply.

This works rather well, until the mates are longish. Then a player might find suboptimal moves that still preserve the mate.
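The backtracking step can be sketched independently of any particular engine. In this sketch (my own framing, not the actual program), `has_forced_mate` is an injected predicate, which a real implementation would answer via an engine's mate score; here it is purely an assumption:

```python
# Sketch of backtracking from a game's final position toward its start,
# stopping at the earliest ply where the engine still sees a forced mate.
# `has_forced_mate(position)` is a hypothetical engine-backed predicate.
def find_mate_start(positions, has_forced_mate):
    """positions[0] is the game start, positions[-1] the final position.
    Returns the index of the earliest position with a forced mate, or
    None if even the final position has no forced mate."""
    if not has_forced_mate(positions[-1]):
        return None
    i = len(positions) - 1
    while i > 0 and has_forced_mate(positions[i - 1]):
        i -= 1
    return i
```

The long-mate problem described above shows up here as the oracle answering "yes" over a long suffix of the game even when the players' moves were suboptimal.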

I could replicate <CT> and diverge from the gameplay, but I haven't done so as yet. Besides, sometimes there are lessons to be learned from the different approaches.

Next, I added another routine to try to filter combinations, again searching from the end of the game, backwards.

Here I looked for moves that I denote as "sharp". That means running the engine in MultiPV (MPV) mode, say with 3 lines. Then the main line (ML) is required to be demonstrably better than the other lines, by some programmable threshold.

That's what I mean by "sharp". There's a best move, and it's incontestable.

In addition, I require the game to be "won", as defined by another threshold.

Having found such a position in a game, I then backtrack again, as long as the position is sharp.

Doing so, I unwind the combination, hopefully back to its start.
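The sharp-and-won backtracking described above can be sketched as follows. This is my paraphrase of the heuristic, not the actual program; the threshold values and the pawn-unit scores are assumptions for illustration:

```python
# Hedged sketch of the "sharp" combination filter: a position is sharp when
# the engine's best MultiPV line beats the runner-up by a margin, and "won"
# when the best score clears a second bar. Threshold values are assumed.
SHARP_MARGIN = 1.5   # ML must beat the 2nd line by this much (hypothetical)
WIN_BAR = 3.0        # "won" threshold in pawns (hypothetical)

def is_sharp(multipv_scores, margin=SHARP_MARGIN):
    """multipv_scores: engine scores of the top lines, best first."""
    return len(multipv_scores) >= 2 and \
        multipv_scores[0] - multipv_scores[1] >= margin

def combination_start(scores_per_ply, margin=SHARP_MARGIN, win=WIN_BAR):
    """Backtrack from the end of the game while each position stays sharp.
    scores_per_ply[i] holds the MultiPV scores at ply i. Returns the ply
    where the combination appears to begin, or None if the final position
    isn't both sharp and won."""
    last = scores_per_ply[-1]
    if not (is_sharp(last, margin) and last[0] >= win):
        return None
    i = len(scores_per_ply) - 1
    while i > 0 and is_sharp(scores_per_ply[i - 1], margin):
        i -= 1
    return i
```

The returned ply is the hoped-for start of the combination: the first position in the sharp run.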

I would rate this effort a provisional success as well, especially given how crude the heuristic is.

Feeling a little bit overconfident, since I used a tactics-rich dataset for testing, I then decided to filter Sinquefield 2016.

Utter failure: only one game (Topalov--Svidler, R1) made the cut.

The reason, of course, is that super-GMs rarely make outright tactical blunders in classical games.

So, a refinement for my filtering-program was necessary. The idea is to find interesting positions, but with much lower thresholds.

Again, I look for "sharp" positions, but now I play the game forwards and look at all positions, starting at move 10. The threshold is lowered to 0.75 at the moment. No win is required, just a clear "best move".
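The refined forward filter is a small variant of the same idea. Again this is my own sketch of the description, with the ply-index convention and structure as assumptions (ply 18 corresponding roughly to White's move 10):

```python
# Sketch of the refined forward scan: flag every position from move 10 on
# where the best engine line beats the second-best by 0.75 pawns. No "won"
# requirement. Ply numbering and names here are my own assumptions.
def interesting_positions(scores_per_ply, start_ply=18, margin=0.75):
    """scores_per_ply[i]: MultiPV scores (best first) at ply i.
    Returns the indices of plies with a clear best move."""
    hits = []
    for i in range(start_ply, len(scores_per_ply)):
        top = scores_per_ply[i]
        if len(top) >= 2 and top[0] - top[1] >= margin:
            hits.append(i)
    return hits
```

Lowering the margin is what turns the tactics filter into an "interesting position" finder suited to near-blunder-free super-GM play.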

This effort yields some modest results.

At this level of play, the hits are few enough that it's quite manageable to hand scan the games. I did this, using it to guide me to certain games, where I could compare my results with the kibitzing on <CG>.

It seems to be somewhat useful, though definitely still in need of refinement.



Copyright 2001-2016, Chessgames Services LLC