Chessgames.com User Profile Chessforum

AylerKupp
Member since Dec-31-08 · Last seen Sep-21-17
About Me (in case you care):

Old timer from the Fischer, Reshevsky, Spassky, Petrosian, etc. era. Active while in high school and early college, but not much since. Never rated above the low 1800s and highly erratic; I would occasionally beat much higher rated players and just as often lose to much lower rated players. Highly entertaining combinatorial style; everybody liked to play me since they were never sure what I was going to do (neither did I!). When facing a stronger player, many players try to even the chances by steering toward simple positions where they can see what is going on. My philosophy in those situations was to try to even the chances by complicating the game to the extent that neither I nor the stronger player would be able to see what was going on! Alas, this approach no longer works in the computer age. And, needless to say, my favorite all-time player is Tal.

I also have a computer background and have been following with interest the developments in computer chess since the days when computers couldn't always recognize illegal moves and a patzer like me could beat them with ease. Now it's me who can't always recognize illegal moves, and any chess program can beat me with ease.

But after about 4 years (a lifetime in computer-related activities) of playing computer-assisted chess, I think I have learned a thing or two about the subject. I have conceitedly defined "AylerKupp's corollary to Murphy's Law" (AKC2ML) as follows:

"If you use your engine to analyze a position to a search depth=N, your opponent's killer move (the move that will refute your entire analysis) will be found at search depth=N+1, regardless of the value you choose for N."

I'm also a food and wine enthusiast. Some of my favorites are German wines (along with French, Italian, US, New Zealand, Australia, Argentina, Spain, ... well, you probably get the idea). Among my early favorites were wines from the Ayler Kupp vineyard in the Saar region, hence my user name. Here is a link to a picture of the village of Ayl with a portion of the Kupp vineyard on the left: http://en.wikipedia.org/wiki/File:A...

You can send me an e-mail whenever you'd like at aylerkupp gmail.com.

And check out a picture of me with my "partner", Rybka (Aylerkupp / Rybka) from the CG.com Masters - Machines Invitational (2011). No, I won't tell you which one is me.

-------------------

Analysis Tree Spreadsheet (ATSS).

The ATSS is a spreadsheet developed to track the analyses posted by team members in various on-line games (XXXX vs. The World, Team White vs. Team Black, etc.). It is a poor man's database which provides some tools to help organize and find analyses.

I'm in the process of developing a series of tutorials on how to use it and related information. The tutorials are spread all over this forum, so here's a list of the tutorials developed to date and links to them:

Overview: AylerKupp chessforum (kibitz #843)

Minimax algorithm: AylerKupp chessforum (kibitz #861)

Principal Variation: AylerKupp chessforum (kibitz #862)

Finding desired moves: AylerKupp chessforum (kibitz #863)

Average Move Evaluation Calculator (AMEC): AylerKupp chessforum (kibitz #876)

-------------------

ATSS Analysis Viewer

I added a capability to the Analysis Tree Spreadsheet (ATSS) to display each analysis in PGN-viewer style. You can read a brief summary of its capabilities here: AylerKupp chessforum (kibitz #1044), and download a beta version for evaluation.

-------------------

Ratings Inflation

I have recently become interested in the increase in top player ratings since the mid-1980s and whether this represents a true increase in player strength (and if so, why) or if it is simply a consequence of a larger chess population from which ratings are derived. So I've opened up my forum for discussions on this subject.

I have updated the list that I initially completed in Mar-2013 with the FIDE rating list through 2016 (published in Jan-2017), and you can download the complete data from http://www.mediafire.com/file/zbrlx.... It is quite large (158 MB), and to open it you will need Excel 2007 or a later version (or a compatible spreadsheet program), since several of the later tabs contain more than 65,536 rows.

The spreadsheet also contains several charts and summary information. If you are only interested in that and not the actual rating lists, you can download a much smaller (813 KB) spreadsheet containing the charts and summary information from http://www.mediafire.com/file/k9i67.... You can open this file with a pre-Excel 2007 version or a compatible spreadsheet.

FWIW, after looking at the data I think that ratings inflation, which I define as an increase in ratings not accompanied by a corresponding increase in playing strength, is real, but it is a slow process. I refer to this as my "Bottom Feeder" hypothesis, and it goes something like this:

1. Initially (late 1960s and 1970s) the ratings for the strongest players were fairly constant.

2. In the 1980s the number of rated players began to increase exponentially, and they entered the FIDE-rated chess playing population mostly at the lower rating levels. The ratings of the stronger of these players increased as a result of playing weaker players, but their ratings were not sufficiently high to play in tournaments, other than open tournaments, where they would meet middle and high rated players.

3. Eventually they did. The ratings of the middle rated players then increased as a result of beating the lower rated players, and the ratings of the lower rated players then leveled out and even started to decline. You can see this effect in the 'Inflation Charts' tab, "Rating Inflation: Nth Player" chart, for the 1500th to 5000th rated player.

4. Once the middle rated players increased their ratings sufficiently, they began to meet the strongest players. And the cycle repeated itself. The ratings of the middle players began to level out and might now be ready to start a decrease. You can see this effect in the same chart for the 100th to 1000th rated player.

5. The ratings of the strongest players, long stable, began to increase as a result of beating the middle rated players. And, because they are at the top of the food chain, their ratings, at least so far, continue to climb. I think that they will eventually level out but if this hypothesis is true there is no force to drive them down so they will stay relatively constant like the pre-1986 10th rated player and the pre-1981 50th rated player. When this leveling out will take place, if it does, and at what level, I have no idea. But a look at the 2016 ratings data indicates that, indeed, it may have already started.

You can see in the chart that the rating increase, leveling off, and decline first starts with the lowest ranking players, then through the middle ranking players, and finally affects the top ranked players. It's not precise, it's not 100% consistent, but it certainly seems evident. And the process takes decades so it's not easy to see unless you look at all the years and many ranked levels.

Of course, this is just a hypothesis and the chart may look very different 20 years from now. But, at least on the surface, it doesn't sound unreasonable to me.

But looking at the data through 2016 it is even more evident that the era of ratings inflation appears to be over. The previous year's trends have either continued or accelerated; the rating for every ranking category, except for possibly the 10th ranked player (a possible trend is unclear), has either flattened out or has started to decline as evidenced by the trendlines.

-------------------

Chess Engine Non-Determinism

I've discussed chess engine non-determinism many times. If you run an analysis of a position multiple times, with the same engine, the same computer, and to the same search depth, you will get different results. Not MAY, WILL. Guaranteed. Similar results were reported by others.

I had a chance to run a slightly more rigorous test and described the results starting here: US Championship (2017) (kibitz #633). I had 3 different engines (Houdini 4, Komodo 10, and Stockfish 8) analyze the position in W So vs Onischuk, 2017 after 13...Bxd4, a highly complex tactical position. I made 12 runs with each engine; 3 each with threads=1, 2, 3, and 4 on my 32-bit 4-core computer with 4 GB RAM and MPV=3. The results were consistent across the engines:

(a) With threads=1 (using a single core) all 3 engines were deterministic. In every run each engine selected the same top 3 moves, with the same evaluations and, obviously, the same move rankings.

(b) With threads=2, 3, and 4 (using 2, 3, and 4 cores) none of the engines behaved deterministically. Repeated runs of the same engine occasionally produced different top moves, with different evaluations and different move rankings.

I've read that the technical reason for the non-deterministic behavior is that the alpha-beta search algorithms used by all the top engines are highly sensitive to move ordering in the search tree, and that under multi-threaded operation this ordering varies as each thread gets interrupted by higher-priority system processes. I have not had the chance to verify this, but there is no disputing the results.
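The move-ordering sensitivity is easy to see even in a toy search. The sketch below is a generic textbook alpha-beta (not any engine's actual code) run twice over the same small tree with two different move orders: the minimax value is identical, but the number of nodes visited is not, which is why perturbing the ordering, as thread scheduling does, changes what a fixed-depth or fixed-time search gets to examine.

```python
# Minimal alpha-beta over a hand-built tree of leaf evaluations,
# counting visited nodes to show how pruning depends on move order.

def alphabeta(node, alpha, beta, maximizing, counter):
    counter[0] += 1
    if isinstance(node, int):          # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cutoff
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, counter))
            beta = min(beta, value)
            if alpha >= beta:          # alpha cutoff
                break
        return value

# Same depth-2 tree, two different move orders at the root.
good_order = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # best line searched first
bad_order  = [[2, 4, 6], [14, 5, 2], [3, 12, 8]]   # best line searched last

for tree in (good_order, bad_order):
    n = [0]
    v = alphabeta(tree, float("-inf"), float("inf"), True, n)
    print("value:", v, "nodes visited:", n[0])     # same value, fewer nodes
```

In a real engine the effect compounds: a different number of nodes searched means different entries in the hash table, so later iterations can settle on different principal variations.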

What's the big deal? Well, if the same engine gives different results each time it runs, how can you determine what the real "best" move is? Never mind that different engines of relatively equal strength (as determined by their ratings) give different evaluations and move rankings for their top 3 moves, and that the evaluations may differ as a function of the search depth.

Since I believe in the need to run analyses of a given position using more than one engine and then aggregating the results to try to reach a more accurate assessment of the position, I used to run sequential analyses of the same position using 4 threads and a hash table = 1,024 MB. But since I typically run 3 engines, I found it more efficient to run all 3 engines concurrently, each with a single thread and a hash table = 256 MB (to prevent swapping to disk). Yes, a single thread runs at about 1/2 the speed of 4 threads, but running the 3 engines sequentially then requires 3X the time while running them concurrently requires only 2X the time. That is a one-third reduction in the time needed to run all 3 analyses to the same depth, and it resolves the non-determinism issue as well.
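Spelling out that timing arithmetic, under the stated assumptions (one engine reaching the target depth takes time T with 4 threads, and a single thread runs at half that speed):

```python
# Hypothetical timing comparison; T is the assumed time for one engine
# to reach the target depth using 4 threads.
T = 1.0

sequential_4_threads = 3 * T   # 3 engines analyzed one after the other
concurrent_1_thread = 2 * T    # 3 engines in parallel, each at half speed

saving = 1 - concurrent_1_thread / sequential_4_threads
print(f"time saved: {saving:.0%}")   # one third of the sequential time
```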

So, if you typically run analyses of the same position with 3 engines, consider running them concurrently with threads=1 rather than sequentially with threads=4. You'll get deterministic results in less total time.

-------------------

Any comments, suggestions, criticisms, etc. are both welcomed and encouraged.

-------------------

Chessgames.com Full Member

   AylerKupp has kibitzed 10256 times to chessgames
   Sep-21-17 World Cup (2017) (replies)
 
AylerKupp: <<JPi> Ding Liren play better than So here.> <SugarDom> So yonara! Be careful. You might be labeled a So-hater.
 
   Sep-20-17 AylerKupp chessforum
 
AylerKupp: <cro777> Thanks for the link. I just saw this post since I don't check my forum very often. It's a lot to digest and it will take me a while since I have several other things to attend, but I'll get back to you, hopefully soon.
 
   Sep-16-17 Aronian vs Ivanchuk, 2017 (replies)
 
AylerKupp: <<extremepleasure2> Wesley So's peak rating was 2822 and his current rating is 2810. Vallejo Pons' peak rating was 2724 and his current rating is 2710.> No, per the latest FIDE rating list (Sep-2017) So's rating is 2792 and Vallejo-Pons' rating is 2717. Still, using ...
 
   Sep-13-17 A Giri vs Ivanchuk, 2017 (replies)
 
AylerKupp: <Sally Simpson> Karjakin? I think that maybe you've been posting on this site too much!
 
   Sep-12-17 F Rhine vs D Burris, 1997 (replies)
 
AylerKupp: <<flimflam48> Chess & computers! What's the point of using a search engine...or 2 or 3!!.. because it's a form of cheating!> As <FSR> said, it's not cheating if they let you do it. Besides, it takes a lot of time and effort to use engines effectively. And if ...
 
(replies) indicates a reply to the comment.

De Gustibus Non Disputandum Est

Kibitzer's Corner
< Earlier Kibitzing  · PAGE 53 OF 53 ·  Later Kibitzing>
Nov-03-16
Premium Chessgames Member
  AylerKupp: <diceman> BTW, I just downloaded and installed the latest Stockfish 8. I had it analyze the same position that you gave earlier:


[diagram of the position]

But whereas Stockfish 7 needed 3:09 of calculation on my machine to find a mate in 18 for Black at d=33, Stockfish 8 needed only 1:02 to find a slightly different mate in 18 at d=30. So I would say that in this case Stockfish 8 is definitely better than Stockfish 7.

I can hardly wait for Stockfish 9!

Nov-08-16
Premium Chessgames Member
  AylerKupp: <Tiggler> I looked at Table 8.1b per https://www.fide.com/fide/handbook.... and recorded both the lowest and midpoint values of the "RtgDif" column. I then plugged these values into the Excel 2003 NORMDIST function [ NORMDIST(x, Mean, StdDev, TRUE) ] with Mean = 0 and StdDev = 200, the TRUE giving the CDF. I also looked at the CDF definition in both Wikipedia and Wolfram and calculated the CDF using both the lowest and midpoint values of the "RtgDif" column and the formula CDF = ½*[1 + ERF(z)], where z = (x − Mean)/(StdDev*SQRT(2)), using Excel 2003's ERF function.

And, in an attempt to ensure that Excel was not providing the wrong values, I looked up a published table of the Normal CDF (https://homes.cs.washington.edu/~jr...).

After rounding all values to two decimal places to correspond with FIDE's table, this is what I got with FIDE's table shortened to list only every 10th value (because I'm lazy):

RtgDif Low Mid H X(H(M)) E(H(M)) Table
0-3 0 2 0.50 0.50 0.50 0.50
69-76 69 73 0.60 0.64 0.64 0.64
146-153 146 150 0.70 0.77 0.77 0.77
236-245 236 241 0.80 0.89 0.89 0.89
358-374 358 366 0.90 0.97 0.97 0.97
> 735 736 768 1.00 1.00 1.00 1.00

where:

Low = Low value of RtgDif

Mid = Midpoint value (rounded) of RtgDif(Low) and RtgDif(High). For RtgDif > 735 I used the arbitrary upper bound = 800 to calculate the midpoint value, but this doesn't affect anything.

H = FIDE's CDF value for upper side of the CDF curve

X(H(M)) = CDF value calculated by Excel using NORMDIST and the midpoint of RtgDif

E(H(M)) = CDF value calculated by Excel using ERF and the midpoint of RtgDif

Table = Value of CDF from the published table after dividing each midpoint of the FIDE RtgDif by 200 to reflect that the table used StdDev = 1.

As you can see, my CDF values calculated using Excel and the published table match, but they don't match the FIDE table.

I repeated the calculations using Excel 2010 and the supposedly improved NORM.DIST and ERF.PRECISE functions but got the same results as with Excel 2003's NORMDIST and ERF functions. Probably not surprising given that I rounded everything off to 2 decimal places to match the accuracy of FIDE's Table 8.1b.

I checked and you are right, FIDE still uses the Normal Distribution curves rather than the Logistic Curves. Maybe I got confused because both the USCF and Glickman's system (Glicko) use the Logistic Distribution curves and in more than one place I've read that the Logistic Distribution Curves give a better fit to the actual data than the Normal Distribution curves. If that's indeed true, silly of me to think that FIDE would modify their system to make it more accurate. After all, what can you expect from an organization which institutes the Rule of 400 (originally the Rule of 350) to satisfy some of their GM members who complained of losing too many rating points if they lost to a much lower rated opponent. As though their losing to a much lower rated opponent would be the fault of the rating system!

Still, I sort of see FIDE's problem. If they changed their ratings calculation procedure, it would become more difficult to compare a player's performance over time, since some of their ratings would have been calculated using the Normal Distribution curves and some using the Logistic Distribution curves. Then again, if using the Logistic Distribution curves is indeed more accurate (although I doubt that the differences would be significant), perhaps it could be phased in gradually by continuing to calculate the ratings of players who already have a rating using the Normal Distribution curves and using the more accurate curves only for newly-rated players.

Nov-08-16
Premium Chessgames Member
  Tiggler: Try using StdDev = 2000/7 .

I did and then I checked every integer rating difference from 0 to 800. The difference from FIDE tables for these 801 values is 0.00 every time.
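This check is easy to reproduce outside Excel. A short sketch (using Python's math.erf simply as a stand-in for the spreadsheet functions discussed above) computes the Normal CDF at the RtgDif midpoints quoted from the table, once with StdDev = 2000/7 and once with StdDev = 200:

```python
import math

def cdf(rating_diff, std_dev):
    """Normal CDF with mean 0, evaluated at the given rating difference."""
    z = rating_diff / (std_dev * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))

# RtgDif midpoints taken from the table rows quoted above
for diff in (2, 73, 150, 241, 366):
    print(diff, round(cdf(diff, 2000 / 7), 2), round(cdf(diff, 200), 2))
```

With StdDev = 2000/7 the rounded values come out 0.50, 0.60, 0.70, 0.80, 0.90, matching FIDE's H column; with StdDev = 200 they come out 0.50, 0.64, 0.77, 0.89, 0.97, matching the Excel-computed columns.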

Nov-09-16
Premium Chessgames Member
  AylerKupp: <Tiggler> You're right, that makes the FIDE tables match the calculated Normal CDF values. And I also found your original post of 3-23-16. I guess we shouldn't blame Dr. Elo too much for using the approximation 1/SQRT(2) = 0.7 (which turns 200*SQRT(2) into 2000/7), since he was doing the calculations with pencil and paper and needed to simplify them as much as possible without introducing significant errors.

But what does this do to the accuracy of the FIDE ratings calculations? Or, more to the point, how closely do the FIDE rating calculations using the Normal CDF match actual game results, and would they match actual game results better using a different distribution?

I started to try to do that for other reasons. I found a *.pgn games database, KingBase, which matched my needs perfectly and was free to download. It contains about 1.8 million games played since 1990, with no games shorter than 6 moves and no games where either player was rated below 2000. Unfortunately, even though its authors claim that it is updated monthly, there appear to have been no updates since Mar-2016.

I had written a parser for *.pgn files that I adapted to create *.csv files from the *.pgn files, and I loaded the *.csv files into Excel. Once in Excel I could filter out games played prior to 1998 (when FIDE stopped rounding ratings to the nearest 5 points), and by filtering out games containing the words "Simul", "Blind", "Blitz", "Rapid", and others I was able to compile a database of about 1.6 million OTB games played at classical time controls, grouped by the ratings of the players in increments of 100 rating points. I could then determine the results distribution for White wins, White draws, and White losses for each of the rating levels. My goal was to try to find the best-fit distribution(s) for each game result and rating level.
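The kind of parser described above can be sketched in a few lines. This is not the author's lost code; it is a minimal stdlib-only reconstruction, with the tag names and filter words taken from the description:

```python
import re

TAG_RE = re.compile(r'\[(\w+)\s+"([^"]*)"\]')
SKIP_WORDS = ("Simul", "Blind", "Blitz", "Rapid")  # per the filtering above

def games(pgn_text):
    """Yield one {tag: value} dict per game in a PGN string."""
    headers = {}
    for line in pgn_text.splitlines():
        m = TAG_RE.match(line.strip())
        if m:
            headers[m.group(1)] = m.group(2)
        elif headers and line.strip():
            # First non-tag, non-blank line is movetext: headers complete.
            yield headers
            headers = {}
    if headers:
        yield headers

def classical_since(pgn_text, min_year=1998):
    """Keep only games from min_year onward whose Event looks classical."""
    for g in games(pgn_text):
        if any(w in g.get("Event", "") for w in SKIP_WORDS):
            continue
        year = re.match(r"\d{4}", g.get("Date", ""))
        if year and int(year.group()) >= min_year:
            yield g
```

From each surviving game's headers (WhiteElo, BlackElo, Result) one could then bucket results by 100-point rating bands, as described.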

Alas, before I could finish it I lost all my data, including my *.pgn parser, due to the aforementioned disk crash and I don't know if I will ever have the time and motivation to regenerate it. Too bad.

Nov-09-16
Premium Chessgames Member
  Tiggler: <I guess we shouldn't blame Dr. Elo too much for using the approximation 1/SQRT(2) = 0.7 since he was doing the calculations with pencil and paper and needed to simplify them as much as possible without introducing significant errors.>

As far as the value chosen for the s.d. is concerned, accuracy doesn't really enter into it, because the choice is arbitrary. Dr. Elo could have chosen sd = 1 and mean rating = 0. Then we could just use the univariate standard normal distribution.

The chosen distribution and the game results generate the rating scale, not the other way round.

Nov-10-16
Premium Chessgames Member
  AylerKupp: <Tiggler> I only partly agree. True, the value that we use for the SD is largely arbitrary, but in his book "The Rating of Chessplayers, Past and Present" Dr. Elo claims to have chosen SD = 200 as the class interval by convention.

What matters, at least for my purposes, is the <predictive> accuracy of the rating system. After all, in the Elo system it's the rating differential that defines the expected result of a sufficiently large series of games between two players. A rating system that more accurately predicts this expected result is "better" (in the sense of being more accurate) than one that predicts it less accurately. And, since the accuracy of a rating system is at least partly based on the probability distribution used, the selection of the probability distribution and its CDF will influence its predictive ability.

Nov-10-16
Premium Chessgames Member
  Tiggler: <AylerKupp> I have to tell you of a curious result that I discovered, and which <Gypsy> helped me to understand a few years ago.

Here it is:

If all players perform on average according to their current rating with random variation in accordance with the distribution used to generate their new ratings, then the population rating distribution must necessarily diverge. The population distribution will assume a gaussian shape and will keep a constant mean, but the standard deviation of the population ratings diverges and will NECESSARILY increase without limit.

A stable distribution of the population rating is only possible if the higher rated player in each game consistently underperforms by a small margin. Otherwise the top ratings must inflate.

This is easy to prove with a simple simulation.

The reason for it is also known, and I'll find the links that help to put together the explanation if you are interested.
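As a concrete version of the "simple simulation" mentioned above, here is one possible sketch (the modelling choices - K-factor, pool size, decisive games only - are illustrative assumptions, not anyone's actual code): a pool of identical players, each performing exactly according to the Elo expected score, with zero-sum rating updates. The mean rating stays put while the standard deviation of the pool keeps growing.

```python
import random
import statistics

random.seed(1)
K = 10
ratings = [2000.0] * 300           # identical players, identical ratings

def expected(ra, rb):
    """Standard Elo expected score for the player rated ra."""
    return 1.0 / (1.0 + 10.0 ** ((rb - ra) / 400.0))

def play_round(ratings):
    order = list(range(len(ratings)))
    random.shuffle(order)          # random pairings each round
    for i in range(0, len(order) - 1, 2):
        a, b = order[i], order[i + 1]
        e = expected(ratings[a], ratings[b])
        s = 1.0 if random.random() < e else 0.0   # performs to expectation
        ratings[a] += K * (s - e)                 # exactly zero-sum update
        ratings[b] -= K * (s - e)

for _ in range(1000):
    play_round(ratings)

print(round(statistics.mean(ratings), 1))   # mean is conserved
print(round(statistics.stdev(ratings), 1))  # spread has diverged from 0
```

Because each game transfers exactly K*(s − e) points from one player to the other, the total (and hence the mean) is conserved, while each individual rating performs an unbiased random walk, so the population variance grows roughly linearly with the number of games played.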

Nov-11-16
Premium Chessgames Member
  AylerKupp: <Tiggler> Yes, I'm definitely interested. And, if you needed further convincing, that's what the data shows - sort of. I calculated and plotted the ratings difference between the 10th ranked player and the 5000th ranked player, between the 10th ranked player and the 4000th ranked player, between the 10th ranked player and the 3000th ranked player, etc. down to the difference between the 10th ranked player and the 50th ranked player from the end of 1966 to the end of 2015. From 1989 onwards, the rating differences between the ranked players are increasing, with the greatest increase between the 10th and 5000th ranked player and the smallest ratings difference increase between the 10th and 50th ranked player. I think that this is what you would expect if the standard deviation is increasing.

But prior to 1988 the reverse is true; the ratings differences are <decreasing>, which is what you would expect if the standard deviation is decreasing. And this doesn't make sense to me. It may be an artifact of the lowering of the ratings floor; there was no 5000th ranked player prior to 1986, no 4000th ranked player prior to 1982, no 3000th ranked player prior to 1980, and so on. And, since each player's initial rating is not as accurate as subsequent ratings, maybe these initial ratings were not accurate. But the same effect is noticeable, although to a lower degree, to as small a difference as between the 10th and 200th ranked player; a decrease in the rating difference between 1968 (the first time that ratings were available for the 100th and 200th ranked players) to 1989, then an increase in the rating difference. I don't know what to make of it.

Unfortunately the new data exceeded the column limits of Excel 2003 so in order to see it you must have a copy of Excel 2007 or later. I'll upload a summary version of the large spreadsheet so you can take a look at the data if you're interested. I'll see if I can figure out how to post the summary information in a *.pdf file so that those that don't have access to Excel 2007 or later can see the data.

Nov-11-16
Premium Chessgames Member
  Tiggler: Got the message that you are interested, and I do remember my offer to put together the explanation.

Just now I'm pondering how well to fulfill that commitment. It deserves a well-crafted dissertation. A few hints with invitations to further questions is the minimum default.

The longer you have to wait for my response, the more you are entitled to a big effort.

Nov-11-16
Premium Chessgames Member
  Tiggler: Here is the first essential link: https://en.wikipedia.org/wiki/Marko...

Elo's approach assumes:

(1) that there is a Markov process at work: there is a current state and a set of transition probabilities to the next state that depend only on the current state.

(2) that the current state is fully described by a set of numbers that are the rating of each player.

(3) that the transition probabilities are dependent only on the differences between ratings of the two players in each game. The probabilities do not depend on the location parameter: https://en.wikipedia.org/wiki/Locat...

Let's pause while I decide what is the next step and you digest that.

Nov-12-16
Premium Chessgames Member
  AylerKupp: <Tiggler> Don't trouble yourself, don't feel bound by your offer, and, above all, don't let it be a burden to yourself. I am interested, but I am in no rush to find out the information. After all, the current rating system will likely be with us for a long time.

So, yes, let's definitely pause and you can get back to it whenever you have both the time and the inclination. In the meantime, I'll look at the link you provided and refresh my knowledge of Markov chains.

Nov-12-16
Premium Chessgames Member
  Tiggler: I need to insert some discussion of items (1), (2) and (3) of the previous post. Are these assumptions, approximations, or what?

I chose to view them as axioms. The rating scale is generated from them. For example, concerning (3) - why should we believe that the transition probabilities associated with a game between a 2200 and a 2300 player are the same as those associated with a game between a 2700 and a 2800 player? We don't have to believe it, because we have asserted it as an axiom. Thus we can say that the rating interval between a pair of ratings is defined as the interval that corresponds to a given set of transition probabilities. The scale is generated from this assumption, just as the Celsius temperature scale was defined by the requirement that the change in resistance of a platinum thermometer between the ice point and the boiling point of water corresponds to 100, and therefore 200 Celsius is defined as that temperature which results in this same change when compared to the boiling point of water.

One snag, however, is that we do not have two fixed points! So instead we chose some arbitrary value of the expected game score between players of a given rating difference, which in turn defines the state transition probabilities of our Markov process.

Pause for thought ...

Nov-12-16
Premium Chessgames Member
  Tiggler: At this point it is time to mention two crucial examples of the Markov process: Wiener process and Ornstein-Uhlenbeck process. They were explained to me in 2012 by <Gypsy> on this page: Hans Arild Runde

Wiki's explanations are here:
https://en.wikipedia.org/wiki/Wiene... and https://en.wikipedia.org/wiki/Ornst...

Nov-12-16
Premium Chessgames Member
  Tiggler: And next we need to know about Martingales: https://en.wikipedia.org/wiki/Marti...

The feature of Martingales that makes them relevant to our discussion is this:

Consider the universe of rated games of chess as contests for rating points. If I want to bet on the result, I might use the "expected scores" that are the ones "predicted" by the rating procedure. If the actual <expected value> https://en.wikipedia.org/wiki/Expec... is equal to the one used in the rating procedure, then the contest for rating points has the expectation that the gain/loss of points by each player is, on average, zero. The contest is a fair game, and therefore the process is a Martingale.

Nov-14-16
Premium Chessgames Member
  Tiggler: Almost there:

If the stochastic process defined by chess games, chess ratings, and FIDE rating regulations is a Martingale, then we can invoke the Martingale Central Limit Theorem:

https://en.wikipedia.org/wiki/Marti...

This says that as the number of steps (games, tournaments, whatever) increases the change in ratings from initial values tends to a Gaussian distribution with zero mean and variance that is proportional to the number of steps.

This proves my statement made half a page above, AylerKupp chessforum

"If all players perform on average according to their current rating with random variation in accordance with the distribution used to generate their new ratings, then the population rating distribution must necessarily diverge. The population distribution will assume a gaussian shape and will keep a constant mean, but the standard deviation of the population ratings diverges and will NECESSARILY increase without limit."

Nov-14-16
Premium Chessgames Member
  Tiggler: There are many interesting corollaries, concerning, for example, rating floors; 400-point rule etc.

Also, I have not proved that ratings actually perform this way, because the proof depends on the assumption that players actually perform according to their ratings. If not, then the process is not a Martingale.

Before discussing an alternative, the Ornstein-Uhlenbeck process https://en.wikipedia.org/wiki/Ornst..., I'd like some feedback.

Nov-26-16
Premium Chessgames Member
  AylerKupp: <Tiggler> Sorry, but I've been busy with several personal obligations and I haven't had much time to devote to chess. And what little time I've had has been devoted to following the Carlsen - Karjakin match.

One obvious comment: players don't necessarily perform according to their ratings. In every tournament there are players who perform better than expected and players who perform worse than expected. If that wasn't the case then there wouldn't be any point in having tournaments or matches; the winners would be known beforehand.

So any "proof" must be probabilistically based on the spread of player's performance, and I'm not sure if that is possible or meaningful.

Nov-26-16
Premium Chessgames Member
  Tiggler: <AylerKupp>

It is obvious of course that players cannot perform exactly according to their rating "expected score", except in a statistical sense. What I said before assumes this:

<If the actual <expected value> is equal to the one used in the rating procedure, then the contest for rating points has the expectation that the gain/loss of points by each player is, on average, zero. The contest is a fair game, and therefore the process is a Martingale.>

Dec-24-16
Premium Chessgames Member
  Golden Executive: Merry Christmas and a Happy New Year 2017 to you and yours <AylerKupp>!
Dec-24-16
Premium Chessgames Member
  WinKing: Merry Christmas to you <AK>! :)
Jan-17-17
Premium Chessgames Member
  Golden Executive: Happy Birthday <AylerKupp>!
May-17-17
Premium Chessgames Member
  zanzibar: <AK> - idle curiosity, but is your avatar an image of a wine label?

Or what?


Jun-07-17
Premium Chessgames Member
  AylerKupp: <zanzibar> Yes, my avatar is a copy of a wine label from the Ayler Kupp vineyard in the Saar Valley, Germany. German wines are some of my favorites and the wines from the Ayler Kupp vineyard were one of the first German wines from the Saar region that I tried and liked very much.

Alas, the label did not come out too clearly. As <morfishine> remarked, it looks like a washed-up diploma. And he is right. I tried to make it sharper by changing all the gray pixels to black pixels but that was very time consuming because it was a pixel-by-pixel operation with the archaic software that I have. I never finished it and now, as a result of my oft-mentioned disk crash and data loss, I lost what little I had done. Maybe some day (doubtful) I will be sufficiently motivated to start over. Perhaps if I drink enough wine ...

Sep-01-17
Premium Chessgames Member
  cro777: <AylerKupp> If you have an idea what else can be mined from this data set:

http://blog.scottlogic.com/2017/09/...

Sep-20-17
Premium Chessgames Member
  AylerKupp: <cro777> Thanks for the link. I just saw this post since I don't check my forum very often. It's a lot to digest and it will take me a while since I have several other things to attend, but I'll get back to you, hopefully soon.
