**This is a quantitative and comparative analysis of the Lasker - Schlechter World Championship Match (1910) and the Anand - Kramnik World Championship Match (2008). For background, see the Luis Ramirez de Lucena (kibitz #8) page (the link goes to the first post that was the catalyst for the discussion). The aim of this Project is to provide a detailed and objective comparison of the accuracy of the play of the world championship contestants of 1910 with that of the warriors who contested the 2008 match.**

The first part of the Project was to construct a method based on GM John Nunn's engine blunder-check analysis of the Karlsbad 1911 tournament (http://www.chess.co.uk/twic/jwatson...), but hopefully one that is more sophisticated and rigorous. The basic idea is to rate the quality of a game and a match by its mistakes. I take it as axiomatic that a result cannot occur without at least one mistake.

<The methodology> was as follows:

* each move of each game was subjected to a <minimum> 16 ply evaluation as part of a continuous sliding analysis. The analysis started from either the first move or the last move of a game and proceeded to the other end of the game, without the engine being switched off

* the process was reversed once all the moves had been subjected to an initial evaluation, with each move receiving another set of 16 ply <at minimum> engine evaluations moving in the opposite direction. In other words, if the moves were inputted and evaluated from the beginning of the game, each move was then re-evaluated from the end of the game back to the beginning.

* each game was mapped in its entirety during one engine session to make the most of accumulated hash tables that are so useful for engine analysis

* once the raw data from the engine were assembled, errors were identified and weighted according to their seriousness.
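The raw output of this process is essentially a list of engine evaluations, one per position. As a purely illustrative sketch (this is not the Project's actual tooling; the function names and sample evaluations are my own, for demonstration only), the shift caused by each move, and the flagging of candidate errors of 0.60 or more against the mover, can be expressed like this:

```python
# Illustrative sketch only: given a list of engine evaluations, one per
# position (from White's point of view, in pawns), compute the evaluation
# shift caused by each move and flag candidate errors of 0.60 or more.

def evaluation_shifts(evals):
    """Return (move_index, shift) pairs; shift is the change in evaluation
    caused by the move leading from position i to position i + 1."""
    return [(i + 1, round(evals[i + 1] - evals[i], 2))
            for i in range(len(evals) - 1)]

def candidate_errors(evals, threshold=0.60):
    """Flag moves whose evaluation shift is 0.60 or more against the mover.
    Odd move indices are White's moves (a drop in evaluation hurts White);
    even move indices are Black's (a rise in evaluation hurts Black)."""
    flagged = []
    for idx, shift in evaluation_shifts(evals):
        mover_loss = -shift if idx % 2 == 1 else shift
        if mover_loss >= threshold:
            flagged.append((idx, round(mover_loss, 2)))
    return flagged

# Hypothetical fragment: White's second move (move index 3) drops the
# evaluation from +0.25 to -0.70, a shift of 0.95 against White.
print(candidate_errors([0.20, 0.15, 0.25, -0.70, -0.60]))
```

Flagged moves would then be checked by deeper analysis of the critical variations, as described below, before being weighted.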

Two separate weighting methods were used, defining errors according to their seriousness. These methods have been abandoned for future phases of the Project for reasons outlined below. However, as they were invoked during the mapping of the 21 games of the 1910 and 2008 World Championship matches, they are described here for the curious, and are incorporated into the game summaries that follow.

<Weighting method A>:

* a blunder was defined as:

(a) a losing move, ie: a move that shifts the position evaluation beyond 1.40+- or 1.40-+, regardless of the proximate change in evaluation

(b) a move that costs a win, ie: a move that shifts the position evaluation from beyond 1.40+- or 1.40-+ to below 1.40+- or 1.40-+, regardless of the proximate change in evaluation

(c) a move that causes an engine evaluation shift of greater than 1.20, unless the game is already a forced loss and the side with the superior position has not made a blunder that reduces the evaluation to below 1.40. In other words, the loser in a position that is a forced loss is not penalized for suicidal moves.

* a bad move is defined as a move that causes an engine evaluation shift of between 0.80 and 1.20 unless it met the definition of blunder.

* successive moves or short sequences of moves that accumulate an evaluation shift in the bad-move range (0.80-1.20) may be deemed a single bad move if they are considered instrumental in causing a significant deterioration in a player's game.

* inflated evaluations that may occur in some endgames will be dealt with on a case by case basis, depending on whether a result has occurred or would have in the normal course of events.

* default game weighting is <0>. Each blunder adds <2> and each bad move adds <1>.

<Weighting method B>

* a blunder is defined as:

(a) a losing move, ie: a move that shifts the position evaluation beyond 1.40+- or 1.40-+, regardless of the proximate change in evaluation

(b) a move that costs a win, ie: a move that shifts the position evaluation from beyond 1.40+- or 1.40-+ to below 1.40+- or 1.40-+, regardless of the proximate change in evaluation

(c) a move that causes an engine evaluation shift of greater than 1.20, with the same caveat that applies to a blunder in <weighting method A>.

* a bad move is defined as a move that causes an engine evaluation shift of between 0.80 and 1.20 unless it meets the definition of a blunder.

* a dubious move is defined as a move that causes an engine evaluation shift of between 0.60 and 0.79 unless it meets the definition of a blunder.

* default game weighting is <0>. Each blunder adds <2.0>, each bad move adds <1.0>, and each dubious move adds <0.5> to a game's weighting.

* successive moves or short sequences of moves that accumulate an evaluation shift equivalent to a dubious move <will not> be deemed a dubious move.

* inflated evaluations that may occur in endgames will be dealt with on a case by case basis, depending on whether a result has occurred or would have in the normal course of events.
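For the curious, the arithmetic of weighting method B can be sketched in a few lines of Python. This is a simplification for illustration only: blunder rules (a) and (b) depend on the game's win/draw status and need game context, so this sketch applies only the shift-size rule (c); the function names and sample shifts are hypothetical.

```python
# Minimal sketch of weighting method B's arithmetic, classifying moves
# purely by the size of the evaluation shift (rule (c) only; the
# status-change blunder rules (a) and (b) are omitted for simplicity).

WEIGHTS = {"blunder": 2.0, "bad move": 1.0, "dubious move": 0.5}

def classify_shift(shift):
    """Classify a single move by the absolute size of its evaluation shift."""
    shift = abs(shift)
    if shift > 1.20:
        return "blunder"
    if shift >= 0.80:
        return "bad move"
    if shift >= 0.60:
        return "dubious move"
    return None  # smaller shifts are not counted under method B

def game_weighting(shifts):
    """Sum the weights of all classified errors; default weighting is 0."""
    total = 0.0
    for s in shifts:
        category = classify_shift(s)
        if category is not None:
            total += WEIGHTS[category]
    return total

# Hypothetical game with one blunder, one dubious move and one bad move:
print(game_weighting([0.10, 1.50, 0.65, 0.90]))  # 2.0 + 0.5 + 1.0 = 3.5
```

In the Project itself, of course, each flagged shift was first confirmed by deeper analysis and checked against the status-change rules before being weighted.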

The Project has abandoned weighting method A, preferring the more nuanced weighting method B, as it accounts for errors generating evaluation jumps of between 0.60 and 0.80. Some consideration was given to introducing a fourth level of significant error, namely non-blunders generating evaluation jumps of between 0.40 and 0.60, but this was dismissed as too unwieldy. Engine evaluation jumps over small intervals have less meaning, and such small shifts are subject to significant revaluation depending upon the depth of analysis used. This was also an issue with the existing error margins, but it was dealt with by analyzing critical variations until there was sufficient confidence that the evaluations fell into the correct error intervals. A fourth level of error would have required extravagant time and resources to generate the same confidence.

As you can see, some discretionary decisions needed to be made. A comprehensive discussion about the threshold issues and weighting methods used in this Project starts at this link: Bridgeburner chessforum (kibitz #302). It is interrupted by a 20 part posting of my findings of Game 6 in the Kramnik-Anand match of 2008, and then resumes.

<The engine used for this analysis> was Shredder 11 UCI, installed on a Pentium 4 with a 3GHz processor and 512MB of RAM.

<Lasker - Schlechter World Championship Match (1910)>:

<The first game of the match> - is weighted at <<0>> (no errors by either Schlechter or Lasker). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #27)

<The second game of the match> - is weighted at <<0>> (no errors by either Schlechter or Lasker). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #62)

<The third game of the match> - is weighted at <<0>> (no errors by either Schlechter or Lasker). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #138)

<The fourth game of the match> - is weighted at <<5.0>> (1 blunder by Lasker and 1 bad move and 1 blunder by Schlechter). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #45) and amended at this link: Bridgeburner chessforum (kibitz #190).

<The fifth game of the match> - is weighted at <<6.5>> (1 dubious move and 1 blunder by Schlechter, and 2 blunders by Lasker). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #168)

<The sixth game of the match> - is weighted at <<1.0>> (1 dubious move each by Schlechter and Lasker). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #200)

<The seventh game of the match> - is weighted at <<1.5>> (1 dubious move by Schlechter and 1 bad move by Lasker). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #35).

<The eighth game of the match> - is weighted at <<0>> as neither Lasker nor Schlechter made mappable errors. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #229)

<The ninth game of the match> - is weighted at <<8.0>>, as there were two blunders each by Lasker and Schlechter. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #254)

<The tenth and final game of the match> - is weighted at <<19.0>> as there are four blunders apiece by Schlechter and Lasker, plus 4 dubious moves by Schlechter and 2 dubious moves by Lasker. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #264)

<Anand - Kramnik World Championship Match (2008)>:

<The first game of the match> - is weighted at <<0>> representing no errors by either Kramnik or Anand. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #78)

<The second game of the match> - is weighted at <<3.0>> (1 bad move and 1 dubious move by Anand, and 1 bad move and 1 dubious move by Kramnik). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #107), and an amendment to that analysis starts here: Bridgeburner chessforum (kibitz #161) and includes the first three posts.

<The third game of the match> - is weighted at <<6.5>> (1 blunder by Anand, and 2 blunders and 1 dubious move by Kramnik).
Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #149) and is amended at the three posts starting here: Bridgeburner chessforum (kibitz #161)

<The fourth game of the match> - is weighted at <<0>> representing no errors by either Anand or Kramnik. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #279)

<The fifth game of the match> - is error weighted <<2.5>> (1 blunder and 1 dubious move by Kramnik). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #216)

<The sixth game of the match> - is error weighted <<6.5>> (2 blunders and one dubious move by Kramnik, and 1 blunder by Anand). Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #313)

<The seventh game of the match> - is error weighted at <<0>>, representing no errors by either Anand or Kramnik. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #339)

<The eighth game of the match> - is error weighted at <<0>>, representing no errors by either Anand or Kramnik. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #343)

<The ninth game of the match> - is error weighted at <<3.0>>, representing one bad move by Anand, and four dubious moves by Kramnik. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #347)

<The tenth game of the match> - is error weighted at <<7.0>>, representing <one bad move> and <two blunders> by Anand, and <one blunder> by Kramnik. Mapping and analysis starts from this link: Bridgeburner chessforum (kibitz #361)

<The eleventh and final game of the match> - is error weighted at <<0.5>> because Kramnik played one dubious move. Mapping and analysis for this game starts from the post to this link: Bridgeburner chessforum (kibitz #373)

SUMMARY: Game mapping of the two world championship matches Lasker - Schlechter World Championship Match (1910) and Anand - Kramnik World Championship Match (2008) has been completed. Raw results are posted above, but there will be analysis of the results posted here in the near future.

<IMPORTANT NOTE>:

As mentioned above, both weighting methods A and B have now been abandoned, as the weighting values have no mathematical relationship, and cannot therefore be manipulated with mathematical operators, ie: they cannot be meaningfully added, subtracted, multiplied or divided (thanks to <alexmagnus> for his constructive critique in this regard).

This does not pose a hindrance to evaluating the accuracy of the games, as the essential characteristic of the methodology was to establish reliable engine evaluations of every move in every game, and to tally all errors that cause evaluation deviations of 0.60 or more.

Errors are now therefore divided into two categories: namely <blunders>, defined as moves that change the status of a game from a win or a draw, and <other errors>, which change the confirmed evaluation of a position by 0.60 or more. The <other errors> in each game of the two championships were then further subdivided using evaluation error "bandwidths" of 0.20, starting from 0.60-0.79 and 0.80-0.99 and continuing through to 1.20-1.40. None of the games in the 1910 and 2008 championships featured <other errors> that caused evaluation jumps of over 1.20, even though it was technically possible to make such errors of up to 2.60-2.80 (between ∓1.40 and ±1.40).

Previously, errors were given handles such as <dubious moves> and <bad moves> to provide an easy grammatical "tag". The "bandwidth" method described in the previous paragraph has now expanded the array of errors. I invite anyone with constructive suggestions for naming these errors according to their seriousness to post them, and if I adopt them, the contributor will win a special prize, namely credit for coming up with the list.

Here is my preliminary list:

- a mistake that changes a game from a win or a draw will be deemed a <blunder>, regardless of the proximate change in evaluation (both the label and the definition are non-negotiable)

- more than 1.20: very bad move

- 1.00-1.19: bad move

- 0.80-0.99: dubious move

- 0.60-0.79: inaccuracy

- 0.40-0.59: minor inaccuracy (to be used in the next phase, which is mapping the Lasker - Capablanca World Championship Match (1921) and the Kasparov - Kramnik World Championship Match (2000))

again with the caveat that if any move featuring the above evaluation shifts changes the game status from won or drawn, it will be deemed a <blunder>. Evaluation jumps of between 0.20 and 0.39 won't be considered errors, again unless they change the game status, as such fluctuations are within the range that occasionally occurs in common and respected openings.
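The bandwidth labels above can be sketched as a simple classification function. This is illustrative only (the function name and the `changes_status` flag are my own invention); whether a move changes the game's status cannot be read off the evaluation shift alone, so it is supplied as a flag:

```python
# Sketch of the revised "bandwidth" labels. A move that changes the game's
# status (won or drawn) is a blunder regardless of the size of the shift;
# otherwise the label depends on the absolute evaluation shift.

def label_error(shift, changes_status=False):
    """Return the error label for a move, or None if it is not an error."""
    if changes_status:
        return "blunder"  # non-negotiable, whatever the shift size
    shift = abs(shift)
    if shift > 1.20:
        return "very bad move"
    if shift >= 1.00:
        return "bad move"
    if shift >= 0.80:
        return "dubious move"
    if shift >= 0.60:
        return "inaccuracy"
    if shift >= 0.40:
        return "minor inaccuracy"
    return None  # shifts below 0.40 are not counted as errors

print(label_error(0.65))                       # inaccuracy
print(label_error(0.30, changes_status=True))  # blunder
```

Note that under this scheme even a small shift becomes a blunder if it throws away a win or a draw, which is exactly the caveat stated above.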

This is a work in progress, and feedback on methodology, data evaluation techniques, or any other aspect relevant to this Project should be addressed to my forum.

I would like to acknowledge the contributions of my partner in this project, User: visayanbraindoctor, who has helped, and continues to help, me to develop and refine this Project, and its aims, objects and methodology. He has also checked all my work, posted the results of the mapping exercises to all the relevant game pages and constructively engaged both sceptics and supporters of this Project - and continues to do so.