< Earlier Kibitzing · PAGE 57 OF 59 · Later Kibitzing> |
Nov-01-10
 | | plang: <Pawnsgambit: I would also include: Nezhmetdinov A wildcard in this list.>
Nezhmetdinov was very talented and played a number of brilliant games, but his style is too one-dimensional to include him on a list of all-time greats. He was not that skilled at positional play, defense, or endgames. |
|
Nov-01-10
 | | tamar: <lostemperor> Curious that Anand's peak should be so late. Topalov's current woes suggest there can be a burnout factor in working with supercomputers that Anand has largely managed to avoid up until now. |
|
Nov-01-10
 | | alexmagnus: those linking to truechess: has anyone figured out how exactly the players are being ranked?:) |
|
Nov-01-10 | | percyblakeney: <has anyone figured out how exactly the players are being ranked?:)> Unconvincingly... |
|
Nov-01-10
 | | lostemperor: <tamar> Yes, I was wondering about that too. I think Anand's lifestyle contributes to his longevity. For all I know he can stay at the top for another 5 years, but top chess needs so much energy that I think it favours youth :) I also think that since computers made that list, Anand has learned with each year, like anyone else, to play as close as possible to what computers suggest! <alexmagnus> The list measures only the deviation of the moves of the 16 champions from the computer's move evaluations (of a certain program), nothing more and nothing less. So it is not totally complete. It does not determine which player is greater! Personally, I think only if Anand gets a rating of 2851 or more can he claim to be equal to Kasparov. It does make Lasker's achievements some 100 years ago, when chess techniques were just being understood, together with his relatively good standing here, very impressive in my opinion! |
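[A "deviation from the computer's moves" metric of the kind described above is often computed as average centipawn loss. The sketch below is only an illustration with invented numbers standing in for real engine output - it is not the actual method behind the truechess list.]

```python
# Hypothetical sketch of a deviation metric: average centipawn loss,
# i.e. the average drop between the engine's best move and the move
# actually played, from the mover's point of view. The eval lists
# stand in for real engine calls.

def centipawn_loss(evals_best, evals_played):
    """Average centipawn loss over a game (0 = perfect agreement)."""
    losses = [max(0, best - played)
              for best, played in zip(evals_best, evals_played)]
    return sum(losses) / len(losses) if losses else 0.0

# toy data: engine's eval of its best move vs. eval of the played move
best   = [30, 25, 40, 10]
played = [30, 10, 40, -50]
print(centipawn_loss(best, played))  # (0 + 15 + 0 + 60) / 4 = 18.75
```

Note that a single number like this says nothing about opposition strength or style, which is exactly the objection raised further down the page.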
|
Nov-01-10 | | Kinghunt: <alexmagnus: those linking to truechess: has anyone figured out how exactly the players are being ranked?:)> And how exactly did they decide that in his 40 decisive games against Kasparov, Karpov played worse than "the average grandmaster"? That smacks of utter BS to me and calls the whole method into doubt. |
|
Nov-01-10 | | percyblakeney: Looking at an analysis of Kasparov's combinations made with Crafty at 12 ply is like following a game at Chessbomb and seeing all the comments about mistakes when the 5 second analysis claims that a move isn't the best (often a move that Rybka recommends after much deeper analysis). |
|
Nov-01-10
 | | alexmagnus: <the list measures only the deviation of the moves of the 16 champions from the computer's move evaluations (of a certain program), nothing more and nothing less.> Looking at the numbers behind the names, I don't see how the rankings come out. Some players are tied, some have fewer mistakes of one type but more of the other, etc. I played around with those numbers a bit and couldn't reproduce the rankings as they are with any setting/weighting that came to my mind. |
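[For illustration, a toy version of that experiment: score each player as a weighted sum of mistake counts and sort. The counts and weights below are invented; the point is that different weightings produce different orders, which is the difficulty being described.]

```python
# Hypothetical reconstruction attempt: lower weighted score = better.
# players maps a name to (big_errors, small_errors); weights are the
# per-category penalties being guessed at.

def rank(players, weights):
    score = {name: weights[0] * big + weights[1] * small
             for name, (big, small) in players.items()}
    return sorted(score, key=score.get)

players = {"A": (2, 10), "B": (1, 14), "C": (2, 9)}
print(rank(players, (3.0, 1.0)))  # ['C', 'A', 'B']
print(rank(players, (5.0, 1.0)))  # a different weighting flips the order
```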
|
Nov-01-10 | | samikd: <acirce> <It's a mystery to me (well, maybe not a huge one) > :) My feelings exactly. |
|
Nov-01-10 | | Bobwhoosta: <superplexer>
Did you notice that you claimed that neither Fischer nor Tal should be in the Top 10, and then tied them for 9th place? |
|
Nov-01-10
 | | alexmagnus: Actually, I've noticed that in the greatest-ever debates it is almost the rule: the harder a number-based list is to understand, the more ready people are to take it as objective. People go to chessmetrics (which is quite tough to verify - on a conventional home computer probably even impossible within a reasonable amount of time - but at least Sonas explains <how> he arrived at those data, while missing, intentionally or not, some points), or even truechess, which at first sight explains everything too, but on closer inspection... how the heck does he rank them? |
|
Nov-01-10 | | Bobwhoosta: My personal, non-objective, opinion based, emotional, and completely right in every way list that I said I would never do (but can't help myself!!!): Kasparov
Fischer
Karpov
Botvinnik
Capablanca
Morphy
Alekhine
Smyslov
Steinitz
Tal
I rank these players in terms of their innovation, longevity, creativity, fighting spirit, how much I like them, and what I had for breakfast. This list is due to change within the next hour - be there!!! |
|
Nov-01-10 | | Bridgeburner: <percyblakeney>
<Looking at an analysis of Kasparov's combinations made with Crafty at 12 ply is like following a game at Chessbomb and seeing all the comments about mistakes when the 5 second analysis claims that a move isn't the best (often a move that Rybka recommends after much deeper analysis).> As far as I'm concerned, such shallow-ply analysis doesn't produce data worth using when dealing with FMs, let alone GMs or Kasparov, except for the most obvious blunder checks. When I mapped and analyzed the 1910 and 2008 world championship matches, I used continuous 16-ply two-way (forward and reverse) sliding analysis from the first move to the last of each game, bolstered with lengthy deep-ply variation analyses, and hoped that produced usable data. If it doesn't, and proves in hindsight to still be insufficient, I can console myself with the thought that it is more reliable than other methods in producing worthwhile information. The 21 games of the 1910 and 2008 matches took 13 months of continuous machine time using this method. None of this type of analysis can be left to machines alone, because powerful as they are, they are still idiots, and need to be guided by human intelligence. |
|
Nov-01-10 | | Bridgeburner: Postscript:
The basic result of my research indicates that Lasker, Schlechter and Kramnik were of essentially equivalent accuracy (and therefore strength) in respect of the matches that were analyzed, while Anand was the clear winner. I have no comments about other World Champions, and won't have until I've analyzed their games as fully. Currently I'm engaged in mapping the 1921 and 2000 World Championship matches, but I won't have results for all the games until well into next year. |
|
Nov-01-10 | | bharatiy: Maybe this kind of project should be undertaken at a university lab where enough hardware is available, or maybe by IBM or AMD; it wouldn't require much money and would be done much faster. Though your effort is appreciated, <bridgeburner>, it's taking a lot of your time and your CPU's time. |
|
Nov-01-10 | | boz: But these computers can't tell you how good a move is. Mate in twelve is a simple matter to Stockfish 2.1, but is the machine handing out exclams to the master who plays such a combination? Or how does it measure the value of a subtle rook move whose point only becomes obvious in the endgame? A slight uptick in the eval? And what about ideas that are optically difficult but easy to calculate once they're pointed out to you? For some players their entire genius is in finding such ideas over the board. Are the Rybka-using analysts taking opening innovations into consideration? How do you evaluate people like Reti and Nimzowitsch, who revolutionized the game? This business of reducing a player's worth to the number of errors he commits doesn't take into consideration style of play or the strength of the opposition. This is why the computers can only shed limited light on who the best players were. The best way to find out is to listen to what the grandmasters themselves have to say on the subject. |
|
Nov-01-10 | | Bridgeburner: <bharatiy: may be this kind of project should be undertaken at a university lab where enough hardware is available or may be by IBM or AMD, wont require much money and will be done quite faster. Though your effort is appreciated <bridgeburner> its taking a lot of your and your PCU time.> Thank you for your kind words.
I'll just reiterate that there is no shortcut for these sorts of projects... enormous processing power is useful (I'll settle for the Pentagon computer next time it has some spare processing time...) but whatever is used needs human guidance. I recommend people try my method on at least one game to see what's involved, as this would inform the debate. The important thing is to deep-ply analyze each move and to ensure that engine evaluations are fully consistent with proximate evaluations, especially after opening theory is passed. This takes a huge amount of machine time, but doesn't need to take as much human time, because leaving a machine to compute while you're sleeping, having a picnic or going to work simply means a useful build-up of hash files that assist analysis and verification of evaluations of other moves. Be prepared to take days, a week, or more to finish the game... but don't switch the machine off during the game, as you lose those useful hash files that blaze a path for subsequent engine analysis and evaluation. Take into account engine weaknesses, have a chessboard nearby, and be prepared to use common sense and all your chess knowledge to ensure the engine isn't going off the rails. While engines are useful in a general sense, this sort of engine analysis is only really useful if the outcome of each and every move is clearly consistent. Such internal consistency gives you some confidence that the evaluations are actually accurate. Inconsistent evaluations need to be eliminated (by further analysis) or logically factored into results (evaluation inflation rears its ugly head in some endgames - the Bacrot-Anand game had engine evaluations of up to +7 in positions that were drawn in the B vs P ending). |
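[One small, concrete piece of this workflow - flagging adjacent engine evaluations that disagree beyond a tolerance, so those moves can be re-analyzed - might look like the sketch below. The numbers are invented and this is not Bridgeburner's actual tooling.]

```python
# Sketch of a consistency check over a game's move-by-move evaluations
# (all from one side's point of view, in pawns). Large jumps between
# adjacent evaluations mark spots needing deeper variation analysis.

def inconsistent_pairs(evals, tol=0.75):
    """Indices i where evals[i] and evals[i+1] disagree beyond tol."""
    return [i for i in range(len(evals) - 1)
            if abs(evals[i] - evals[i + 1]) > tol]

evals = [0.2, 0.3, 1.6, 1.5, 0.1]
print(inconsistent_pairs(evals))  # [1, 3] - two suspicious transitions
```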
|
Nov-01-10 | | Bridgeburner: <boz>
When I first encountered engines, I was furious that they deconstructed favorite masterpieces. Since then, paradoxically, engines have imbued me with deep and enormous respect for the masters of the past and present. |
|
Nov-01-10
 | | alexmagnus: <I recommend people try my method on at least one game to see what's involved, as this would inform the debate.> <Bridge> I did it once on one game (it took me 6 hours of analysis - and I did it <only> forwards, running 16 ply as "standard" and longer if the evaluation was "borderline"). Since my own chess level is quite low (1600), I couldn't contribute much human analysis, but I think I would recognize a "draw mistaken by the engine for a win" in those endgames (the game I chose ended in the middlegame though, so no problems there). BTW, how come that if the player makes the same move as the engine suggested, the evaluation remains the same in your analysis? I mean, if a player made ten "perfect" (in these terms) moves in a row, the eval must change, as you end up not on the same ply as at the origin of the sequence of perfect moves (be it forward or backward). |
|
Nov-01-10 | | Bridgeburner: <alexmagnus>
I'd be interested to know which game you analyzed and what your results were! <BTW how come that if the player makes the same move as the engine suggested, the evaluation remains the same in your analysis? I mean, if players did ten "perfect" (in these terms) moves in a row, the eval must change, as you end up not on the same ply as at the origin of the sequence of perfect moves (be it forward or backward).> If a player makes the engine's preferred move immediately after a 16-ply analysis, the evaluation of that move will usually remain the same, because playing the best move maintains the engine evaluation. But this is not always the case, as using a 16-ply evaluation on the next move effectively turns it into a 17-ply evaluation of the original move. The further you move along the game, the more likely the engine's evaluation of the "best move" will change, as you expand the engine's "horizon" by bringing into "view" variations that were previously over the calculation horizon. Quite often, when a large additional amount of information from later analysis is brought back to the original move, not only will an evaluation change, but the engine might change its mind about its preferred move. That's why I end up reversing the sliding process from the last move, so that each subsequent move can take advantage of the information that's built up in previous evaluations, both on the initial forward slide and from the further information built up on the reverse slide. The reverse slide will change just about all the initial evaluations from the forward slide. Even these aren't necessarily conclusive if the evaluations don't remain consistent throughout... that's when variation/subvariation analysis may become necessary, so that additional hash tables built up from analysis of variations between the actual moves made in a game can provide the necessary information to fully inform an analysis and thereby stabilize evaluations. |
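[A very rough sketch of the forward/reverse sliding idea described above. Real engine calls are replaced by a precomputed list of toy evaluations from one side's point of view (side-to-move sign handling and hash-table effects are omitted); this is an illustration, not the actual method's code.]

```python
# Forward slide: fixed-depth evaluation of each position in game order.
# Reverse slide: walk backwards; where adjacent evaluations disagree
# beyond a tolerance, trust the later, better-informed one, so that
# information discovered late in the game flows back to earlier moves.

def sliding_analysis(shallow_evals, tol=0.5):
    forward = list(shallow_evals)
    refined = list(forward)
    for i in range(len(refined) - 2, -1, -1):
        if abs(refined[i] - refined[i + 1]) > tol:
            refined[i] = refined[i + 1]
    return forward, refined

# toy game: the shallow search thinks the line is fine until a tactic
# finally comes into view at the last position (-2.1)
fwd, rev = sliding_analysis([0.3, 0.3, 0.4, 0.2, -2.1])
print(rev)  # the late discovery propagates back through the whole line
```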
|
Nov-01-10
 | | alexmagnus: <I'd be interested to know which game you analyzed and what your results were!> I don't have that analysis now, I did it just for fun. It was one of Hammer's games from that tournament in Poland where he strongly underperformed. |
|
Nov-01-10 | | PinnedPiece: My prediction:
Magnus World Champ by 2015.
Norway triumphant.
It's due. |
|
Nov-01-10 | | Bridgeburner: <alexmagnus>
I mentioned in my last post that <[t]he further you move along the game, the more likely the engine's evaluation of the "best move" will change, as you expand the engine's "horizon" by bringing into "view" variations that were previously over the calculation horizon.> A simple, well-known example of this is that an engine may give you a favorable analysis of a move while stopping just short of a lethal knight fork at the top of a variation tree just over the move "horizon" - regardless of whether it's a 16-ply or 25-ply evaluation, no matter how deep the ply, there is always a horizon with who knows what dwelling just beyond it. As you slide the engine along the game's moves, that fork (and other previously unexamined moves) will eventually be factored in and will change the existing evaluation. It can only change the earlier evaluations once you slide the engine back to re-analyze and re-evaluate the earlier moves. The value of the sliding process is that you're constantly updating the engine's calculations with more information on a cumulative, move-by-move basis. The value of the reverse slide is that it has all the information from the initial slide and uses the information gathered all the way from the end of the game to update the original evaluations, and sometimes the preferred moves. The reverse slide also provides information beyond that acquired on the initial forward slide, because the engine's heuristics are constantly gathering fresh information. It's critically important that the engine remain on for the whole process until game analysis and evaluation is complete. |
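[The horizon effect described here can be shown with a toy depth-limited minimax: the refutation (the "fork", modelled as a -9 score) sits one ply beyond the shallow horizon, so the shallow search rates the line as fine. The tree and scores are invented; all evaluations are from White's point of view.]

```python
# Depth-limited minimax over a tiny hand-built tree. A node is
# (static_eval, children); the search returns the static eval at the
# depth cutoff, which is exactly how tactics beyond the horizon get missed.

def search(node, depth, maximizing):
    static, children = node
    if depth == 0 or not children:
        return static
    vals = [search(c, depth - 1, not maximizing) for c in children]
    return max(vals) if maximizing else min(vals)

fork = (-9, [])        # the fork lands: a big swing against White
quiet = (0, [fork])    # quiet-looking position hiding the fork
move_a = (0, [quiet])  # the tempting move
move_b = (-1, [])      # a safe but slightly worse alternative
root = (0, [move_a, move_b])

print(search(root, 2, True))  # 0: at depth 2 the fork is over the horizon
print(search(root, 3, True))  # -1: one ply deeper, move A is refuted
```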
|
Nov-01-10 | | suplexer: Continued motivation, not age, determines when chess players decline. Kasparov was rated 2813 when he retired at 42, and he retired not because he was declining but because he was bored of chess and couldn't get the world championship match he wanted quickly enough. Do you think Kasparov would have retired if he was world champion? I doubt it. Even Botvinnik was world champion in his mid-40s, and so was Emanuel Lasker. I assume decline only starts when the world champion of the day stops putting in the colossal amount of practice that they used to, or gains other interests off the 64 squares. |
|
Nov-01-10 | | researchj: Check out this one: www.magnuscarlsen.com |
|