< Earlier Kibitzing · PAGE 4 OF 4 ·
|Dec-16-11|| ||kingscrusher: <polarmis> Hope you like this video:|
|Dec-16-11|| ||frogbert: i certainly agree with kingscrusher about the relative trend; on icc i basically censor all the "patzers" who keep making silly remarks about blunder this blunder that when watching elite chess - i'm simply annoyed by their obvious lack of understanding even if i think that i shouldn't care. ;o)|
here on cg.com there are numerous representatives of engine-ridden kibitzing, with little to no understanding of the non-played lines that may back up or complicate the truth regarding the "simple" engine line "they" advocate.
but it spreads everywhere. during the grand slam final we had the perfect example in the game vallejo-carlsen where carlsen eventually decided against 26... Rd3! which was winning in this position:
[diagram]
now, even chessbase managed to give the following "analysis line" (which happens to be the engine's recommended line here): 26... Rxd3 27. Qxc7 Rxd1 (diagram) 28. Rxd1?? Qxd1
[diagram]
i mean, which human player in their right mind exchanges on d1 here to reach a dead lost position a piece down? if carlsen could've known that white would play "engine-perfect" and that that meant playing Rc1xd1, it would've taken carlsen about 10 seconds to decide on 26... Rxd3 - he knows how to win a piece up.
this is an example of the kind of rubbish "game commentary" that too strong engines might lead to if used uncritically. the problem is that going sufficiently deep, the engines saw that white's only serious try (which carlsen failed to calculate to a clear position, which includes making sure there isn't a perpetual some 20 moves down the road), 28. Qb8+ Bf8 29. Rc8 eventually leads to more material loss than the rook exchange, even if it wins back the piece in the short term:
[diagram]
this is the position that chessbase should've analyzed for its readers, if they had any objective of explaining why carlsen didn't go for Rxd3. i tried to take on the task on the carlsen page, see
Magnus Carlsen for some lines and themes.
here i'll just show one line:
[diagram]
this is from the critical line after white's 28. Qb8+, and it's black to move after 36. Rxg6! in order to find a clear win, carlsen would've had to see 36... Re1!! here - or be able to see that 36... hxg6?! 37. Qh8 Kg5 38. Qd8+ Kf4 39. Qd6+ Kxe4 40. Qc6+ lets black escape the checks, but <only> by giving up the rook in the following position (a funny exercise is to set up this position and let houdini run until it shows more than +5 for black, without the use of tablebases - but remember to clear the hash table first, so that it hasn't stored knowledge of the right pawn ending):
[diagram]
eventually this leads to this pawn ending with black to move:
[diagram]
in summary: in order to actually *see* that 26... Rd3 isn't a draw by perpetual, one has to calculate to this ending reached around move 45 (or see the impossible 36... Re1!! move above) and make sure it wins for black, even a pawn down. that's how "clearly" 26... Rd3 was winning. (compare the first and the last diagram given here...)
|Dec-16-11|| ||AylerKupp: <kingscrusher> Chess is a very difficult game and if Super GM mistakes are found after the fact that does not (or should not) demean their accomplishments in making so few mistakes during the game. And I don't see much difference between a patzer like myself using an engine to provide an evaluation of certain moves within a game and a not-so-Super GM (with or without engine support) performing a leisurely analysis of the game after the fact.|
I may be naïve but don't see anything wrong in having more and more ordinary people being more critical of Super GM play as long as it's done in the proper context. People like myself who would probably lose to Super GMs even at queen odds have no recourse but to resort to engine analysis since otherwise our judgment and opinions would not be worth much, and rightly so. I am under no illusions about my chess playing capabilities and, whenever I post an engine's analysis, I make it clear that it is indeed an engine's analysis and not my own, and I cite the engine used and the search depth that it achieved in reaching its evaluation.
Sure, there are those who may have an overblown opinion of their "achievement" in analyzing a game using an engine and finding better moves than those played by a Super GM in a game, but these attempts at insight are easily seen and ignored if desired. I hope that this phenomenon is a result of the relative newness of easily available strong engines and powerful computers and that these examples will disappear, or at least be greatly reduced, as the novelty of the "achievement" wears off. Time will tell.
Oh, and since we're on the subject of empathy, it's possible (at least for me) to feel empathy towards chess engines. See this recent post of mine: Anand vs Nakamura, 2011.
|Dec-16-11|| ||King Death: < AylerKupp: Chess is a very difficult game and if Super GM mistakes are found after the fact that does not (or should not) demean their accomplishments in making so few mistakes during the game.>|
It's good to read a reasonable, balanced thought about this. Too much of what I see here and on pages like the daily puzzle smacks of one-upmanship more than anything. Just another variation of "mine's bigger than yours!"
<...I may be naïve but don't see anything wrong in having more and more ordinary people being more critical of Super GM play as long as it's done in the proper context. ..>
Same here, but the problem comes in the way it's often presented.
|Dec-17-11|| ||kingscrusher: <frogbert> Thanks for that - I was wondering at the time how Carlsen had apparently missed a simple tactic. And now I see from your analysis that it wasn't that simple. |
Because of all the commentaries which seem to have been driven by engines, I guess we ended up all being brainwashed that Carlsen missed something quite simple.
Maybe the only way to keep a connection with reality is for the players to be interviewed after their games like at the London Chess Classic which I enjoyed being at recently.
Somehow our perceptions of reality, as well as of what the players' thoughts were during the game, are becoming increasingly badly deduced because of our engine tools. It's a weird sort of truth based on brute force - actually a "black box" that often bears little relation to human-type analysis.
|Dec-17-11|| ||kingscrusher: <Aylerkupp> The Carlsen example provided by Frogbert is a perfect example of the kind of delusions spread and propagated from an engine perception of a game. Up until Frogbert's analysis, I really was under the impression that Carlsen had made an obvious tactical blunder. |
If kibitzers on Chessgames.com actually want to learn from Master games rather than the "search for ultimate truth" then I think one important thing may be the players being interviewed after games, or their own annotations.
The "ultimate truth" unfortunately could be like "42" from Hitchhikers Guide to the Galaxy. It may be presented as the Ultimate Answer to Life the Universe and everything - but what does it actually mean?! It seems a mysterious black box if it is from our engine tools - at least a lot of the time.
|Dec-17-11|| ||polarmis: <kingscrusher>, as you sort of asked... :)|
The video on the end of Kramnik - Howell struck me as a little misguided. You talk about truth/empathy, but it's not a good example. People purely looking at engine evaluations wouldn't have noticed much that was amiss - after all, even with the Qf1 move the engine still gives a win for Kramnik. The second point is that the people who I saw point it out all noted it was a hard move to see - no lack of empathy there.
You also make a point about how moves like that pointed out by the computer mean people need to alter their style and be less emotional. There's a grain of truth in that, but it's nothing new. People have always discovered flaws in nice-looking games/ideas, and especially say in the second half of the 20th century it's been approached systematically and seriously. Of course computers massively accelerated the process and make it accessible to all.
One thing I would say, though, is that although you talk about Kramnik's emotional approach you'll have seen (as I know you just watched his post-game demonstration), that he actually calculated a vast number of complex lines before deciding on many of the moves in the game. You criticise engines and some kind of barren search for truth (though that was what Botvinnik, Rubinstein, Kasparov and co. were all looking for)
- but actually the only hope any of us below about 2700 have got of grasping what's going on in Kramnik's mind - approaching any kind of real empathy! - is by intelligently using engine analysis (just as, say, Frogbert does to point out Carlsen didn't miss a simple chance). Vague generalisations might be more human, but they're not closer to how an actual top player thinks.
|Dec-17-11|| ||kingscrusher: <polarmis> Because of the Chessbase article written from an engine perspective and its echoing around the Internet, aren't most people now of the belief that Carlsen made a simple tactical blunder?!|
|Dec-17-11|| ||kingscrusher: Sorry just to clarify : The Carlsen game in question is:|
Vallejo-Pons vs Carlsen, 2011
Francisco Vallejo-Pons vs Magnus Carlsen
4th Bilbao Masters 2011 · Zukertort Opening: Kingside Fianchetto (A04) · 1-0
I assumed most people still think Carlsen made a simple blunder. Thankfully when interviewed about the game he said he couldn't see a win and tried calculating very far down.
Yet most people's perception (lack of empathy it seems) is that Carlsen made a simple blunder.
Isn't this a clear case of bad perception caused by the propagation of engine-perspective analysis?!
|Dec-17-11|| ||polarmis: <kingscrusher>, Carlsen did make a simple tactical blunder in that game, but it came a few moves later. I was just checking what I wrote at the time: http://whychess.org/node/2087 - and I think I escaped the trap of suggesting there was anything trivial with 26...Rd3! - though I did quote Garry Kasparov, apparently without a computer, saying <I know I'm just an old retired player, but wasn't 26…Rd3 winning immediately in Vallejo-Carlsen?>. So there's a human propagating something which engine analysis might help "correct" :) I also quoted Carlsen's blog.|
Incidentally, it's ChessVibes not ChessBase that gives the line Frogbert mentions, though ChessBase is often the most guilty of shallow computer analysis, and gave the decent move 30...Qxe4 two question marks (!). In defence of chess journalists everywhere, though, writing sensible and almost immediate reports on chess as people do nowadays would be pretty much inconceivable without engines - as would a player of your (higher than my!) level producing instructional videos.
Again, it's a question of making good use of engine analysis - which doesn't always happen, but on the other hand doesn't always not happen :) A case in point being the Anand - Nakamura game which started your crusade. Lots of people sensibly pointed out the engine analysis (shave off a point or so for it being the KID) was a fair reflection of reality, which an in form Anand would most likely have proved.
|Dec-17-11|| ||frogbert: but also chessbase's gm commentary failed to explain how 26... Rd3 wasn't trivial - and maybe kasparov simply revealed himself as an engine abuser too, while he likes to give the impression that he, blindfold and in mere seconds, can improve on the play of the current elite, including carlsen's? |
(i don't trust kasparov blindly - i feel he's become the biggest back seat driver in the chess world, trying to polish a flawless exterior; as coach he takes credit for wins but no responsibility for losses, and as commentator he can use engines but pretend it's all "his" original ideas and effort... where he is now, he's detached from the world of otb pressure and mistakes. ideal in many ways, right?)
actually, i don't think a kasparov looking independently on the position in that carlsen game would fail to see white's option of Qxc7 and Qb8+ and so on, while it's obvious that if he did, he wouldn't be able to analyze it to its conclusion in a flash or consider it trivial afterwards. as my computer-aided analysis shows: the position was a mouthful and then some; of course, it's possible to miss white's Rxg6 resource, thinking it's over after 35. Qe5+ f5 but anyway.
still, i think my position might be slightly closer to that of polarmis than of kingscrusher's. i do think the 3 of us are mostly in agreement regarding this topic. :o)
|Dec-17-11|| ||Trouble: kingscrusher...as far as I'm aware, the computers made the leap from about a 2750 level to a 3200+ level in the mid to late 2000's(years) for two reasons not mentioned in that vid you posted. The vid focuses on alpha beta, but real engines use the nearly equivalent but more precise negasearch with iterative deepening + null move pruning + killer move search + table lookup combined with good move ordering all of which is fairly old at this point...the newer engines rely on statistical analysis(especially regarding the proper evaluation of material imbalances in different game circumstances ie Q vs R+N+P endgames for example) which lead to a more sophisticated and more flexible heuristic--most exemplified in Rybka. Another cool new technique is known as 'forward pruning' which is actually quite similar to how humans calculate positions...Forward pruning is what allows some of the newer engines like Stockfish and Houdini to calculate to 20+ plies per position, despite no appreciable increase in evaluation speed...not that that would matter anyway since improvements in eval would result in linear time gains while improvements in tree pruning result in exponential time gains...to summarize, traditional engines won't be able to calculate to 20+ plies for MANY years, but the newer engines can, and thats why they kick butt.|
|Dec-18-11|| ||AylerKupp: <Trouble> Forward pruning, like some positions, is double-edged. True, by selecting only the most promising starting moves to search, the breadth of the search tree is cut down considerably and the engines can search deeper in a given amount of time. But unless the most promising moves are properly selected, the probability that the best move is not included in the initial search subset is increased, causing the engine to generate less than optimal evaluations. The most common heuristic is to select the moves to be investigated on the basis of the moves that yielded the best evaluation in previous searches, but there is obviously no guarantee that moves that were evaluated as being the best at one ply will also be evaluated as being the best one ply deeper, so it's kind of a hit-and-miss proposition.|
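To make that heuristic concrete, here is a toy sketch (generic illustration only, not any real engine's code; the function name and inputs are made up): rank the candidate moves by the evaluation they received at the previous, shallower iteration, and fully search only the top few.

```python
# Toy sketch of the forward-pruning heuristic described above: moves are
# ordered by the score they got at the previous (shallower) iteration,
# and only the most promising subset is kept for a full-depth search.
# Moves with no previous score get the worst known score, so they are
# pruned first - which is exactly the "hit and miss" risk: the
# objectively best move may not have ranked highly one ply earlier.
def select_for_full_search(moves, prev_scores, keep=3):
    worst = min(prev_scores.values(), default=0)
    ranked = sorted(moves, key=lambda m: prev_scores.get(m, worst), reverse=True)
    return ranked[:keep]
```

For example, with previous-iteration scores {'a': 10, 'b': -5, 'c': 30} and keep=2, only moves 'c' and 'a' would be searched at the next depth; 'b' and any unscored move are pruned away.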
So it's a tradeoff. Engines such as Stockfish and Spike aggressively prune their search tree and reach deeper depths in a given amount of time. Engines such as Rybka apparently do not prune their search tree as aggressively and therefore do not reach as deep a search level, but they consider more moves. Houdini falls somewhere in the middle between those engines in terms of search depth achievable in a given amount of time. Which search tree pruning strategy is superior usually depends on the position, and that is usually not known a priori, making definite conclusions difficult. And, given the relative performance of Houdini, Stockfish, Rybka, and Spike, determining which approach is clearly superior is inconclusive, particularly when you muddy the waters by factoring each engine's different evaluation function into the equation.
And I would say that negasearch is identical to alpha-beta pruning but slightly more efficient to implement recursively since it can use the same algorithm but just negate the evaluation result in alternate search plies.
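What's being described there is the negamax formulation of alpha-beta. A minimal, generic sketch over a toy game tree (textbook code, not any particular engine's implementation; positions are nested lists, and leaf values are assumed to be from the side-to-move's perspective) makes the sign flip visible:

```python
# Minimal negamax with alpha-beta cutoffs. A position is either a leaf
# (an int: the static evaluation from the side to move's perspective)
# or a list of child positions. One function serves both players: each
# recursion negates the child's score and negates/swaps the window.
def negamax(node, alpha, beta):
    if isinstance(node, int):        # leaf: return static evaluation
        return node
    best = float('-inf')
    for child in node:
        score = -negamax(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:            # cutoff: opponent won't allow this line
            break
    return best
```

For the toy tree [[3, 5], [2, 9]] the root player can secure 3: the opponent answers each root move by steering toward the minimum of that subtree, and negation handles both perspectives with the same code.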
|Dec-18-11|| ||kingscrusher: <AylerKupp:> If you are not aware of the sci-fi novel, I recommend checking out "Hitchhikers Guide to the Galaxy" by Douglas Adams. When writing the story about the computer "Deep Thought" - which was one of the reasons why the early "Deep Blue" computer was so named - Adams already thought about the consequences of getting a computer to come up with an "answer": after spending millions of years, the computer came up with "42" as the answer to life, the universe and everything. Yet the answer was itself a deep "black box". |
Here we are in similar situations with every GM game being put under engine scrutiny, except instead of taking millions of years, it takes seconds to come up with answers like 42 - which seem to show how apparently rubbish the GMs are. Instead of being "42", the answers involve these very strange lines of analysis that often no human would ever come up with, even if given days to analyse the position.
The end result is a complete lack of sympathy being created even for GMs over 2700 - the "Super GMs" - and the apparent silly blunders that Carlsen is capable of making as demonstrated below.
<Polarmis> makes a good point about the intelligent use of computer analysis, but I wonder if games first need to be annotated by strong players without reference to engines - to have some clue from a human perspective. Or for the players themselves to talk about the games a bit, like Kramnik did recently for his win vs Howell.
Only then check with engines and compare the analysis. Maybe only in such a manner can the side effects of lack of empathy be reduced for the new games being put under the engine microscope.
|Dec-18-11|| ||AylerKupp: <kingscrusher> I am very familiar with Doug Adams' "Hitchhikers Guide to the Galaxy" series and I regularly used many of his sayings during my career. For instance, whenever I left a job or an assignment I usually left a note in the office that I vacated with the note "So long, and thanks for all the fish." And I once developed a missile data link protocol based on a communications chip that supposedly implemented IBM's HDLC data link protocol standard but which in reality had so many differences from the standard that I changed the protocol's name to MDLC (Missile Data Link Control) and described it as "a data link protocol almost, but not quite, totally unlike HDLC."|
Alas, I'm still trying to throw myself to the ground and miss, with the resulting black and blue marks all over my body. ;-)
But, again, my point is that in chess we should be seeking "truth". If an engine's analysis is "inhuman" but better than the moves uncovered by a human analyst, isn't that objectively better? (sorry, but I couldn't resist using that word). And what I have found is that engine analysis more often than not reinforces the good judgment of the players, at least the strong players, so my admiration of them is increased, not lessened, particularly when I factor in the time limits and occasional time trouble that the GMs have to face. True, some (many?) mediocre players like myself use the engine analysis to gloat about the GMs' mistakes, but I think that's more an indictment of these immature mediocre players than of the use of chess engines or the engines themselves. And engines regularly come up with whoppers near the end of their search depth, or when they find themselves in a difficult situation and try to push the "bad news" beyond their search horizon. Don't you have any empathy for the engines then?
But I do agree that ideally games should be annotated by strong players although I don't think that concurrent judicious use of engines detracts from the annotation. Humans and machines often complement each other and best results are usually achieved when both are involved. And you're right that nothing matches being able to listen to a GM explain their thought processes during the game.
|Dec-18-11|| ||kingscrusher: <AylerKupp> and <Polarmis>|
You guys seem to be balanced engine analysts, getting more benefit than delusions from your analysis.
Thanks to you both for your balanced arguments about the use of engines.
I think you guys will be forgiving of Naka's wins in complex positions more than the less balanced engine analysts out there.
Thanks for presenting your arguments here.
|Dec-18-11|| ||frogbert: <I think you guys will be forgiving of Naka's wins in complex positions more than the less balanced engine analysts out there.>|
i strive for objectivity, also in the meta-commentary to chess games. where i think some of the comments to anand-nakamura went off along a somewhat misunderstood tangent was when it was claimed both that a) "it was only a win in engine terms", and b) "winning the won position needed 'unhuman' play on white's behalf".
so, while i absolutely agree on your general crusade on dumbening use of engines, both by rank amateurs and gms doing commentary, i think it's important to choose the right examples in order to drive the point home. as such, i don't think the anand-naka game was ideal.
<what I have found is that engine analysis more often than not reinforces the good judgment of the players, at least the strong players, so my admiration of them is increased, not lessened, particularly when I factor in the time factor and occasional time trouble that the GMs have to face>
aylerkupp, i share that sentiment. but often it takes the right "attitude" to begin with; understanding how insanely strong the elite players are, my default hypothesis is that there's a good reason when a top player "misses" something and that what's being touted as "simple" by engine-heads usually isn't.
|Dec-18-11|| ||AylerKupp: <kingscrusher> I think that the ready availability of more capable engines and more powerful computers is still a relatively new phenomenon and people are still getting used to it. Hopefully in a few years when the novelty wears off the amount of undeserved criticism will go down.|
And I have never been critical of Nakamura's or any other GM's wins in complex positions where they managed to out-calculate, out-evaluate, and sometimes "out-luck" their opponents, particularly when they were responsible for creating the conditions that allowed them to turn a certain loss into a win or draw. After all, my favorite all-time player is Tal, and I already mentioned to you one of my favorite games: Portisch vs Tal, 1964.
|Dec-19-11|| ||FSR: The opening is interesting. Adams played 6...Be6! to avoid 6...Be7 7.Bxd5! Qxd5 8.Nc3 as in Carlsen vs Wang Yue, 2010. As far as I can tell from CG.com's database, 6...Be6 was introduced by my doppelganger in Mongredien vs Morphy, 1863, but not played again for 105 years! Games Like Nakamura vs Adams, 2011 After 7.Bb3, 7...c5!, introduced in 2006, was another refinement, avoiding 7...Bd6 8.c4 Ne7 9.d4, as in Bronstein vs I Zaitsev, 1969.|
|Dec-19-11|| ||Ulhumbrus: An alternative to 15...Bxd5 is 15...cxd5 blockading White's d4 pawn.|
17...a5 disturbs the Queen side pawns. An alternative is 17...Bf5 clearing the e file so that on 18 Bb2 Black can play 18...Re3
|Dec-21-11|| ||Ulhumbrus: One justification for 4...c6 instead of 4...Nf6 is that on 5 Bc4 Black can play 5...cxd5 whereupon White has to move his king's bishop a second time and so lose a tempo for development.|
|Dec-21-11|| ||King Death: <Ulhumbrus: An alternative to 15...Bxd5 is 15...cxd5 blockading White's d4 pawn...>|
Adams may have rejected this because it actually makes White's task of getting the queenside moving a little easier after 17.c4.
<...17...a5 disturbs the Queen side pawns. An alternative is 17...Bf5 clearing the e file so that on 18 Bb2 Black can play 18...Re3>
The move 17...a5 (as far as this weakie can tell) envisioned his plan of ...Ra7-e7 plus ...f7-f6, with the idea of playing against the pawn at b4 later with ...Rb7 and guarding against any accidents on g7. If Black allows Ne5, it isn't as easy to protect his outpost at f4.
|Dec-22-11|| ||Ulhumbrus: <King Death> I think that you are right. The move 18 Ne5 suggests an answer to the question of why Adams did not play 17...Bf5. The move ...Bf5 displaces the bishop from its task of defending the c6 pawn, so that 18 Ne5 comes with tempo by attacking the c6 pawn.|
After 17...Bf5 18 Ne5, is the capture 19 Nxc6 then in fact a threat? On 17...Bf5 18 Ne5, suppose that Black ignores the attack on c6 and plays 18...f6 all the same. Then 19 Nxc6 Qb6 20 d5 secures the N, and on 20...Re4 21 b4 threatens 22 c5. On 17...Bf5 18 Ne5 Bxe5 19 dxe5 g5, 20 e6! opens the long diagonal before Black can blockade the e5 pawn which obstructs it.
It looks as if Black has to keep his bishop on the long diagonal until Ne5 no longer creates a double threat against Black's c6 and f4 pawns.
|Dec-26-11|| ||Trouble: <AylerKupp> yes, sorry, I meant nega-scout instead of negasearch. It strikes me as curious that Rybka and Stockfish can have analogous performance levels despite their very different computational approaches to chess, as you point out. Actually, I'm working on something you might call a 'probabilistic tree expansion' for a board game which has a much larger branching factor than chess. Basically, the idea is to get quantitative results regarding the quality of play for different degrees of tree-pruning harshness in specific types of positions, by matching engines up against each other (maybe in a cloud network) and storing the results in a table that maps an N-tuple of positional characteristics coupled with a pruning technique to outcomes. My theory is that different types of positions are optimally expanded with different types/degrees of tree pruning, which is a topic you talked a little bit about in your post.|
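The table being described could be prototyped very simply. Below is a toy sketch of that bookkeeping (everything here - the class name, feature tuples, technique labels - is hypothetical, invented just to make the idea concrete; it is not the poster's actual code):

```python
from collections import defaultdict

# Sketch of a results table for engine-vs-engine matches: it maps a
# (positional-feature tuple, pruning technique) pair to win/draw/loss
# counts, and can report which technique has scored best so far in a
# given position class.
class PruningResultsTable:
    def __init__(self):
        self._table = defaultdict(lambda: [0, 0, 0])  # [wins, draws, losses]

    def record(self, features, technique, outcome):
        # outcome is 'win', 'draw' or 'loss' for the engine using `technique`
        idx = {'win': 0, 'draw': 1, 'loss': 2}[outcome]
        self._table[(tuple(features), technique)][idx] += 1

    def score(self, features, technique):
        # average points per game (win=1, draw=0.5), or None if no data
        w, d, l = self._table[(tuple(features), technique)]
        games = w + d + l
        return (w + 0.5 * d) / games if games else None

    def best_technique(self, features):
        # technique with the highest observed score for this position class
        scored = [(t, self.score(f, t))
                  for (f, t) in self._table if f == tuple(features)]
        scored = [(t, s) for t, s in scored if s is not None]
        return max(scored, key=lambda ts: ts[1])[0] if scored else None
```

Usage would be to feed in each match result tagged with the position's feature tuple (say, ('open', 'QvsRNP')) and the pruning setting used, then query best_technique for that class - a direct, if crude, realization of mapping feature N-tuples plus a pruning technique to outcomes.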
|Jan-30-12|| ||wordfunph: "It's always fascinating when the King’s Gambit is played by a top player, even more so when he actually wins. But, as chess has its strict rules, there are always a few blunders needed for such a miracle to happen."|
- GM Anish Giri
Source: Chess Vibes Training #36