< Earlier Kibitzing · PAGE 5 OF 5 ·
|Mar-24-09|| ||SamAtoms1980: <<Whitehat1963>: http://news.bbc.co.uk/2/hi/americas...|
File under either:
Signs of the Apocalypse>
Yikes. Like we haven't seen proclamations like this dozens of times before. The picture in that article scares the living bejabers out of me. I don't know about the rest of you but I have no interest in letting those Sentinel-esque microbots anywhere near my bloodstream.
|Mar-24-09|| ||slomarko: what the @#$% does "computer" mean? Which computer? Why don't we then call the match "Human vs Deep Blue"?|
|May-09-09|| ||myschkin: °°°
In the beginning …
"Thank you for an interesting game."
|Jun-10-09|| ||myschkin: . . .
|Jul-02-09|| ||acirce: <Thrajin> etc, I don't know how far this has been tested, but I seriously believe that Rybka in a match at classical time controls from the normal initial position but without any opening book at all would not lose to anyone. It would hardly do any worse, on average, than to get equal positions with White and slightly worse positions with Black. From there, it would go on to outplay the human more often than the opposite. If that's the case, certainly no need to experiment with Fischerandom or whatever.|
|Jul-02-09|| ||waustad: <slomarko> This gives a place to make generic computer comments. I've never had any luck getting a tech question answered in the Cafe. Maybe this will work.|
|Jul-02-09|| ||Thrajin: <Knight13>,
Please read my original post. If you'll notice, I said that the computer would win in a game of Chess960. You seem to have misread or misinterpreted my post.
Thrajin: <Does that mean that you believe that a Grandmaster could defeat a computer in a Chess960 match with relative ease? After all, the human player would understand opening principles, even if the pieces were jumbled, while the computer would have no foundation but its brute force calculating ability.
Despite this, I think Rybka would wipe the floor with most human GMs in Chess960 without the use of an opening book. I could be wrong.
Any opinions on this?>
|Jul-02-09|| ||Thrajin: <acirce>, you are probably right. In Deep Blue, Kasparov faced a much, much weaker computer than anything you will find these days (Rybka, Shredder, etc.).|
I think the only way a human could win is if they entered an opening line that had been analyzed extensively, say like the Poisoned Pawn variation in the Sicilian. This would at least ensure that the human would not be likely to falter early on.
But even that is a big "if". Computers are far too powerful these days for all but the best players, and even the world's greatest human minds struggle to equalize, much less win.
|Jul-04-09|| ||whiteshark: Quote of the Day
" In the past Grandmasters came to our computer tournaments to laugh. Today they come to watch. Soon they will come to learn. "
-- Monty Newborn (1977)
|May-28-10|| ||Oliphaunt: http://www.weebls-stuff.com/songs/S...|
|Jun-04-10|| ||izimbra: The computer program that comes with the free games in Windows 7 is a fun throwback in the sense that most intermediate level players should be able to reliably thrash it in blitz games at its "strongest" setting.|
|Dec-29-11|| ||whiteshark: <Critter> by <Richard Vida>, anyone?|
|Jan-25-12|| ||whiteshark: Welcome my son... Welcome, to the machine.|
|Apr-12-12|| ||OhioChessFan: http://cache.gawkerassets.com/asset...|
|May-06-13|| ||Annie K.: This just belongs here. ;)
|May-07-13|| ||pbercker: Anyone following the TCEC tournament?
Final match ... game 39 of 48 ...
Houdini 3 (3121) vs. Stockfish (3108)
Houdini up by one point!
|May-13-13|| ||Capabal: <pbercker>
Regarding the measurement of complexity, I came across this study by Charles Sullivan via a link offered by <nimh> on the Capablanca page. http://www.truechess.com/web/champs...
On that page, Sullivan analyses the full series of K-K games in their five World Championship matches, a total of 144 games.
When quantifying the “average complexity per position” for each player in those games, the values he comes up with are 25.88 for Kasparov and 16.98 for Karpov.
Here is what puzzles me: how is it possible that two players <playing each other>, and therefore dealing with the same games and the same positions (differentiated only by one move) are facing such a huge difference in the complexity of the positions they deal with? This difference of nearly 9 points is very large, if you consider that in his list of “10-year representative complexity scores for the Champions” the total range is just over 15 points (between Karpov and Steinitz): http://www.truechess.com/web/player...
But how can the complexity be so different between two players dealing with the same games? This seems to imply that after Kasparov makes a move, the complexity of the position is (on average) <reduced> by 9 points, so that Karpov faces a much simpler position. And when Karpov moves, he increases the complexity for Kasparov by an average of 9 points.
I can’t even begin to understand how this is possible. Either I am overlooking something very basic, or this is just sheer nonsense.
This is not related to the broader question of what adjustments, if any, should be made for overall differences in average complexity when analyzing a player’s games against all his opponents. It’s only related to the simple question of how the positions in the games played between a given pair of players can be systematically much more complex for one of them than for his opponent. I posted this same question at the Bridgeburner page, and he agrees it doesn't make sense. What am I missing here?
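One mechanical point worth spelling out: complexity scores of this kind are assigned to the position with a particular side to move, so even within a single game the two players are averaging over disjoint, alternating sets of positions. A toy Python sketch (all scores invented for illustration, nothing here is from Sullivan's actual data) of how very different per-player averages can come out of one shared game:

```python
# Toy sketch: per-player average complexity from a shared game.
# Complexity is a property of the position *with a side to move*,
# so White and Black average over disjoint position sets.

def per_player_complexity(game_scores):
    """game_scores: per-ply complexity scores, ply 0 = White's first
    decision, ply 1 = Black's reply, and so on (invented numbers)."""
    white = game_scores[0::2]   # positions where White chose a move
    black = game_scores[1::2]   # positions where Black chose a move
    avg = lambda xs: sum(xs) / len(xs)
    return avg(white), avg(black)

# An imaginary game in which each player keeps steering into
# positions that are hard for the *opponent* to handle:
scores = [30, 12, 28, 10, 26, 14]
w, b = per_player_complexity(scores)
print(w, b)   # 28.0 vs 12.0: a large gap despite identical games
```

This does not settle whether a 9-point gap is plausible, but it shows the averages are not over "the same positions" in the first place; they are over interleaved, different positions.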
|May-13-13|| ||nimh: It's a natural phenomenon that difficulty can be quite uneven for either side of the board. I have witnessed this on many occasions while analyzing games and determining average error and the difficulty of positions. The fact that two positions are very similar doesn't automatically mean that their difficulty would be similar.|
A position can be difficult in many ways. The methods that have been used to describe it so far, I believe, only do it partially; they don't encompass all aspects of the difficulty as a whole. So, before we can draw a definite conclusion that Kasparov indeed had more difficult positions, further research is needed.
|May-13-13|| ||pbercker: @ <capabal>
I don't completely understand the answer, but the FAQ addresses practically the very question you ask!
|May-14-13|| ||Bratek: The advent of inexpensive, commercially available chess engines and databases has had a negative influence on chess in recent years. Computer-chess does not inspire us. A computer does not distinguish between a brilliant move and an ordinary move. How can it? Makers of such chess programs don't even take the creative process into consideration. Artificial intelligence (AI) went out the window a long time before...|
So it is no wonder chess is becoming increasingly marginalized in the media, chess columns are dropping like flies and sponsors are becoming fewer and harder to find. We are in danger of using technology to hurt the very integrity of the game and to play the game less like the humans we are. http://kevinspraggett.blogspot.com/...
|May-14-13|| ||pbercker: @ <bratek> Whether or not computer-chess inspires us is not that important, so long as chess continues, as it has in the past, to inspire us via its best champions, of which there are many, both living and dead. |
What would be worrisome is if computer-chess were dispiriting and somehow made it seem pointless to even bother playing chess, since even the super grandmasters are easily defeated by computers.
But there is no sign that computer chess has done so. Computer chess is increasingly seen as a chess tool for analysis both of one's own games and of the games of the greats. To the extent that it is such a chess tool, it can help us diagnose our weaknesses in our own games and help us locate where and how we went wrong at the last tournament. In that sense, computer chess most definitely inspires us to do better in the next tournament.
A computer need not distinguish between a brilliant move and an ordinary one. That's an aesthetic judgment that human beings make, and it is certainly not in the least bit hindered by chess engines. The engines give you a short list of the best possible moves, but you can decide which is most aesthetically pleasing.
Chess engines are not designed to model the human creative process. They are designed to play the objectively most accurate chess possible. My food processor also does not take the creative process into consideration; but then it was not designed to do so; neither are chess computers.
As far as the presence of chess in the media goes, I cannot say for sure, in part because the media is going through its own changes. It isn't so much that chess columns are dropping like flies, but rather that "old" media outlets are themselves dropping like flies.
And the "new" media that is replacing it is still itself developing.
One thing is clear, however, namely that chess has a very clear presence on the internet in all sorts of ways. The easiest way to see that is simply to google the word "chess" and see the results. Or look up "chess" in wikipedia and start following the links.
So I must confess that I do not see how chess computers have a negative influence on the game.
|May-15-13|| ||Capabal: <pbercker: @ <capabal>|
I don't completely understand the answer, but the FAQ addresses practically the very question you ask!> http://www.truechess.com/web/champs...
I could see complexity related to the distance between what the computer evaluates as the best move, and what it evaluates as the 2nd, 3rd etc. best moves. So that the shorter the distance is between the “best move” and the others, the more complex the position may be said to be.
Now, finding ways to adjust for these differences in complexity when evaluating the “accuracy of play” or whatever it is we call it, is where the problem lies. The adjustment seems unnecessary in the sense that the extent of the deviation from the “best move” is already present in the raw evaluation. I'll try to explain.
Assume two positions, A and B.
Let’s say the best move in position A is evaluated as 200 centipawns better than the 2nd best move, 220 centipawns better than 3rd best etc.
And let’s say the best move in position B is only 20 centipawns better than the 2nd best, and 27 centipawns better than 3rd best and so on.
You could say that position A would tend to be considered simpler than position B.
Now, a player choosing anything other than the best move in position A (the simpler position) will see his score for “accuracy of play” take a hit of at least 200 centipawns. Whereas a player choosing, for example, the 2nd or 3rd best move in position B (a more complex one) will only see a subtraction of 20 or 27 centipawns.
What I see is that the adjustment for complexity, in this sense, is already in-built in the raw evaluation system.
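To make the A/B example concrete, here is a minimal Python sketch using the centipawn numbers given above. The gap-to-second-best reading of complexity and the raw-error penalty are as described in this post, not Sullivan's actual formulas:

```python
# Raw error = eval(best move) - eval(chosen move), in centipawns.
# Candidate evals are stated relative to the best move (0 = best),
# using the numbers from the post above.
pos_A = [0, -200, -220]   # "simple": best move far ahead of the rest
pos_B = [0, -20, -27]     # "complex": candidates bunched together

def raw_error(candidates, chosen_index):
    """Centipawn penalty for playing the move at chosen_index."""
    return -candidates[chosen_index]

# Picking the 2nd-best move in each position:
print(raw_error(pos_A, 1))  # 200 cp hit in the simple position
print(raw_error(pos_B, 1))  # 20 cp hit in the complex position
```

The point of the sketch: the penalty scale already shrinks as the candidates bunch together, which is exactly the sense in which the complexity adjustment is "in-built" in the raw error.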
Sullivan seems to define (or at least calculate) complexity as the difference between the evaluation of the “best move” and the evaluation of <that same move> right before it became the best move (at whatever iteration this happens, it appears).
This already seems arbitrary enough. Then you have the business of translating whatever “basic complexity” you calculate by this method into an adjustment of the deviation between the best move and whatever the player chose.
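If I read that definition right, the mechanics might look something like the following hypothetical sketch. Both `iteration_complexity` and the depth-by-depth eval lists are my guesses at the idea, not Sullivan's code:

```python
# Hypothetical sketch of an iteration-based complexity measure:
# track the engine's top choice across increasing depths, and score
# complexity as how much the eventual best move's evaluation changed
# from just before it overtook the previous favourite.
# 'iterations' is a list of dicts {move: eval_cp}, one per depth.

def iteration_complexity(iterations):
    final = iterations[-1]
    best = max(final, key=final.get)
    # scan backwards for the last depth at which 'best' was NOT on top
    for depth in range(len(iterations) - 1, -1, -1):
        top = max(iterations[depth], key=iterations[depth].get)
        if top != best:
            # eval the best move had just before it took over
            before = iterations[depth].get(best, final[best])
            return final[best] - before
    return 0   # the best move led at every depth: minimal complexity

iters = [
    {"Nf3": 30, "d4": 20},   # depth 1: Nf3 leads
    {"Nf3": 25, "d4": 28},   # depth 2: d4 takes over
    {"Nf3": 22, "d4": 45},   # depth 3: d4 confirmed best
]
print(iteration_complexity(iters))  # 45 - 20 = 25
```

Even granting this reading, the choice of which iteration to compare against is one of the free knobs being complained about below.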
So my problem with the complexity notion remains the same. Its definition is slippery. Its quantification is whimsical. So adjustments for it are intolerably arbitrary by necessity. And (in at least some sense) complexity adjustments are already inbuilt in the raw evaluation as I've explained above.
What these kinds of games lead to in one way or another is the old Von Neumann admonition about the dangers of free parameters. As quoted by Antonino Zichichi:
<Von Neumann was always warning his young collaborators about the use of these free parameters by saying: “If you allow me four free parameters I can build a mathematical model that describes exactly everything that an elephant can do. If you allow me a fifth free parameter, the model I build will forecast that the elephant will fly.">
|May-15-13|| ||nimh: <Capabal>
As I have observed, the average error increases with the difference between the best and the second best move, which means it's the other way around from what you suggest.
<What I see is that the adjustment for complexity, in this sense, is already in-built in the raw evaluation system.>
I'm not sure what you mean by 'built in', but basically you are right, certain aspects of difficulty of positions can be derived from evaluations of moves. However, it doesn't exempt us from measuring them and their effect on the accuracy of play, and using the data to estimate the hypothetical level of accuracy in positions with average difficulty.
<This already seems arbitrary enough.>
If the method Sullivan used were arbitrary, then there shouldn't be any correlation to the accuracy of play. Please pay attention to the graph:
<What these kinds of games lead to in one way or another is the old Von Neumann admonition about the dangers of free parameters. As quoted by Antonino Zichichi:>
This is funny; you accuse complexity measurements of being arbitrary, but at the same time you make an arbitrary comparison to free parameters. :)
|May-15-13|| ||Capabal: Because the magnitude of the error is measured by the magnitude of the distance between best move and other moves. So even if there are fewer errors in positions where the best move is far ahead of all others, when these errors occur they weigh more. So it’s not really surprising that the average error increases that way. It simply shows that the adjustment for the complexity of the position is already in-built in the error measurements.|
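That weighting argument can be checked with made-up numbers: even if misses are rarer in a "clear" position, each miss costs more there, so the average error can still come out larger. A small sketch (the gaps and miss rates are invented for illustration):

```python
# Expected average error if a player misses the best move with some
# probability, and each miss costs the gap to the 2nd best move.
def expected_avg_error(gap_cp, miss_rate):
    return gap_cp * miss_rate

clear   = expected_avg_error(200, 0.05)  # rare but costly misses
unclear = expected_avg_error(20, 0.40)   # frequent but cheap misses
print(clear, unclear)   # 10.0 vs 8.0: the clear position shows the
                        # *larger* average error despite fewer misses
```

So a positive correlation between the best-to-second gap and average error does not by itself show that wide-gap positions are harder; it can fall straight out of the error metric's weighting.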
It is comparable to a free parameter because:
1. The way the complexity is quantified is necessarily arbitrary (although this is also true of the way the raw error is quantified).
But especially because:
2. The amount of adjustment applied once the complexity is quantified is even more arbitrary.
3. The adjustment seems unnecessary because the raw error measurement already takes into account the complexity, in the sense that in unclear positions where there is a small distance between “best move” and several other moves, the error that is measured will be much lighter than in those where the distance between best move and other moves is greater.
The more adjusting parameters you introduce, the better your chances of losing sight of what you are trying to measure.
|May-15-13|| ||nimh: What does 'in-built in' even mean in the context we're arguing in? And how does it follow that it's not necessary to take that factor into account when trying to determine one's accuracy of play in a hypothetical scenario where the positions were of average difficulty?|