May-30-15 | | latvalatvian: I would rather ask my dog about the pros and cons of the king's gambit than Fritz, and I believe my dog would have a better answer. |
|
May-30-15 | | latvalatvian: I would rather memorize the phone book than look at the boring and lifeless variations of a computer. |
|
May-31-15 | | dumbgai: By <latvalatvian>'s logic, computers can't do anything, including compute. In order for a computer to compute, it would have to know:
1. What computing is.
2. What the numbers it's computing looked like.
3. What it felt like to compute.
4. What it felt like to be excited about computing.
etc. |
|
May-31-15 | | latvalatvian: Computers don't compute, then. They were programmed to compute and did not learn to compute. So no, they don't compute. |
|
May-31-15 | | latvalatvian: Someone computed a computer and then exclaimed, "Look it has a soul!" It's like blowing on a feather and claiming it's alive when it moves. |
|
May-31-15 | | AylerKupp: It looks like we have a new troll in our midst. Or perhaps an old troll with a new name. Oh well, we all know what to do even if we don't always do it. |
|
Jun-02-15 | | AylerKupp: <Alan Vera> I think that Komodo 9 would win easily. A lot would depend on the hardware that Komodo 9 was running on. Deep Blue was capable of evaluating 200 million positions per second using 30 nodes, each containing a 120 MHz processor, assisted by 480 special-purpose chips that supported position evaluation (http://en.wikipedia.org/wiki/Deep_B...). On, say, an 8-core, 3 GHz system (and much faster systems are readily available and affordable), it is doubtful that Komodo 9 could evaluate anywhere near 200 million positions per second. To give you an idea, on my antiquated 4-core, 32-bit, 2.66 GHz Intel system, in the test described below, Komodo 9 was evaluating only about 1.8 million positions per second.

But today's chess software is much, much better than the chess software of 1997. Modern chess engines' evaluation functions are implemented in software, not hardware, which makes them much easier to change. And they have been tuned in tens of thousands of engine vs. engine games, making them much more accurate. Not only is Komodo's position evaluation more accurate, but its searching algorithms and search tree pruning heuristics are much more effective, eliminating from consideration almost all of the positions that are not worth evaluating. For example, Deep Blue relied primarily on brute force, and it would typically search to a depth of only 6 to 8 ply, although in some positions (presumably greatly simplified endgame positions) it could search about 20 plies deep (see link above). So even if the computer that Komodo ran on were only capable of evaluating 2 million nodes per second, it could search much deeper and evaluate the resulting positions much more accurately than Deep Blue.

So I decided to put it to a test. I had Komodo 9 evaluate the following position that arose recently in The World vs Naiditsch, 2014 after 13...d5:

[diagram: position after 13...d5]

At a time control of 40 moves in 2 hours, that is an average of 3 minutes per move. On my antiquated system Komodo 9 was able to search to a depth of 23 ply in a little over 2 minutes. Consider what the results would be if in a chess game you were able to calculate 8 half-moves ahead and your opponent was able to calculate 23 half-moves ahead, and evaluate the positions with better accuracy! You would probably have no chance. So, if Deep Blue were to play a 6-game match against Komodo 9 at classical time controls of 40 moves in 2 hours, running on my antiquated computer, I would guess that the final result would be 5-1 or 5½-½ in favor of Komodo, and that would be giving Deep Blue the benefit of the doubt.

And these results are not unique to Komodo 9. I had Houdini 4 and Stockfish 6 analyze the same position, and in about 2½ minutes Houdini was also able to search to a depth of 23 ply, while Stockfish, the "deep searching" champion, was able to reach a depth of 29 ply in less than 3 minutes. Houdini was evaluating about 4.9 million positions per second and Stockfish about 2.9 million positions per second. So a match between Deep Blue and Houdini 4 or Stockfish 6 would probably have similar results. |
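For anyone who wants to reproduce this kind of depth and nodes-per-second comparison, here is a minimal Python sketch using the python-chess library with a locally installed UCI engine. The engine path and the FEN are placeholders to substitute yourself (the FEN after 13...d5 is not reproduced here); "depth" and "nps" are standard UCI analysis fields reported by most engines.

# Minimal sketch: measure search depth and nodes/second for a UCI engine
# on a given position, roughly as described in the post above.
# Assumes python-chess is installed and the engine binary is on your PATH.
import chess
import chess.engine

ENGINE_PATH = "stockfish"        # placeholder: path to your UCI engine (Komodo, Stockfish, ...)
FEN = chess.STARTING_FEN         # placeholder: substitute the FEN after 13...d5
THINK_TIME = 180                 # seconds, roughly 3 minutes per move at 40/2h

def measure(engine_path, fen, seconds):
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        info = engine.analyse(board, chess.engine.Limit(time=seconds))
        print("depth reached:", info.get("depth"))
        print("nodes/second :", info.get("nps"))
        print("best line    :", board.variation_san(info.get("pv", [])))
    finally:
        engine.quit()

if __name__ == "__main__":
    measure(ENGINE_PATH, FEN, THINK_TIME)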
|
Jun-03-15 | | dumbgai: <AylerKupp> I agree with your assessment, but it should be noted that Deep Blue had much of its search and evaluation done using its custom-built hardware, which greatly improved computational speed. In this respect, Deep Blue would have an advantage. Still, I would wager my money on a moderately fast machine that was smart (Komodo/Stockfish/Houdini), over a super-fast machine that was much less smart (Deep Blue). |
|
Jun-06-15 | | jdc2: <AylerKupp> On a 2.1 GHz laptop with a 3072 MB hash, SF 020615 gets to 24 ply in 14.4 seconds for that position. |
|
Jun-07-15 | | AylerKupp: <<dumbgai> it should be noted that Deep Blue had much of its search and evaluation done using its custom-built hardware, which greatly improved computational speed.> Yes, I know. It was much more efficient, probably by a factor of at least 10X, in evaluating a position. But its search tree pruning heuristics were probably primitive, if it had any at all (besides alpha-beta pruning, which I consider an algorithm and not a heuristic), so it HAD to evaluate a lot more positions. After all, if you don't have to evaluate any of the positions in a search tree branch that was pruned, the time for each position evaluation in that branch is zero, and you can't get more efficient than that! That's probably why Deep Blue could typically only search 6 or 8 plies deep, although in presumably much simplified positions it could get to 20. So it evaluated a lot of branches that it didn't need to evaluate. And, of course, Deep Blue didn't have endgame tablebases available. |
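For readers unfamiliar with the distinction being drawn here, below is a minimal, generic sketch of alpha-beta pruning in Python (the baseline "algorithm"), as opposed to the extra pruning heuristics such as move ordering, null-move pruning, and late-move reductions that modern engines layer on top. It runs on a toy game tree, not a real chess position; nothing in it is any engine's actual code.

# Generic alpha-beta (negamax) sketch on a toy game tree: internal nodes are
# lists of child subtrees, leaves are static evaluations from the point of
# view of the side to move at that leaf.
INFINITY = 10**9

def alphabeta(node, alpha=-INFINITY, beta=INFINITY):
    if not isinstance(node, list):         # leaf: static evaluation
        return node
    best = -INFINITY
    for child in node:                     # engine heuristics would reorder or skip children here
        score = -alphabeta(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                  # beta cutoff: remaining siblings are never evaluated
            break
    return best

# A 2-ply toy tree: the second subtree is cut off after its first leaf.
print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]]))   # prints 3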
|
Jun-07-15 | | AylerKupp: <jdc2> You clearly have a 64-bit laptop with lots of memory. I only have 4 GB on my 32-bit desktop (of which about 3.3 GB are usable) and use a 1024 MB hash table. On that machine Stockfish 6 reached a 24-ply search depth in 31 seconds. After I wrote my response to <Alan Vera> I got curious as to how much engines' search tree pruning had improved in recent years. I think that the results, while interesting, are perhaps off-topic for this page, so I have posted the details in AylerKupp chessforum (kibitz #1391) for those who are interested. |
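The hash size being compared in these posts is the engine's transposition table, which is just the standard UCI option "Hash" (in MB); "Threads" is the equally standard option for the number of cores. A short sketch of setting both with python-chess before analysing; the engine path, position, and values are placeholders:

# Sketch: configuring standard UCI options before analysis. "Hash" is the
# transposition-table size in MB and "Threads" the number of search threads;
# larger values generally let the engine reach a given depth sooner.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")              # placeholder path
engine.configure({"Hash": 1024, "Threads": 4})                         # e.g. 1024 MB hash, 4 cores
info = engine.analyse(chess.Board(), chess.engine.Limit(time=30))      # substitute the real position
print("depth:", info.get("depth"), "nps:", info.get("nps"))
engine.quit()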
|
Jun-08-15 | | andrewjsacks: Perfect style in which to play such a beast. Should have been done in more games by Garry, Kramnik, et al. |
|
Dec-23-15 | | yurikvelo: http://pastebin.com/LS8hEbjx
Multi-PV analysis of this game. What Kasparov attributed to "superior intelligence" turns out, according to Stockfish, to be a blunder allowing mate in 19. |
|
Dec-23-15 | | yurikvelo: <AylerKupp> Nothing depends on hardware. Komodo or Stockfish would easily smash DB-2 even at 20 kn/sec, i.e. Komodo playing on a low-end smartphone (single-core ARM A7) at a blitz TC (1' + 2") would smash DB2 at 120' + 30"
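If one wanted to test that claim, UCI engines can be capped by node count instead of time, which is a rough way to emulate very slow hardware; the exact per-move budget would be the nodes-per-second figure times the seconds you expect per move. A sketch with python-chess, where the engine path, position, and the 20,000-node budget are all placeholders:

# Sketch: limiting the engine to a fixed node budget per move to approximate
# the low-end-hardware scenario described above.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("komodo")    # placeholder path
board = chess.Board()                                     # substitute a real position
result = engine.play(board, chess.engine.Limit(nodes=20_000))
print("move chosen on a small node budget:", board.san(result.move))
engine.quit()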
|
Sep-04-16 | | SChesshevsky: Related to the horizon difference for Deep Blue '97 vs. today's programs, I'd be interested to see how today's programs assess the position from 25. Ne3 and go from there. It felt like Deep Blue's repositioning with 25...Be7 and then ...Bg5 maybe wasn't best. With a horizon of, say, 8 moves, Deep Blue must have figured it was better than, or at least equal to, its assessment at 25. Ne3 by the time it reached 32. g6, right? Maybe based on winning the exchange, but maybe overstressing material and underestimating the passed pawn. I've seen another computer game that also seemed to overstress material relative to a passed pawn in the endgame: Ponomariov vs Fritz, 2005 |
|
Dec-02-17 | | The Rocket: <Related to the horizon difference for Deep Blue '97 vs. today's programs, I'd be interested to see how today's programs assess the position from 25. Ne3 and go from there.> Interestingly, Kramnik's opponent in the 2006 man vs. machine match, Deep Fritz 10, opts for the same dubious g4, f5 line. Current engines, however, have things worked out and refrain from these impulses to win material. Btw, I am far from certain that Kasparov saw all the complications after g4 and that they favoured White. Also, Kasparov is probably slightly worse if Deep Blue just plays 22...Bg6. I think the computer outplayed him until it overreached. |
|
Jan-13-19 | | thegoodanarchist: Today's quote of the day is about this game, I think: <What had inspired Kasparov to commit a mistake? His anxiety over Deep Blue's forty-fourth move in the first game: the move in which the computer had moved its rook for no apparent purpose. Kasparov had concluded that the counterintuitive play must be a sign of superior intelligence. He had never considered that it was simply a bug.> --- Nate Silver |
|
Jan-13-19 | | thegoodanarchist: Since Deep Blue won the match, I was thinking this would make a good GOTD with the title: "A Feature, Not a Bug" |
|
Jan-13-19 | | thegoodanarchist: < latvalatvian: I would rather ask my dog about the pros and cons of the king's gambit than Fritz, and I believe my dog would have a better answer.> Then I won't play chess for money against your dog! |
|
Nov-05-21 | | ismet: You cannot win against any computer chess program if you don't compress the program into the board |
|
Dec-13-22 | | DouglasGomes: Deep Blue's play was heavily criticized in this game, which in turn led to comparisons of move quality between game 1 and game 2. However, it played decently. 22...g4 was a mistake (after 22...Bg6, Black is not worse), but it led to sharp complications that cannot be resolved by general principles. Trading queens was a strategic mistake; Black must instead keep relying on dynamic resources. 33...Qc7, to threaten ...Nxg6, was better. |
|
Dec-14-22 | | SChesshevsky: <Deep Blue's play was heavily criticized...> Agree that it seemed to play decently, so maybe some of the criticism was unfair. Even 22...g4 is probably explainable. Guessing DB evaluated itself as better: more space, more threats, better bishops. Going for activity in such cases is not unusual. Think the problem was that it totally missed the value of the resulting protected passed pawn on the sixth. Believe DB thought it was better materially, as evidenced by its trading down with the queen exchange. Unfortunately, the rule of thumb is that a protected passed pawn on the 6th is usually worth a piece, which would actually put DB down material. Figuring this might've just been a programming issue or oversight, likely spotted and easily corrected. The calculation was probably accurate, but maybe a bad evaluation algorithm just spat out a bad number. |
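Purely as an illustration of the kind of evaluation term being discussed (not Deep Blue's actual evaluation, which was implemented in hardware and was never published), here is a toy passed-pawn bonus in Python; every weight in it is invented for the example:

# Toy illustration only: a rank-scaled bonus for passed pawns, of the kind a
# hand-tuned evaluation might use. The weights are made up, NOT Deep Blue's.

# Bonus in centipawns indexed by rank (counted from the pawn's own side).
PASSED_PAWN_BONUS = {2: 10, 3: 20, 4: 40, 5: 80, 6: 160, 7: 260}
PROTECTED_MULTIPLIER = 1.5   # a protected passer is worth considerably more

def passed_pawn_term(rank, protected):
    """Centipawn bonus for a single passed pawn on `rank` (2-7)."""
    bonus = PASSED_PAWN_BONUS.get(rank, 0)
    return bonus * PROTECTED_MULTIPLIER if protected else bonus

# Under these made-up weights, a protected passer on the 6th is worth
# 160 * 1.5 = 240 centipawns, i.e. an engine that underweights this term
# can easily misjudge who is effectively ahead in material.
print(passed_pawn_term(6, protected=True))   # 240.0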
|
Dec-14-22 | | HeMateMe: Kasparov was himself beaten in a WC game by Anand, when Vishy sac'd the exchange (rook for minor piece), which gave him connected passers on the 4th rank. Kaspy had to resign when the two pawns got a head of steam. |
|
Dec-14-22 | | SChesshevsky: <HeMateMe> An excellent spot! |
|
Jun-22-23 | | Whitehat1963: How would this version of Deep Blue fare against AlphaZero? Could it get even one draw in 100 games? More importantly, how would Carlsen do against AlphaZero? Could he win one? Get a draw or two? In a thousand games? |
|