< Earlier Kibitzing · PAGE 6 OF 6 ·
|May-30-15|| ||latvalatvian: 4 should read "what it's like to be excited"|
|May-30-15|| ||latvalatvian: A human learns chess but a computer is programmed. If you're playing someone or something that hasn't learned chess, you are not playing chess.|
|May-30-15|| ||Sally Simpson: Hi latvalatvian,
If I may add:
7. Have no knowledge of previous wins/defeats/draws against an opponent (no desire for revenge.)
8. Cannot be put off by OTB facial gestures etc... or noise.
9. Cannot swindle, knowingly play a bad move in a lost position hoping for a blunder. Humans win this way every day.
10. They don't even know they are playing a game.
Computers have their part to play in chess but their role is overrated. Constantly playing them will lose you the art of playing against a human and (unless you set the position up on a full-sized board) it will destroy your full-board vision.
|May-30-15|| ||alexmagnus: <latvalatvian> What about self-learning programs? :D
Or a human who has no idea of the things you mentioned but is good at chess?|
|May-30-15|| ||alexmagnus: I once wrote a self-learning program which learned to play quite decent Tetris. And that knowing just the very basics of learning algorithms.|
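A "very basics" learning loop of the sort described can be as simple as hill-climbing the weights of an evaluation function. The sketch below is a generic illustration, not the actual Tetris program: the scoring function and its "ideal" weights are invented stand-ins for "play a game and report how well it went".

```python
import random

def play(weights, rng):
    """Stand-in for 'play a game with this evaluation and return a score'.
    The (hypothetical) ideal weights are [3, -1, 2]; closer is better."""
    target = [3.0, -1.0, 2.0]
    error = sum((w - t) ** 2 for w, t in zip(weights, target))
    return -error + rng.gauss(0, 0.01)   # noisy score, higher is better

def hill_climb(rounds=2000, step=0.2, seed=0):
    rng = random.Random(seed)
    weights = [0.0, 0.0, 0.0]
    best = play(weights, rng)
    for _ in range(rounds):
        # perturb the current weights and keep the change only if it scores better
        candidate = [w + rng.uniform(-step, step) for w in weights]
        score = play(candidate, rng)
        if score > best:
            weights, best = candidate, score
    return weights, best

weights, score = hill_climb()
print([round(w, 1) for w in weights])    # drifts toward the target weights
```

No gradients, no theory: just "try a small random change, keep it if the program plays better", which is enough to tune an evaluation surprisingly well.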
|May-30-15|| ||OhioChessFan: I am sure I'm not the only person who's been surprised how strong the engines have gotten. An opening book, endgame database, decent evaluation algorithm, pruning to increase depth, and the tactical chops to destroy any mistake are all it takes to be better than the best human players. A bit depressing, really, but there you go.|
|May-30-15|| ||alexmagnus: <OCF> Modern engines would probably not even need an opening book and endgame TBs.|
|May-30-15|| ||Penguincw: Analysis of this game: https://www.youtube.com/watch?v=3yB....|
|May-30-15|| ||latvalatvian: Oh, computers can't teach chess either. I have asked my computer lots of questions and it has no answer. Some programs pretend to be interactive but it's a scam.|
|May-30-15|| ||latvalatvian: If a computer defeats me, I couldn't care less. What do computers know about the divine love that surrounds all chess knowledge.|
|May-30-15|| ||latvalatvian: I would rather ask my dog about the pros and cons of the king's gambit than Fritz, and I believe my dog would have a better answer.|
|May-30-15|| ||latvalatvian: I would rather memorize the phone book than look at the boring and lifeless variations of a computer.|
|May-31-15|| ||dumbgai: By <latvalatvian>'s logic, computers can't do anything, including compute. In order for a computer to compute they would have to know:|
1. What computing is.
2. What the numbers they're computing looked like.
3. What it felt like to compute.
4. What it felt like to be excited about computing.
|May-31-15|| ||latvalatvian: Computers don't compute then. They were programmed to compute and did not learn to compute. So no, they don't compute.|
|May-31-15|| ||latvalatvian: Someone computed a computer and then exclaimed, "Look it has a soul!" It's like blowing on a feather and claiming it's alive when it moves.|
|May-31-15|| ||AylerKupp: It looks like we have a new troll in our midst. Or perhaps an old troll with a new name. Oh well, we all know what to do even if we don't always do it.|
|Jun-02-15|| ||AylerKupp: <Alan Vera> I think that Komodo 9 would win easily. A lot would depend on the hardware that Komodo 9 was running on. Deep Blue was capable of evaluating 200 million positions per second using 30 nodes, each containing a 120 MHz processor, assisted by 480 special-purpose chips that supported position evaluation (http://en.wikipedia.org/wiki/Deep_B...). On, say, an 8-core, 3 GHz system (and much faster systems are readily available and affordable), it is doubtful that Komodo 9 could evaluate anywhere near 200 million positions per second. To give you an idea, on my antiquated 4-core, 32-bit, 2.66 GHz Intel system, in the test described below, Komodo 9 was evaluating only about 1.8 million positions per second.|
But today's chess software is much, much better than the chess software of 1997. Modern chess engines' evaluation functions are implemented in software, not hardware, which makes them much easier to change. And they have been tuned in tens of thousands of engine vs. engine games, making them much more accurate.
And not only is Komodo's position evaluation more accurate, but its searching algorithms and search tree pruning heuristics are much more effective, eliminating from consideration almost all of the positions that are not worth evaluating. For example, Deep Blue relied primarily on brute force, and it would typically search to a depth of only 6 to 8 ply, although in some positions (presumably greatly simplified endgame positions) it could search about 20 plies deep (see link above). So even if the computer that Komodo ran on was only capable of evaluating 2 million nodes per second, it could search much deeper, and correctly evaluate the results of moves much better than Deep Blue.
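The effect of pruning can be seen in a toy comparison (a sketch on a random game tree, not engine code): plain minimax visits every node, while alpha-beta reaches the identical root value but skips every branch that cannot affect the result, so those positions cost zero evaluation time.

```python
import random

def build_tree(depth, branching, rng):
    """Random game tree: leaves are evaluations, internal nodes are move lists."""
    if depth == 0:
        return rng.uniform(-10, 10)
    return [build_tree(depth - 1, branching, rng) for _ in range(branching)]

def minimax(node, maximize, counter):
    counter[0] += 1
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximize, counter) for child in node]
    return max(values) if maximize else min(values)

def alphabeta(node, maximize, alpha, beta, counter):
    counter[0] += 1
    if not isinstance(node, list):
        return node
    best = float('-inf') if maximize else float('inf')
    for child in node:
        v = alphabeta(child, not maximize, alpha, beta, counter)
        if maximize:
            best = max(best, v); alpha = max(alpha, v)
        else:
            best = min(best, v); beta = min(beta, v)
        if beta <= alpha:      # remaining siblings cannot change the result: prune
            break
    return best

rng = random.Random(42)
tree = build_tree(6, 4, rng)           # depth 6, 4 moves per position
full, pruned = [0], [0]
v1 = minimax(tree, True, full)
v2 = alphabeta(tree, True, float('-inf'), float('inf'), pruned)
assert v1 == v2                         # same root value
print(full[0], pruned[0])               # alpha-beta visits far fewer nodes
```

Modern engines go well beyond this with forward-pruning heuristics (null-move, late-move reductions, etc.) that, unlike alpha-beta, can in principle discard a relevant line, trading a small risk for a much deeper search.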
So I decided to put it to a test. I had Komodo 9 evaluate the following position that arose recently in The World vs Naiditsch, 2014 after 13...d5:
At a time control of 40 moves in 2 hours that is an average of 3 minutes per move. In my antiquated system Komodo 9 was able to search to a depth of 23 ply in a little over 2 minutes. Consider what the results would be if in a chess game you were able to calculate 8 half-moves ahead and your opponent was able to calculate 23 half-moves ahead and evaluate the positions with better accuracy! You would probably have no chance.
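The depth gap above can be made concrete with a back-of-the-envelope effective-branching-factor calculation, using the figures quoted in this thread (the "a little over 2 minutes" is taken as roughly 126 seconds; all the numbers are rough assumptions, not measurements):

```python
# If an engine searches `total_nodes` positions to reach `depth` ply,
# its effective branching factor b satisfies total_nodes ~ b ** depth,
# so b = total_nodes ** (1 / depth).
def effective_branching(nodes_per_sec, seconds, depth):
    total_nodes = nodes_per_sec * seconds
    return total_nodes ** (1.0 / depth)

# Deep Blue: ~200 million positions/sec, ~3 minutes per move, ~8 ply deep
deep_blue = effective_branching(200e6, 180, 8)

# Komodo 9 (figures quoted above): ~1.8 million positions/sec,
# ~2 minutes to reach 23 ply
komodo = effective_branching(1.8e6, 126, 23)

print(round(deep_blue, 1), round(komodo, 1))   # roughly 21 vs. 2.3
```

In other words, pruning lets the modern engine examine only a couple of "moves" per position on average instead of twenty-odd, which is where the extra 15 ply of depth comes from despite a hundred-fold slower node rate.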
So, if Deep Blue were to play a 6-game match against Komodo 9 at classical time controls of 40 moves in 2 hours, running on my antiquated computer, I would guess that the final result would be 5-1 or 5½-½ in favor of Komodo, and that would be giving Deep Blue the benefit of the doubt.
And these results are not unique to Komodo 9. I had Houdini 4 and Stockfish 6 analyze the same position and in about 2½ minutes Houdini was also able to search to a depth of 23 ply and Stockfish, the "deep searching" champion, was able to reach a depth of 29 ply in less than 3 minutes. Houdini was evaluating about 4.9 million positions per second and Stockfish was evaluating about 2.9 million positions per second. So a match between Deep Blue and Houdini 4 or Stockfish 6 would probably have similar results.
|Jun-03-15|| ||dumbgai: <AylerKupp> I agree with your assessment, but it should be noted that Deep Blue had much of its search and evaluation done using its custom-built hardware, which greatly improved computational speed. In this respect, Deep Blue would have an advantage.|
Still, I would wager my money on a moderately fast machine that was smart (Komodo/Stockfish/Houdini), over a super-fast machine that was much less smart (Deep Blue).
|Jun-06-15|| ||jdc2: <AylerKupp> On a 2.1Ghz laptop with 3072MB hash,
SF 020615 gets to 24 ply in 14.4 seconds for that position.|
|Jun-07-15|| ||AylerKupp: <<dumbgai> it should be noted that Deep Blue had much of its search and evaluation done using its custom-built hardware, which greatly improved computational speed.>|
Yes, I know. It was much more efficient, probably by a factor of at least 10X, in evaluating a position. But its search tree pruning heuristics were probably primitive, if it had any at all (besides alpha-beta pruning, which I consider an algorithm and not a heuristic), so it HAD to evaluate a lot more positions. After all, if you don't have to evaluate any of the positions in a search tree branch that was pruned, the time for each position evaluation in that branch is zero, and you can't get more efficient than that!
That's probably why Deep Blue could typically only reach search depths of 6 or 8 ply, although in presumably much simplified positions it could get to 20. So it evaluated a lot of branches that it didn't need to evaluate. And, of course, Deep Blue didn't have endgame tablebases available.
|Jun-07-15|| ||AylerKupp: <jdc2> You clearly have a 64-bit laptop with lots of memory. I only have 4 GB on my 32-bit desktop (of which about 3.3 GB is usable) and use a 1024 MB hash table. On that machine Stockfish 6 reached a 24-ply search depth in 31 seconds.|
After I wrote my response to <Alan Vera> I got curious as to how much engines' search tree pruning had improved in recent years. I think that the results, while perhaps interesting, are off-topic for this page, so I have posted the details in AylerKupp chessforum (kibitz #1391) for those who are interested.
|Jun-08-15|| ||andrewjsacks: Perfect style in which to play such a beast. Should have been done in more games by Garry, Kramnik, et al.|
|Dec-23-15|| ||yurikvelo: http://pastebin.com/LS8hEbjx
this game multiPV
What Kasparov attributed to "superior intelligence" turns out, per Stockfish, to be a blunder allowing mate in 19.
|Dec-23-15|| ||yurikvelo: <AylerKupp> nothing depends on hardware.|
Komodo or Stockfish will easily smash DB-2 at 20 kn/sec, i.e. Komodo playing on low-end smartphone (single core ARM-A7) at blitz TC (1' + 2") will smash DB2 at 120' + 30"
|Sep-04-16|| ||SChesshevsky: Related to the horizon difference for Deep Blue 97 vs today's program, I'd be interested to see how today's program assesses the position from 25. Ne3 and goes from there. |
It felt like Deep Blue's repositioning of its bishop from e7 to g5 maybe wasn't best.
With a horizon at say 8 moves, Deep Blue must figure it's better or at least equal to the assessment at 25. Ne3 when it gets to 32. g6, right? Maybe based on winning the exchange, but maybe overstressing material and underestimating the passed pawn.
I've seen another computer game that also seemed to overvalue material relative to a passed pawn in the endgame:
Ponomariov vs Fritz, 2005