< Earlier Kibitzing · PAGE 23 OF 39 ·
Later Kibitzing> |
Apr-10-19
 | | AylerKupp: <<scholes> Hardware used in cccc.> Thanks. But where did you get that information? I've been trying, unsuccessfully, to find it, since the chess.com site doesn't seem (to me) to offer a way to do so. I'm surprised at the similar cost of the CPU and GPU hardware. I think that the CPU cost is misleading if Leela Chess Zero uses only 2 threads (cores?), since it would run just as efficiently on a 2-core machine. I also read somewhere (and I can't remember where, and I can't find it) that for the 2019 CCCC Leela Chess Zero was able to use 4 Nvidia RTX 2080 Ti GPUs. So there's an inconsistency there. And Stockfish does have scaling "problems" because the current version (Stockfish 10) is limited to 512 cores, and Stockfish 8 (the version used in the 2017-2018 AlphaZero matches) was limited to 128 cores. But of course in practice it did not come anywhere near those limits. I'm not sure what in the program's architecture would limit it to any particular number, other than the amount of physical memory available (to prevent disk thrashing) or the OS limiting it. On this page https://www.geeksforgeeks.org/maxim... the number of threads that can be created within a C process under the Ubuntu Linux distribution is indicated to be 32,754, so there shouldn't be much limitation as far as the OS is concerned when using the CentOS Linux distribution. Of course a server with 32,768 cores would be "somewhat" expensive. And imagine the electric bill! And why would the SSD be used in a RAID 1 configuration? RAID 1 does little or nothing for performance; it's used to mirror the data for reliability, and I would think it would degrade write performance somewhat. I don't know what the SSD would be used for other than holding the Syzygy *.rtbw files, which, since they provide the win/loss/draw information needed for the MCTS playouts, would clearly be sufficient. And since they don't change, they wouldn't even need to be mirrored. 
I'll calculate and post the computational processing advantage that Leela Chess Zero had over Stockfish 10 when I get a chance. |
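The core-count scaling being debated in the post above is essentially Amdahl's law; a minimal sketch (the 5% serial fraction is an invented figure purely for illustration):

```python
def amdahl_speedup(cores, serial_fraction):
    """Maximum speedup on `cores` processors when `serial_fraction`
    of the work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Assumed 5% serial fraction, for illustration only.
for n in (2, 128, 512, 32768):
    print(f"{n:6d} cores -> {amdahl_speedup(n, 0.05):6.2f}x speedup")
```

With even a small serial fraction the curve flattens quickly (it can never exceed 1/0.05 = 20x here), which is one reason a 512-core cap costs less in practice than the raw core counts suggest.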
|
Apr-10-19
 | | AylerKupp: <<john barleycorn> I recommend a good Chablis to wash the slow brain cells away and strengthen the remaining ones.> I'll drink to that! |
|
Apr-10-19 | | MrMelad: <AylerKupp> this has got to be one of the weirdest drivels I've ever read. I haven't decided yet if it makes this discussion interesting or just terribly ridiculous, but I have to admit I LOLed a little bit reading through your last couple of posts. You seem to have all the answers and yet, somehow, you keep reaching the wrong conclusions. I'm quite baffled, to tell you the truth, as to how you can use Amdahl's law to support your argument, as it pretty much states word for word what I've been trying to convey to you over the past month. And the dog analogy was a joke designed to ridicule your arguments; to see you seriously accept it is hilarious. It means I hit the nail right on the head. Are you seriously comparing neurons in the human brain to silicon-based semiconductor logic gates? |
|
Apr-10-19 | | nok: <AK: If they can be added in the early stages then a substantial schedule improvement could be achieved, although the improvement is again never linear because of the increased amount of communication required between all the software engineers. But if you add this number later in the project the amount of time needed to complete it could very likely increase, because a portion of the time of the existing project personnel needs to be diverted to bringing the new personnel up to speed. And the later the additional personnel are added, the worse the situation becomes, since the ratio of the amount of time needed to bring the added personnel up to speed divided by the amount of effort remaining to complete the project is greater.> You should write a book.
(if you didn't actually write the Bible) |
|
Apr-10-19
 | | AylerKupp: <<MrMelad> this has got to be one of the weirdest drivels I've ever read. I haven't decided yet if it makes this discussion interesting or just terribly ridiculous, but I have to admit I LOLed a little bit reading through your last couple of posts.> Well, I'm glad that I was able to LOL you, if only just a little bit. And sometimes the most ridiculous discussions are the most interesting, so ... <You seem to have all the answers but yet, somehow, you keep reaching the wrong conclusions.> I never claimed to have <all> the answers, but I think that I have <some> of the answers. Now all we have to consider is, as <john barleycorn> pointed out, <What was the question?> <I'm quite baffled to tell you the truth as to how you can use Amdahl's law to support your argument as it pretty much states word for word what I'm trying to convey to you over the past month.> Great, it seems that we have reached complete agreement, since each one of us claims to state word for word what we're trying to convey to the other. So now maybe we can stop this discussion (drivel?), since there's not much to be gained from it if we both agree 100%. I think that the point of diminishing returns has been more than reached. <And the dog analogy was a joke designed to ridicule your arguments, to see you seriously accept it is hilarious. It means I hit the nail right on the head. Are you seriously comparing neurons in the human brain to silicon based semiconductor logic gates?> It's hard to tell your jokes from your serious statements since they both sound pretty much the same. As far as "seriously" comparing neurons in the human brain to silicon-based logic gates, you were the first one to do that, so I guess you'll have to tell me whether you were serious or joking. |
|
Apr-10-19
 | | AylerKupp: <nok> You should write a book. (if you didn't actually write the Bible). Yes, I thought about writing a book about software project management, my experiences in the field, and describing methods that might help others. But shortly after I retired and had the time to do it, I discovered this site and decided to take a mental break from software project management. It's going on 6 years since that time and I'm still on a mental break. And before I get back to that project I have a cookbook for absolute beginners in mind that I've actually started writing. I intend to include a chapter showing others how to deal with the parallel activities of preparing for and cooking a multi-course dinner using the Unified Modeling Language (UML), in a manner similar to designing a multiprocessor system where multiple processes and threads executing on different processors need to communicate to exchange information in order to get the work, or in this case the dinner, done. But, no, I didn't actually write the Bible, in case you were wondering. |
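The dinner-as-multiprocessor analogy maps directly onto ordinary threads reporting back over a shared channel; a minimal sketch, with the dish names and timings invented for illustration:

```python
import threading
import queue

done = queue.Queue()  # channel the "cooks" use to report finished dishes

def cook(dish, minutes):
    # Each dish is an independent task; a real kitchen (or a real thread)
    # would spend `minutes` working here, so we just record the work done.
    done.put((dish, minutes))

dishes = [("soup", 30), ("roast", 90), ("salad", 10)]
threads = [threading.Thread(target=cook, args=d) for d in dishes]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Total wall-clock time is bounded by the longest dish, not the sum,
# just as a parallel program is bounded by its longest-running thread.
wall_clock = max(m for _, m in dishes)
print(f"{done.qsize()} dishes ready, wall clock ~{wall_clock} min")
```

The queue here plays the role of the inter-process communication the post describes: the cooks never talk to each other, only to the shared channel.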
|
Apr-11-19 | | MrMelad: <So now maybe we can stop this discussion (drivel?) since there's not much to be gained from it if we both agree 100%. I think that the point of diminishing returns has been more than reached.> Well, I'm not trying to convince <you> anymore, but I am trying to offset some of your claims, as they seem to focus on diminishing and discrediting the accomplishments of AlphaZero and Leela. For example, your argument about AlphaZero having a 30x or more computational advantage over Stockfish ignores the fact that these are different approaches to algorithms in general, and that Stockfish can't even use the <type> of computational method that allegedly gives AlphaZero such an advantage. You use those arguments to "warn" people against giving too much credit to AlphaZero, as if the competition between Stockfish and AlphaZero were between two similar algorithms, one of which simply had a huge computational advantage. I think that's wrong and misleading. AlphaZero employs a novel and different approach to game-playing algorithms, and it also extends way beyond the scope of chess and even beyond the scope of games in general. It optimizes the strategy of solving a problem without crafting it explicitly. It is possibly the first algorithm to make successful use of GPUs in chess-playing software, thus utilizing more computational power for the same price and/or energy consumption. I don't think your intentions are malicious though, I just don't think you understand how AlphaZero works, i.e., how reinforcement learning or deep learning works. And that's okay, most people don't; even the people who invented this stuff aren't really sure how and why it works so well. <It's hard to tell your jokes from your serious statements since they both sound pretty much the same> Yeah, I know sarcasm doesn't always translate well on the internet, but I sometimes forget. My apologies for using it; it's not that funny anyway. |
|
Apr-11-19 | | scholes: <AK> this information is available in the CCCC chat. Leela uses 4 GPUs, but due to an algorithmic bottleneck, GPU utilization remains at around 50%. Stockfish scaling has been tested up to 192 cores, or 384 threads. It keeps gaining Elo at an almost constant rate up to that thread count. I am not aware of any tests beyond that thread count. I meant to say that currently the maximum possible strength of Stockfish remains greater than Leela's. |
|
Apr-11-19 | | MrMelad: <AylerKupp> Have a look at this video from Stanford University that explains the difference between CPUs and GPUs in the context of deep learning. https://www.youtube.com/watch?v=6Sl... Here is the part where he says "You can't really compare CPU cores to GPU cores apples to apples" https://www.youtube.com/watch?v=6Sl... I recommend this lecture series as a very good basis for deep learning. |
|
Apr-11-19 | | nok: <cooking a multi-course dinner using the Unified Modeling Language (UML)> Should be fun. |
|
Apr-15-19 | | diceman: <cooking a multi-course dinner using the Unified Modeling Language (UML)> Garbage in, garbage out. |
|
Apr-17-19 | | MrMelad: <AylerKupp>
It's hard for me to understand how such statements can coexist: <I am in <NO WAY> trying to discredit the accomplishments of AlphaZero and Leela Chess Zero> and
<there is <nothing> original in the algorithms used in AlphaZero> or <what AlphaZero accomplished is the best <integration> of these algorithms> Let's ignore for a moment the fact that the DeepMind team consisted of the same individuals who helped create this very thing called "reinforcement learning" in the first place. This in itself destroys your claim that AlphaZero was not innovative, but it may not be your worst blind spot. Let's also ignore the fact that Google acquired DeepMind for 300M dollars, and it's not very likely that Google would spend such a huge amount of money if innovation and patents weren't involved. Obviously Google thought the work done in DeepMind <before it was integrated with Google's hardware> was worth <a ton> of money. Is "integrating" different ideas into a working scheme not innovation in your eyes? That's preposterous, <AylerKupp>. In every field of science, recognition is always given both to the person who theorized something and to the person who <proved> that very thing. In computer science it is customary to attribute patents to both the scientist and the implementer. Suggesting AlphaZero is not innovative because it is merely an "integration" is the same as saying: the special theory of relativity was not innovative because the Lorentz transformation already existed and Einstein merely "integrated" known ideas into a working theory. It's like saying his work on the photoelectric effect was not innovative because it merely "integrated" linear equations with some experimental data. It's like saying the new black hole picture is not innovative because it merely "integrated" known practices in tomosynthesis (software) with some new telescopes (hardware). You are wrong about your definition of "innovation" in the first place, so no wonder pointing out the massive evidence for many ingenious innovations in the AlphaZero project is not very effective in this argument. 
<I am trying to make people aware that the results of the AlphaZero and Stockfish matches are inconclusive at best <IF> what you are trying to find out is what's the best approach and algorithms to implement the best chess-playing engines> As I and others have pointed out, the best "approach" is subjective and has dependencies. When I design an algorithm, if I know in advance that a GPU is available I will most likely take a different design approach than if only a CPU is available. Both designs are valid approaches; any attempt to disconnect them from the hardware and implementation to compare "approaches" is irrelevant and pointless. Again, our definitions must differ; "approach" in your world makes no sense to me. |
|
Apr-17-19 | | MrMelad: <AylerKupp: Suppose a 100-game match was held between AlphaZero and Leela Chess Zero, both of which have similar (though not identical) algorithms and architectures and, of course, likely different implementations of those algorithms. ...
If the results of the match were a near tie (like the previous TCEC's Leela Chess Zero vs. Stockfish Superfinal, even though Leela Chess Zero had a substantial computing-capability advantage over Stockfish), then I would conclude that the performance of the algorithms in AlphaZero and Leela Chess Zero was also approximately equal.> As scholes and I already showed you, as reinforcement/deep-learning algorithms, both Leela's and AlphaZero's strength also depends on the size of their training data sets. It is not something you can ignore. The result of your hypothetical match doesn't prove anything regarding an "approach" to software if it depends on the time and hardware you let it train on. Leela might employ a better network architecture and better search or filter algorithms, and so a better "approach", but would still lose because it didn't train nearly as much as AlphaZero. Of course, if both trained for the same time on the same hardware and were benchmarked with the same hardware, one could compare their performance and determine the best "approach". That would be comparing "apples to apples", since we would be benchmarking two different algorithms with the same hardware, something which is very, very far from your attempts to: 1. Use an imaginary general "computational capability" factor between GPUs and CPUs (which I've shown multiple times already is illogical) 2. Compare two different algorithms on two different hardware configurations (also illogical). <What he actually said starting at ~ 00:07:00 was that CPUs and GPUs can't be compared because "they are <qualitatively different>" and the much larger number of available cores in the GPUs means that they can effectively perform a larger number of more limited operations in parallel (no surprise here!). So <CPU cores> cannot be compared to <GPU cores> <because of the kind of things that they can each do most effectively>. 
That's what the "apples to apples" comment was addressing, not that their computational capabilities can't be compared.> This is a sophistic argument, <AylerKupp>. What he (a Stanford University lecturer in an important course) said was that GPUs and CPUs cannot be compared, <period>. You are just reaching here, and it's a bit annoying. Just admit you disagree with his opinion; don't put a different meaning on his words when he specifically bothered to state otherwise. |
|
Apr-17-19 | | MrMelad: <AylerKupp>
Imagine I've harnessed all the power of the supermassive black hole in the center of our galaxy to perform only CPU operations. Let's say I was very good at it and as a result I've managed to build a computer that can calculate 10^120 moves per second (the assumed size of the chess game tree) and store 10^40 KB in memory (the assumed number of possible positions in chess). Now my algorithm does not need to "evaluate" positions; I can simply calculate to the end from the starting position in a second and create a 32-piece tablebase. Isn't that a great approach? Is it valid out of the context of the available hardware? Which is the better approach, this one, or Stockfish's, or AlphaZero's? Obviously if I have a slightly lesser computer, for example a mere 10^100 operations per second and 10^30 KB of storage space, this approach would be useless and Stockfish or AlphaZero would be better approaches. How can you disconnect the available hardware from the algorithm design? Let's take another analogy. What if I only had 4 KB of memory and a 1 Hz CPU? In this case my approach would just be counting the number of pieces and scoring the position based solely on material. Is this a better approach than Stockfish or AlphaZero? In this hardware-limited scenario Stockfish and AlphaZero would run out of memory and lose on time every single game, so isn't my approach just better? No: the "best approach" depends on the available hardware. Different hardware makes different approaches efficient or inefficient. |
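The black-hole arithmetic above is easy to check; a quick sketch using the post's own assumed figures (10^40 positions, and, as the 10^40 KB figure implies, roughly 1 KB per position — both assumptions, not measurements):

```python
positions = 10**40          # assumed number of reachable chess positions
bytes_per_position = 1_000  # ~1 KB per position, as the post assumes
atoms_in_earth = 10**50     # rough order-of-magnitude estimate

storage_bytes = positions * bytes_per_position
print(f"32-piece tablebase: ~10^{len(str(storage_bytes)) - 1} bytes")

# Even at one stored position per atom, Earth has only ~10^10 atoms to
# spare per position -- which is why the scenario needs black-hole hardware.
print(f"atoms per position: ~10^{len(str(atoms_in_earth // positions)) - 1}")
```

The point of the sketch is only the orders of magnitude: the storage alone dwarfs any physically buildable machine, so the "approach" is inseparable from the hardware, exactly as the post argues.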
|
Apr-17-19 | | scholes: Leela wins CCC season 7
https://www.chess.com/news/view/lc0... |
|
Apr-17-19 | | devere: Unlike Google's AlphaZero publicity stunt, this seems like a significant accomplishment on a level playing field: https://www.chess.com/news/view/lc0... |
|
Apr-17-19 | | MrMelad: <Lc0 Wins Computer Chess Championship, Makes History> Ha, that's nothing. The real history-making would be when CG actually opens up a page for Leela. <devere: Unlike Google's Alphazero publicity stunt, this seems like significant accomplishment on a level playing field> If Leela was inspired by AlphaZero and based on it, wouldn't that make AlphaZero more than just a publicity stunt? Wouldn't it make it at least an... inspiration? (for example) |
|
Apr-17-19
 | | AylerKupp: <<MrMelad> It's hard for me to understand how such statements can coexist> And I find it hard to understand why you find it hard to see that such statements can coexist. My second statement you quoted mentioned <algorithms> and my third statement you quoted mentioned <integration>. How do they relate to each other? You can fail to be original in developing algorithms and still do the best job of integrating them to form a superior product. <Let's ignore for a moment the fact that the DeepMind team consisted of the same individuals who helped create this very thing called "reinforcement learning" in the first place.> I'm surprised that you say that. A simple web search would have shown you multiple references to articles and books on reinforcement learning dating back to at least the early 2000s. And in this post, AlphaZero (Computer) (kibitz #412), I listed various concepts <applied to chess engines> that greatly preceded AlphaZero's use of them; the application of neural networks to chess engines and the use of reinforcement learning to train chess engines' neural networks, to name just two. And even if all the authors of all these articles on reinforcement learning were members of the AlphaZero team, that doesn't mean that reinforcement learning would have been original <to the development of AlphaZero>. It would simply mean that the developers used their prior knowledge when developing AlphaZero. And who doesn't use their prior knowledge, possibly of original concepts, when developing a new item? That doesn't mean that the item is original. <Let's also ignore the fact that google acquired DeepMind for 300M dollars and it's not very likely that google would spend such huge amount of money if innovation and patents weren't involved. Obviously google thought the work done in DeepMind <before it was integrated with google's hardware> was worth <a ton> of money> Yes, let's definitely also ignore it. 
If it didn't have anything to do with AlphaZero, what does it then have to do with AlphaZero's originality? And what difference does the amount of money that Google/DeepMind spent developing some of the concepts used in AlphaZero make? Is there a threshold where, after you spend a sufficient amount of money, the work you do becomes "original" even though it was described in published articles and books earlier? The only thing that would be original would be the amount of money spent. <Is "integrating" different ideas into a working scheme not innovation in your eyes? That's preposterous> It would be innovation if there were new concepts developed and used in the integration of AlphaZero's components (hardware and software). But I don't recall hearing or reading that any new and innovative integration approaches were used. So if you take concepts that have previously been applied to the same domain and integrate them in the "usual way", then that's not innovation in the <integration> effort as far as I'm concerned. But maybe your definition of and approach to "integration" is different from mine. <Suggesting AlphaZero is not innovative because it is merely an "integration" is the same as saying ...> Really? Is that the depth that you have to resort to in order to justify your opinions? |
|
Apr-17-19
 | | AylerKupp: <<MrMelad> <You are wrong about your definition of "innovation" in the first place, so no wonder pointing out the massive evidence for many ingenious innovations in the AlphaZero project is not very effective in this argument.> I said that there were likely many innovations in the <implementation> of the <concepts> behind the development of AlphaZero, but not in the concepts themselves. So to say, as Matthew Sadler did in "Game Changer", that AlphaZero uses a "new approach to machine self-learning in chess" is, to use your apparently favorite word, "nonsense". And if my definition of "innovation" is different from yours, fine, but that doesn't mean that my definition is necessarily wrong; and since I never defined what I meant by "innovation", I don't see how you can conclude that it is wrong. And it is perhaps noteworthy that nowhere in their 2 published papers did the DeepMind team use the words "innovation", "innovative", or any of their derivatives. <As I and others have pointed out, the best "approach" is subjective and has dependencies. When I design an algorithm, if I know in advance that a GPU is available I will most likely take a different approach for the design than if only a CPU is available. Both designs are valid approaches, any attempt to disconnect them from the hardware and implementation to compare "approaches" is irrelevant and pointless.> In that case I'm sorry that you and others missed the fundamental concept of separating hardware from software as much as possible, so that enhancements in one do not entirely negate the effort previously spent on the other. <As I and scholes already showed you, as a reinforcement/deep learning algorithms, both Leela and AlphaZero strength is also dependent on their training data set size. It is not something you can ignore.> I didn't ignore it, I just included the training data set as part of the software. 
Since we (or at least I) were talking about just two general items, hardware and software, the training data set seemed better considered part of the latter rather than the former. <Leela might employ a better network architecture and better search or filter algorithms and so a better "approach" but would still lose because it didn't train nearly as much as AlphaZero.> Leela Chess Zero might very well lose a 100-game match to AlphaZero even if they were both running on hardware with similar computational performance capability, but not necessarily because it didn't train nearly as much as AlphaZero. If you have any data that compares the relative chess-playing improvements as a function of both algorithm improvements and learning quantity/quality, please provide it. Otherwise it's just conjecture on your part. <1. Use an imaginary general "computational capability" factor between GPUs and CPUs (which I've shown multiple times already is illogical)> You have <claimed> that it is illogical, but that claim is just that, a claim. Repeating it doesn't make it true. Comparing the number of operations per unit time that a computer can perform is hardly "imaginary", it's fundamental. Computers have been compared on the basis of flops for a long time, regardless of the differences in their architectures. Yes, it's not very accurate, but it's the best way we seem to have given the differences in architecture between CPUs, GPUs, and TPUs. And at least it gives us an idea, and any idea is better than just claiming that it can't be done because that seems to suit your arguments. I have had first-hand experience with people who have claimed that something can't be done because they don't know how to do it, even after I've shown them how. |
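The kind of peak-flops comparison defended above can be made concrete; the core counts, clocks, and per-cycle figures below are rough assumptions (very loosely a high-end server CPU vs. an RTX 2080 Ti-class GPU), not measured numbers:

```python
def peak_flops(cores, clock_hz, flops_per_cycle):
    """Theoretical peak: cores x clock x FLOPs issued per core per cycle."""
    return cores * clock_hz * flops_per_cycle

# Assumed, order-of-magnitude specs for illustration only.
cpu = peak_flops(cores=44, clock_hz=2.1e9, flops_per_cycle=32)   # wide SIMD units
gpu = peak_flops(cores=4352, clock_hz=1.5e9, flops_per_cycle=2)  # fused multiply-add

print(f"CPU ~{cpu / 1e12:.1f} TFLOPS, GPU ~{gpu / 1e12:.1f} TFLOPS, "
      f"ratio ~{gpu / cpu:.1f}x")
```

This is exactly the crude-but-useful comparison the post describes: it says nothing about which workloads each chip runs well (the "apples to apples" objection), but it does bound raw arithmetic throughput.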
|
Apr-17-19
 | | AylerKupp: <<MrMelad> <What he (a Stanford University lecturer in an important course) said was that GPUs and CPUs can not be compared <period>.> Watch the video again and see what his reference to apples and apples applies to in the context (which seems to be another one of your favorite words) of what he was saying, rather than the few words that immediately preceded it. In other words, try to <understand> what he was saying starting at about 00:07:00 in the video. If my correcting you is annoying to you, that's your problem and not mine, and you probably should not continue with this discussion. <Imagine I've harnessed all the power in the super massive black hole in the center of our galaxy to only perform CPU operations> Now, just who is the one reaching here?
<Let's take another analogy. What if I only had 4 KB of memory and 1Hz CPU. In this case my approach would just be counting the number of pieces and scoring the position evaluation based solely on material. Is this a better approach than Stockfish or AlphaZero?> No, and it's been done before with an engine called Tech in the early 1970s (see https://www.chessprogramming.org/Tech). It used material as its only evaluation criterion: just brute calculation force with no forward pruning. But its author was realistic in his assessment of this approach; his objective was not to develop a better chess engine but to establish a benchmark by which computer technology advances (i.e. computational capability) could be tracked. That way the playing strength of Tech would increase solely as a function of computing-capability improvement, nothing more, and it could be compared with improvements in software chess engine technology (alpha-beta pruning, search tree pruning heuristics, use of opening books and tablebases, etc.). So the use of a material-only evaluation function was recognized as inadequate even in the early 1970s. Why are you bothering to bring it up again in an attempt to support your position? Look, this subject seems to have struck an emotional chord with you. I prefer to use facts and data to support my point of view rather than come up with arbitrary and sometimes far-reaching analogies. And I learned a long time ago that using facts and data to convince someone who uses emotional arguments to support their position is fruitless. So if you want to claim "victory" (whatever that means) in this discussion, that's fine with me. But you are certainly not going to convince me with the types of emotional arguments you've been providing, and I'm not going to waste more time and effort trying to educate you by providing you with additional facts and data. |
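A Tech-style material-only evaluation really is tiny; a sketch, using the conventional piece values and assuming a FEN-style piece-placement string as input:

```python
# Conventional piece values in pawns; uppercase = White, lowercase = Black.
VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_eval(board_fen):
    """Score a FEN piece-placement field from White's point of view,
    counting material only -- no mobility, king safety, or pawn structure."""
    score = 0
    for ch in board_fen:
        if ch.lower() in VALUES:        # digits and '/' are simply skipped
            side = 1 if ch.isupper() else -1
            score += side * VALUES[ch.lower()]
    return score

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
print(material_eval(start))                   # balanced start position -> 0
print(material_eval(start.replace("Q", "")))  # White minus a queen -> -9
```

Everything a modern engine adds (search heuristics, positional terms, learned evaluation) sits on top of a core this simple, which is what made Tech a clean hardware benchmark.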
|
Apr-17-19
 | | AylerKupp: <<MrMelad> If Leela was inspired by AlphaZero and based on it, wouldn't that make AlphaZero more than just a publicity stunt? Wouldn't it make it at least an... inspiration? (for example)> Yes, definitely an inspiration, and providing substantial publicity in this field to inspire and encourage others to devote additional time and effort to make further improvements is to be commended. But somehow I doubt that inspiring others was a primary motivation for Google and the DeepMind team. I suspect that they were more focused (and I can't blame them for that in any way; they are, after all, in business to at least eventually make money) on showcasing the capabilities of their proprietary TPUs to support AI operations, and on increasing the leases and (when they decide to sell them) sales of their TPUs. However, regardless of their primary motivation, if it increases useful advances in technology, I'm all for it. A perhaps infrequent desirable result of the law of unintended consequences. And perhaps, if LeelaC0 gets enough publicity as a result of this event, it and others like it might entice DeepMind to agree to an AlphaZero vs. LeelaC0 chess match to determine the "king of the hill". It would be good if the two participants would agree to hardware configurations of equal computational capability, but I doubt that AlphaZero's team would agree to that. If it is a question of "bring what you've got" hardware-wise, then I wouldn't be surprised if AlphaZero was provided with the support of at least 4 3rd-generation TPUs. DeepMind would certainly not want to leave the results of such a match to either chance or (gasp!) merit. |
|
Apr-18-19 | | MrMelad: <AylerKupp: So to say, as Matthew Sadler said in "Game Changer", that AlphaZero uses a "new approach to machine self-learning in chess" is, to use your apparently favorite word, "nonsense"> Apparently I'm in good company; so far my opinions about the originality of the DeepMind team are in line with Google's strategic decision-making team, GM Matthew Sadler, a Stanford University lecturer on deep learning, Goodfellow et al., and an Intel lecturer. Reinforcement learning and chess were studied before DeepMind, but the results were significantly less successful, and better hardware is far from being the only reason. If, for example, you had watched the video of the Stanford lecture I sent you, you'd notice another major innovation by DeepMind - they created their very own Python framework augmenting TensorFlow for creating complex neural network designs. https://github.com/deepmind/sonnet
But of course, that's probably not "innovation" in your eyes either. I mean, seriously, how can anybody be innovative these days with your definitions? According to your own study, there were roughly 20 attempts at deep-learning-type chess engines, only a few of them using reinforcement learning per se, and without significant success relative to old-fashioned engines. Does that mean DeepMind were not original when they:
Developed the programming framework used
Significantly refined and improved the NN architecture design
Significantly refined and improved the NN architecture implementation
Refined the classifiers
Refined the loss function
Refined and adapted the stochastic gradient descent to the specific problem
Improved on Q-learning implementation in general
(Some of their team members are internationally renowned for their work on reinforcement learning)
Improved on GPU utilization and energy consumption
Presented a clear case study of strategies
Integrated the whole thing into a working construct achieving amazingly strong chess play, possibly stronger than ever achieved
and made <countless> other small or big engineering/scientific improvements - those are, of course, not innovations, because NeuroChess did something in 1995 that slightly resembled the general design. Hey, it says "neural network"! It must be the same! (the last two sentences and the next one were said in sarcasm) A smart person once said, and I quote, <Haha LMAO>. |
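For reference, the tabular Q-learning that the refinements listed above build on fits in a dozen lines; the toy 5-state chain and the learning parameters are assumptions for illustration, not anything from DeepMind's papers:

```python
import random

random.seed(0)
N = 5                                # toy chain: states 0..4, reward at state 4
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; action 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                 # learning episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(N - 1)]
print(policy)   # the learned policy heads right, toward the reward
```

The strategy is never hand-crafted: it emerges from the update rule, which is the point MrMelad keeps making about AlphaZero, only with a deep network and self-play in place of this tiny table.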
|
Apr-18-19 | | MrMelad: Maybe one simply has to experience designing and debugging a neural network that <doesn't work> to appreciate such an incredible construct as AlphaZero... |
|
Apr-18-19 | | scholes: In 2014, 60% was the state-of-the-art accuracy on the cat/dog classification problem. Of course, by <AylerKupp>'s definition there has been no innovation in the last 25 years. If the progress of the last 5 years continues, then Leela will beat Stockfish even playing on the same CPU. It already beat it when they both played on GPUs. |
|
Apr-18-19
 | | keypusher: Sort of off topic, but I came across the following post from <zarg > written in early 2010: <
Jan-02-10 zarg: <Isbjorn: <zarg: Nevertheless, chess is way too complex to be solved with current computer technology, and to compensate for that, chess engines deploy a number of <cheats> to make them really strong. Instead of always calculating, engines use lookup tables, e.g. in the opening and in the endgame. That's nothing less than cheating; even an idiot can look up a predefined move, and <zero> calculation is required to do that.>
Have you tried playing a modern chess engine with opening book and endgame tables disabled?> No, but I've poked around in the Crafty source code and have a decent understanding of advances in computer HW/SW over the last 20 years. OTOH, I've played a lot against the earlier chess engines. In recent years, the same goes for the best bridge engine. With my regular (human) partner, that engine has little chance, mainly because our bidding system is superior, which is similar to having a far better chess opening book. Without such a devastating advantage, the way to win is to deploy a number of anti-engine strategies; the same goes for chess... However, chess is a "non-cooperative" and "perfect information" game, so it's much easier to program a strong chess engine, and thus modern chess engines are a far stronger opponent for a human. Besides, a lot of research has gone into chess programming. Hence, a modern chess engine has no problem these days beating a weak chess player like me; I've never been at GM level. That says nothing about what a super GM like Carlsen could do against it... <when> the engine has to calculate every move. <In my installation of Crafty 23.0, the opening book doesn't work, but the level of play is still much too strong for me (and probably any one human).> I might be wrong, but my impression from reading a number of posts by the engine experts here is that removing the opening book is worth at least pawn odds, and against a strong GM like Carlsen, that might simply be close to a lost game. How much damage to engine play would removing the EGTB lookups do? Again, my impression is that Carlsen would not be destroyed in the endgame then... assuming he drops his anti-human strategies, since they have no effect against a computer and come at a rather high cost. 
So unlike you, if these <cheats> were disabled and the engines really had to <calculate> every move, my guess is that Crafty (and even stronger engines) on a standard laptop would still have serious problems against the best humans like Carlsen.> I guess we'll never know, but it would be interesting to see whether a 2010-era engine would lose to a top human GM without an opening book or tablebase. (Maybe something like this was done?) One of the things that most impresses me about AlphaZero and other neural-network engines is that they don't have opening books or tablebases. |
|