Nov-18-21 | | MrMelad: <Albertan> Thank you for the link, very interesting |
|
Nov-20-21 | | Albertan: How AlphaZero learns Chess:
https://www.chess.com/news/view/how... |
|
Dec-13-21 | | Albertan: DeepMind makes bet on AI system that can play poker, chess, Go, and more: https://venturebeat.com/2021/12/08/... |
|
Dec-17-21 | | Albertan: Understanding AlphaZero’s Neural Network’s Superhuman Chess Ability: https://www.marktechpost.com/2021/1... |
|
Jan-08-22 | | Albertan: How the AI Revolution Impacted Chess: Part 1:
https://en.chessbase.com/post/how-t... |
|
Jan-14-22 | | Albertan: How the AI Revolution impacted Chess Part II:
https://en.chessbase.com/post/how-t... |
|
Feb-07-22 | | Albertan: Reimagining Chess With AlphaZero:
https://www.youtube.com/watch?v=M6r... |
|
Nov-09-22
 | | al wazir: AlphaZero plays chess at a (literally) superhuman level. Unlike earlier chess-playing applications, it wasn't programmed by humans, nor was it trained by exposing it to a myriad of top-level GM games. Instead, it *taught itself* to play. What I would like to know is: have neural networks been employed to generate chess *problems*? I have in mind two-movers, three-movers, etc., and endgame studies. For me the difference between an easy problem and a hard one shows up in the time it takes to solve it. For easy problems I can find the key in a few minutes. For hard ones it takes hours; sometimes I give up before finding it. It seems to me that, mutatis mutandis, a neural network could create chess problems of diabolical complexity -- problems that would take even the best solvers days to solve without resorting to computerized assistance. Has this been done? |
|
Nov-09-22
 | | beatgiant: <al wazir>
Here's a report in chessbase about that. https://en.chessbase.com/post/a-mac... |
|
Nov-09-22 | | Olavi: <al wazir> Problems that are that difficult to solve are not too difficult to compose with human power alone. But chess composing is an art form; difficulty of solution is unimportant, and indeed it is often detrimental to the quality of a problem. Art resonates with human emotions, and with the intellect too.
The things that Iqbal has been showing in his many articles (see <beatgiant's> link) are not what we call chess problems. |
|
Nov-14-22
 | | al wazir: <beatgiant: Here's a report in chessbase about that.> I found Iqbal's article disappointing. I hoped to see real chess problems composed by AI. The only composition shown (the miniature 3-mover at the top) is trivial. The examples in the earlier article he links to are better. |
|
Dec-01-22 | | stone free or die: Apparently neural networks get bored, the same as people do: <We have found many cases where its preferences are not stable over different training runs. We describe one such example in detail, a very important theoretical battleground in top-level human play.> https://www.pnas.org/doi/10.1073/pn... (Thanks to <CCastillo> for calling attention to that link over here: Vladimir Kramnik (kibitz #42554)) |
|
Feb-21-23
 | | Check It Out: Did Alpha 0 retire? Its last game listed here was in 2018. If it continued to "learn" over the past 5 years, its chess today should be even more mind-blowing. |
|
Feb-21-23 | | stone free or die: <<CIO> Did Alpha 0 retire?> I believe the answer is basically yes. They used it to attract media attention, and as a proof-of-principle demonstration of AI, before moving on. I guess AlphaZero was retooled to become the more general MuZero: https://venturebeat.com/ai/deepmind... I think you'll find a good sampling of DeepMind's latest research endeavors here: https://www.deepmind.com/research
It seems that protein folding is one of their current interests. I suppose the current AI chess work is the work done with Stockfish's hybrid model and Leela. But that's just my impression as of today. |
|
Feb-21-23
 | | fredthebear: As usual, sfod does not know, but Zappa is pretending to know. There's little reason for AlphaZero to play without a competitive opponent. https://en.chessbase.com/post/acqui... |
|
Feb-22-23 | | stone free or die: Thanks for playing, Fred...
Did I miss it, or does your link have absolutely no bearing on <CIO>'s question? |
|
Feb-22-23 | | Olavi: A close relative has come up against serious opposition: https://www.sciencetimes.com/articl... But it is a very different thing. |
|
Feb-22-23 | | stone free or die: <Olavi> Interesting to see humankind strike back (even if they did need a computer to help find AlphaGo's weakness!). I wonder if continued training would have found and fixed that weakness. As for there being little reason to continue training, has AlphaZero answered the question of whether or not White has a forced win yet? |
|
Feb-22-23 | | SChesshevsky: <I wonder if continued training would have found and fixed that weakness.> I have a strong suspicion that AI is mainly dependent on the power/efficiency of the hardware. AlphaZero, and I assume AlphaGo, apparently used brute-force computing power to, in essence, find what worked best in almost any situation. I'm guessing that it was also brute-force computing power that zeroed in on finding AlphaGo's weakness. Likely stronger power than the Alphas had. Of course, AI developers and hypesters aren't about to reveal that 80% (making up a number) of what they produce is due to the hardware. It's a good question. Would AlphaGo trained and running on today's top hardware be noticeably better than the previous version on its hardware? |
|
Feb-22-23
 | | beatgiant: <SChesshevsky> It's a point that's been much discussed here, and I've seen an opinion similar to this before. The AlphaZero algorithm is published in the literature, and anyone can read it and see it's not just the hardware, let alone 80% (by some reasonable measurement we'd agree on).
To be precise, it does take powerful hardware, but it's not just using it to achieve deeper complete lookahead (what we generally call "brute force"). If you could somehow reconfigure AlphaZero's hardware into an equivalent amount of hardware (again by some measurement we'd agree on) running the top pre-AlphaZero engine, it doesn't follow that that engine would then play like AlphaZero.
A good proof of that is the great success of AlphaGo in Go, a game that has a much bigger search space and more complex evaluation attributes than chess. Top-level play in that game was out of reach of engines before AlphaGo. |
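To make the contrast in that post concrete, here is a minimal Python sketch of the two search styles being discussed. It is only an illustration under simplifying assumptions: the toy game and the policy_net / value_net stand-ins are invented for the example and are not DeepMind's published code.

```python
# Rough sketch (not DeepMind's code) contrasting plain "brute force" lookahead
# with a policy/value-guided search in the AlphaZero style.
# Toy game: players alternately subtract 1-3 from a counter; taking the last one wins.
import random

random.seed(0)

def legal_moves(pos):
    return [m for m in (1, 2, 3) if m <= pos]

def play(pos, move):
    return pos - move

def brute_force(pos, depth):
    """Exhaustive fixed-depth minimax with a crude hand-written evaluation:
    extra hardware mainly buys more plies of lookahead."""
    if pos == 0:
        return -1                  # side to move has lost
    if depth == 0:
        return 0                   # crude static evaluation at the horizon
    return max(-brute_force(play(pos, m), depth - 1) for m in legal_moves(pos))

def policy_net(pos, move):         # stand-in for a learned move prior
    return random.random()

def value_net(pos):                # stand-in for a learned position value
    return 1.0 if pos % 4 != 0 else -1.0

def guided(pos, depth, top_k=2):
    """AlphaZero-flavoured selective search: the policy keeps only a few
    candidate moves and the value head replaces deep lookahead, so the
    strength lives largely in the learned networks, not in raw depth."""
    if pos == 0:
        return -1.0
    if depth == 0:
        return value_net(pos)
    moves = sorted(legal_moves(pos), key=lambda m: policy_net(pos, m),
                   reverse=True)[:top_k]
    return max(-guided(play(pos, m), depth - 1, top_k) for m in moves)

print(brute_force(10, depth=6), guided(10, depth=3))
```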
|
Feb-24-23
 | | Check It Out: Interesting feedback, thanks all. |
|
Oct-09-24
 | | BishopBerkeley: Sir Demis Hassabis (a co-founder of DeepMind (which developed AlphaZero) & former Chess prodigy) just won the Nobel Prize in Chemistry (shared), "for protein structure prediction" using AlphaFold2 (developed within Google DeepMind): https://www.nobelprize.org/prizes/c... Congratulations, Sir Demis! :) More: https://en.wikipedia.org/wiki/Demis... -- and, of course, his significant games! Demis Hassabis |
|
Jun-12-25 | | The Integrity: <In 2025, a new chess engine inspired by AlphaZero, built by Google DeepMind, named AZdb, is expected to be a major advancement in chess AI. AZdb is an ensemble system that combines multiple AlphaZero agents, enhancing its ability to generalize and perform in diverse chess situations. This approach aims to surpass the capabilities of the original AlphaZero, which already achieved superhuman chess performance through self-play reinforcement learning.
Key Features of AZdb:
Ensemble System: AZdb combines multiple AlphaZero agents into a "league" to improve its overall performance and ability to handle different chess situations.
Enhanced Generalization: The ensemble system is designed to improve the engine's ability to apply its knowledge to a wider range of chess positions and scenarios, according to the-decoder.com.
Reinforcement Learning: Similar to AlphaZero, AZdb utilizes self-play reinforcement learning to improve its chess skills and understanding of the game.
Superhuman Performance: AZdb aims to build upon the impressive chess performance achieved by AlphaZero, pushing the boundaries of what AI can achieve in chess> |
|
Jun-12-25 | | stone free or die: My ever-present refrain applies:
<Links, we need links!> OK, allow me - here's one:
https://the-decoder.com/google-deep... With this additional info:
<
However, difficult chess puzzles still baffle even the strongest chess AI systems, suggesting room for improvement. Researchers at Google DeepMind are now proposing to combine several different AlphaZero agents into an ensemble system, called AZdb, to further improve its capabilities in chess and beyond. AZdb combines multiple AlphaZero agents into a "league".
<AlphaZero agents are inspired by human collaboration>
Using "behavioral diversity" and "response diversity" techniques, AZdb's agents are trained to play chess in different ways. According to Google DeepMind, behavioral diversity maximizes the difference in average piece positions between agents, while response diversity exposes agents to games against different opponents. In practice, this also means that AZdb's agents will get to see many more different positions, expanding the range of in-distribution data, which should allow the system to better generalize to unseen positions. > |
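For a concrete picture of the "behavioral diversity" idea in that quote, here is a small illustrative Python sketch. It is only a guess at the flavor of the technique, not DeepMind's AZdb code: the occupancy maps and the diversity score below are invented for the example.

```python
# Illustrative sketch (my own, not AZdb's code) of a "behavioral diversity"
# score: how differently a league of agents places its pieces on average.
import numpy as np

def average_piece_map(sampled_positions):
    """sampled_positions: list of 8x8 arrays counting where an agent's pieces
    stood in sampled game positions. Returns the agent's average occupancy map."""
    return np.mean(np.stack(sampled_positions), axis=0)

def behavioral_diversity(agent_maps):
    """Mean pairwise distance between agents' average piece maps;
    a league would be trained to keep this score large."""
    score, pairs = 0.0, 0
    for i in range(len(agent_maps)):
        for j in range(i + 1, len(agent_maps)):
            score += np.linalg.norm(agent_maps[i] - agent_maps[j])
            pairs += 1
    return score / max(pairs, 1)

# Toy usage: two "agents" whose sampled positions are random occupancy counts.
rng = np.random.default_rng(0)
maps = [average_piece_map([rng.integers(0, 2, (8, 8)) for _ in range(50)])
        for _ in range(2)]
print(behavioral_diversity(maps))
```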
|
Jun-12-25 | | stone free or die: I admit I haven't been following these developments myself, as <AZdb> is news to me. But the decoder article was noted on <reddit> two years ago: https://www.reddit.com/r/TheDecoder... |
|