< Earlier Kibitzing · PAGE 53 OF 53 ·
Later Kibitzing> |
May-01-23
 | | alexmagnus: May:
European Union:
1. Firouzja 2785
2. Giri 2768
3. Rapport 2745
4. Vachier-Lagrave 2742
5. Topalov 2728
6. Duda 2724
7. Vallejo 2712
8-9. Deac 2700
8-9. Keymer 2700
10. Anton 2690
Former Soviet Union:
1. Nepomniachtchi 2794
2. Radjabov 2747
3. Grischuk 2745
4. Mamedyarov 2738
5. Abdusattorov 2731
6. Sjugirov 2712
7. Dubov 2708
8. Eljanov 2705
9. Artemiev 2701
10. Tomashevsky 2694
Former British Empire:
1. Nakamura 2775
2. Caruana 2764
3. So 2760
4. Anand 2754
5. Aronian 2745
6. Dominguez 2739
7. Gukesh 2732
8. Vidit 2731
9. Shankland 2711
10. Niemann 2708
Asia:
1. Ding 2789
2. Anand 2754
3. Maghsoodloo 2734
4. Gukesh 2732
5-6. Abdusattorov 2731
5-6. Vidit 2731
7. Yu 2729
8. Le 2728
9-10. Wang 2722
9-10. Wei 2722
Born later than the world champion:
1. Firouzja 2785
2. Giri 2768
3. So 2760
4. Rapport 2745
5. Maghsoodloo 2734
6. Gukesh 2732
7-8. Abdusattorov 2731
7-8. Vidit 2731
9. Yu 2729
10. Duda 2724
Nuclear powers:
1. Nepomniachtchi 2794
2. Ding 2789
3. Firouzja 2785
4. Nakamura 2775
5. Caruana 2764
6. So 2760
7. Anand 2754
8-9. Aronian 2745
8-9. Grischuk 2745
10. Vachier-Lagrave 2742
Old Guard:
1. Anand 2754
2-3. Aronian 2745
2-3. Grischuk 2745
4. Dominguez 2739
5. Topalov 2728
6. Vallejo 2712
7. Eljanov 2705
8. Sargissian 2692
9. Adams 2680
10. Gelfand 2678 |
|
May-01-23
 | | fredthebear: The ascension of the improbable new chess world champion Ding Liren altered that category. Ding Liren, age 30, is the highest-rated Chinese player ever and the first to play in the Candidates. That is a story in itself. Ding was the highest-rated blitz player in the world at 2875 in 2016. GM Ding's skill at speed chess served him well in the dramatic fourth and final rapid tiebreaker: with a mere minute remaining, he eschewed a certain draw by perpetual check to play for the win, and he did indeed win in dramatic fashion over Russia's Ian Nepomniachtchi. GM Ding had found himself in time trouble throughout the match. Magnus Carlsen of Norway ruled classical chess for 10 years, from 2013 to 2023. He is a five-time world champion. Carlsen, just days before his 23rd birthday, defeated Viswanathan Anand, and he defeated Anand again in a rematch in 2014. Carlsen defended his title against Sergey Karjakin in 2016, Fabiano Caruana in 2018, and Ian Nepomniachtchi in 2021. He declined to defend the title against Nepomniachtchi in 2023. |
|
May-02-23
 | | alexmagnus: <The ascension of the improbable new chess world champion Ding Liren altered that category.> Not by much, though, as the age difference between Carlsen and Ding is just two years. In the "born later" category, only three additional players would have slotted in among these ten if Carlsen were still champion: Ding himself, Caruana, and Le. |
|
Jun-01-23
 | | alexmagnus: June:
European Union:
1. Firouzja 2786
2. Giri 2772
3. Rapport 2752
4. Vachier-Lagrave 2747
5. Duda 2732
6. Topalov 2728
7. Vallejo 2706
8. Van Foreest 2695
9. Shevchenko 2694
10. Deac 2693
Former Soviet Union:
1. Nepomniachtchi 2779
2. Karjakin 2750
3. Radjabov 2747
4. Mamedyarov 2740
5. Grischuk 2736
6. Abdusattorov 2732
7. Dubov 2716
8-9. Sjugirov 2699
8-9. Artemiev 2699
10-11. Eljanov 2694
10-11. Tomashevsky 2694
Former British Empire:
1. Nakamura 2775
2. Caruana 2773
3. So 2765
4. Anand 2754
5. Aronian 2742
6. Dominguez 2739
7. Gukesh 2736
8. Vidit 2712
9. Shankland 2711
10. Harikrishna 2704
Asia:
1. Ding 2780
2. Anand 2754
3. Gukesh 2736
4. Yu 2734
5. Abdusattorov 2732
6. Le 2728
7-8. Wang 2722
7-8. Wei 2722
9. Maghsoodloo 2716
10. Vidit 2712
Born later than the world champion:
1. Firouzja 2786
2. Giri 2772
3. So 2765
4. Rapport 2752
5. Gukesh 2736
6. Yu 2734
7-8. Duda 2732
7-8. Abdusattorov 2732
9. Wei 2722
10-11. Maghsoodloo 2716
10-11. Dubov 2716
Nuclear powers:
1. Firouzja 2786
2. Ding 2780
3. Nepomniachtchi 2779
4. Nakamura 2775
5. Caruana 2773
6. So 2765
7. Anand 2754
8. Karjakin 2750
9. Vachier-Lagrave 2747
10. Aronian 2742
Old Guard:
1. Anand 2754
2. Aronian 2742
3. Dominguez 2739
4. Grischuk 2736
5. Topalov 2728
6. Vallejo 2706
7. Eljanov 2694
8-9. Svidler 2692
8-9. Sargissian 2692
10. Kasimdzhanov 2673 |
|
Jul-01-23
 | | alexmagnus: July:
European Union:
1. Firouzja 2777
2. Giri 2775
3. Rapport 2752
4. Vachier-Lagrave 2739
5. Duda 2732
6. Topalov 2727
7. Vallejo 2706
8. Shevchenko 2694
9-10. Van Foreest 2693
9-10. Deac 2693
Former Soviet Union:
1. Nepomniachtchi 2779
2. Karjakin 2750
3. Radjabov 2747
4. Mamedyarov 2742
5. Grischuk 2736
6. Abdusattorov 2725
7. Dubov 2716
8. Sjugirov 2699
9. Artemiev 2698
10-11. Eljanov 2694
10-11. Tomashevsky 2694
Former British Empire:
1. Nakamura 2787
2. Caruana 2782
3. So 2769
4. Anand 2754
5. Gukesh 2744
6. Aronian 2742
7. Dominguez 2739
8. Vidit 2719
9. Shankland 2711
10. Erigaisi 2710
Asia:
1. Ding 2780
2. Anand 2754
3. Gukesh 2744
4. Yu 2735
5. Le 2728
6. Wei 2726
7. Abdusattorov 2725
8-9. Maghsoodloo 2719
8-9. Vidit 2719
10. Erigaisi 2710
Born later than the world champion:
1. Firouzja 2777
2. Giri 2775
3. So 2769
4. Rapport 2752
5. Gukesh 2744
6. Yu 2735
7. Duda 2732
8. Wei 2726
9. Abdusattorov 2725
10-11. Maghsoodloo 2719
10-11. Vidit 2719
Nuclear powers:
1. Nakamura 2787
2. Caruana 2782
3. Ding 2780
4. Nepomniachtchi 2779
5. Firouzja 2777
6. So 2769
7. Anand 2754
8. Karjakin 2750
9. Gukesh 2744
10. Aronian 2742
Old Guard:
1. Anand 2754
2. Aronian 2742
3. Dominguez 2739
4. Grischuk 2736
5. Topalov 2727
6. Vallejo 2706
7. Eljanov 2694
8. Sargissian 2692
9. Svidler 2688
10. Kasimdzhanov 2673 |
|
Aug-01-23
 | | alexmagnus: August:
European Union:
1. Firouzja 2777
2. Giri 2769
3. Rapport 2752
4. Vachier-Lagrave 2739
5. Duda 2732
6. Topalov 2727
7. Vallejo 2706
8. Keymer 2701
9. Deac 2698
10. Van Foreest 2697
Former Soviet Union:
1. Nepomniachtchi 2779
2. Karjakin 2750
3. Mamedyarov 2747
4. Grischuk 2736
5. Abdusattorov 2725
6. Dubov 2716
7. Sjugirov 2705
8. Artemiev 2698
9. Eljanov 2695
10. Tomashevsky 2694
Former British Empire:
1. Nakamura 2787
2. Caruana 2782
3. So 2769
4. Anand 2754
5. Gukesh 2751
6. Aronian 2742
7. Dominguez 2739
8. Vidit 2723
9. Harikrishna 2711
10. Shankland 2708
Asia:
1. Ding 2780
2. Anand 2754
3. Gukesh 2751
4. Le 2740
5. Wei 2726
6. Abdusattorov 2725
7. Vidit 2723
8. Yu 2721
9. Harikrishna 2711
10. Praggnanandhaa 2707
Born later than the world champion:
1. Firouzja 2777
2-3. Giri 2769
2-3. So 2769
4. Rapport 2752
5. Gukesh 2751
6. Duda 2732
7. Wei 2726
8. Abdusattorov 2725
9. Vidit 2723
10. Yu 2721
Nuclear powers:
1. Nakamura 2787
2. Caruana 2782
3. Ding 2780
4. Nepomniachtchi 2779
5. Firouzja 2777
6. So 2769
7. Anand 2754
8. Gukesh 2751
9. Karjakin 2750
10. Aronian 2742
Old Guard:
1. Anand 2754
2. Aronian 2742
3. Dominguez 2739
4. Grischuk 2736
5. Topalov 2727
6. Vallejo 2706
7. Eljanov 2695
8. Sargissian 2692
9. Svidler 2688
10. Kasimdzhanov 2673 |
|
Aug-22-23 | | Rdb: Hello, <alexmagnus>. I recall that you completed a PhD in mathematics many years ago. Correct? Thank you. Regards |
|
Aug-23-23
 | | alexmagnus: Not a PhD, just a master's (to be more precise, a diploma from before the bachelor/master system was introduced in Germany, but that is equivalent to a master's). |
|
Aug-23-23 | | Rdb: OK. Thanks for the response, <alexmagnus>. Regards
2) Can I ask you, if it is not intrusive, what career you chose after your master's in maths? Thank you.
Regards |
|
Aug-24-23
 | | alexmagnus: I actually couldn't find any job after my master's because I lacked programming knowledge, and most offers came from IT companies (banks and insurance companies are another common field for mathematicians, but my financial math was not good at all). So I went on to study again, earning a bachelor's in computer science. I have been working as a software developer since. |
|
Aug-25-23 | | Rdb: <alexmagnus: I actually couldn't find any job after my master's because I lacked programming knowledge, and most offers came from IT companies (banks and insurance companies are another common field for mathematicians, but my financial math was not good at all). So I went on to study again, earning a bachelor's in computer science. I have been working as a software developer since.> Thank you for sharing, <alexmagnus>. Regards. |
|
Aug-27-23 | | Rdb: <alexmagnus>, I have heard that machine learning/artificial intelligence requires a deep understanding of all of MSc maths (master-level maths). I was hoping that you could be in that field and give me some insights into how advanced maths is essential for machine learning/artificial intelligence. Thank you. Regards |
|
Aug-27-23
 | | alexmagnus: <I was hoping that you could be in that field and give me some insights into how advanced maths is essential for machine learning/artificial intelligence.> During my CS studies I had only some basics of neural networks. The math there was advanced but not super advanced; about halfway through the BSc study would be enough. But it was a "basic" NN course (in the end we programmed a robot that could navigate one very specific labyrinth by having learned the path through training its NN), so I have no idea how it is in the case of, say, deep learning. |
|
Aug-28-23 | | Rdb: Thanks a lot for the response, <alexmagnus>. I intend to start studying for an MSc in maths soon, so that is the context of my interest in this conversation. 2) I have something more to say about this and intend to post it here in your forum within the next few days (if not today), if you please. Thank you 🙏🙏 |
|
Sep-01-23
 | | alexmagnus: September:
European Union:
1. Firouzja 2777
2. Giri 2760
3. Rapport 2752
4-5. Vachier-Lagrave 2727
4-5. Topalov 2727
6. Duda 2726
7. Keymer 2717
8. Vallejo 2712
9. Sjugirov 2705
10. Deac 2702
Former Soviet Union:
1. Nepomniachtchi 2771
2. Karjakin 2750
3. Radjabov 2745
4. Mamedyarov 2734
5. Grischuk 2732
6. Abdusattorov 2716
7. Dubov 2710
8. Artemiev 2697
9. Eljanov 2695
10. Tomashevsky 2694
Former British Empire:
1. Caruana 2786
2. Nakamura 2780
3. Gukesh 2758
4. Anand 2754
5. So 2753
6. Dominguez 2745
7. Aronian 2742
8. Praggnanandhaa 2727
9. Vitiugov 2719
10. Vidit 2716
Asia:
1. Ding 2780
2. Gukesh 2758
3. Anand 2754
4. Le 2733
5. Praggnanandhaa 2727
6. Wei 2726
7. Yu 2720
8-9. Vidit 2716
8-9. Abdusattorov 2716
10. Erigaisi 2712
Born later than the world champion:
1. Firouzja 2777
2. Giri 2760
3. Gukesh 2758
4. So 2753
5. Rapport 2752
6. Praggnanandhaa 2727
7-8. Duda 2726
7-8. Wei 2726
9. Yu 2720
10. Keymer 2717
Nuclear powers:
1. Caruana 2786
2-3. Nakamura 2780
2-3. Ding 2780
4. Firouzja 2777
5. Nepomniachtchi 2771
6. Gukesh 2758
7. Anand 2754
8. So 2753
9. Karjakin 2750
10. Dominguez 2745
Old Guard:
1. Anand 2754
2. Dominguez 2745
3. Aronian 2742
4. Grischuk 2732
5. Topalov 2727
6. Vallejo 2712
7. Eljanov 2695
8. Sargissian 2692
9. Svidler 2689
10. Kasimdzhanov 2673 |
|
Sep-06-23
 | | alexmagnus: By the way, at the beginning of October I won't be here, so for those few who actually read my lists: the October list will only appear here sometime in mid-October. Even though you can "compose" it yourselves, I'll still post it here mid-October for archive purposes. |
|
Sep-07-23
 | | OhioChessFan: I read them, occasionally comment, always appreciate. Thanks for doing it. |
|
Sep-07-23
 | | Check It Out: Hi <alexmagnus>, I just noticed your lists; interesting. Do you now mean older than WC Ding Liren? Why do you skip a list with Carlsen? Lumping India and the US together as former British colonies gave me a chuckle. Anyway, no criticisms, just curious. |
|
Sep-08-23
 | | alexmagnus: Old guard is everyone born in (current year-40) or before, that is, currently 1983 or before. |
|
Sep-09-23
 | | Check It Out: Oh. I meant the list, "born later than the world champion" |
|
Sep-11-23
 | | alexmagnus: That's players younger than the world champion, not older, see the list itself. Carlsen somehow manages to escape all of my lists. He could have ended up on the "born later" list if Nepo had won either of his world championship matches. And Carlsen being Carlsen, he will probably retire before he can make it to the Old Guard list. |
|
Sep-12-23
 | | OhioChessFan: I'm surprised Carlsen continues to play as much as he has. |
|
Sep-23-23 | | Rdb: https://www.quora.com/Why-is-Machin... <It’s safe to say that if you intend to be a creator of ML research rather than a consumer, you had better get used to devoting a large part of your life to understanding advanced math. I speak from experience: I’m close to inaugurating my 40th year of my career as an ML researcher, and it has only gotten harder to master the necessary math. I don’t mean the kid stuff, like basic linear algebra, multivariate calculus, optimization, probability and statistics. This level of math is simply the price of admission into the field. This gets you into the cheap seats in the ML research stadium. You’ll still struggle to understand the recent advances in the field. You want to really push the frontier in the field? Start getting used to reading graduate-level math textbooks. Here’s a short sampling of more advanced math that is increasingly proving essential to many subfields in modern ML. I’ll briefly explain why each of these subfields is useful, so you’ll get to understand why.
Topology: Topology is the intrinsic study of the “abstract” shape of a space. Physicists classify particles. Mathematicians classify spaces. Topology is the tool par excellence for this purpose. Topological data analysis is one of the fastest-growing areas and is likely to be widely applied in many areas, from bioinformatics to digital marketing. I find topology to be incredibly elegant: it is beautiful to see concepts like continuity formalized using the topology of open sets. Calculus seems pedestrian and ugly in comparison. Topology is also at the heart of probability and statistics. Finally, the most important theorem in optimization — the Hahn–Banach theorem — is a simple application of topology, and among its many applications are duality theory, Lagrange multipliers, and the result that feedforward neural nets can represent any smooth function.
Measure theory: one of the bedrocks of machine learning is the idea of computing distances between objects — text documents, images, DNA sequences, probability distributions, and policies in reinforcement learning — and measure theory is the subfield of mathematics that distills the essence of how to define distances. Probability was axiomatized by the Russian mathematician Kolmogorov using measure theory nearly 100 years ago and it forms the foundation of statistics. Probability distributions are defined using Lebesgue measures. The theory of integration in measure theory is the basis for defining expected values of a random variable and for defining conditional expectation and regression.
Combinatorics: one of the oldest fields in discrete math useful in counting the number of objects parameterized by some variable n that usually grows asymptotically, combinatorics yields deep insights into many problems in machine learning. The theory of which functions are learnable in polynomial time, Vapnik-Chervonenkis (VC) theory at the heart of support vector machines, builds on combinatorial results of how many functions can be defined on a space. I recently built on some beautiful results on how many partial orders can be defined on n variables. Amazingly, asymptotically, all partial orders are almost surely comprised of three layers — did you know that?
Category theory: much of the edifice of 20th century mathematics has been built on category theory, which fundamentally changed mathematics from being a study of sets and their elements to the study of morphisms and functors. It is likely that in the next 50 years, some of the most profound advances in machine learning will come from applying category theory, which finds deep unifying structure among otherwise unrelated spaces. It is not inconceivable that deep within the brain, categorical concepts are implemented in some way. Category theory alas is forbiddingly hard, but hey, who said climbing K2 was going to be easy? The most famous mathematician of the 20th century, according to many mathematicians, was Grothendieck, who did much of his work on category theory.
Sheaf theory: the theory of sheaves might also prove extremely useful in machine learning, replacing the current widespread use of manifold learning. Manifolds are a special case of sheaves. What’s a sheaf? Think of a sheaf as a graph that is adorned with some data on its vertices and edges. A weighted graph is a trivial example. Imagine each vertex of the graph having an associated vector space called a stalk, and each edge connecting two vertices being associated with linear transformations that ensure local consistency ....> |
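The "computing distances between objects" idea from the measure-theory paragraph in the quote above can be made concrete with a tiny sketch. The two discrete distributions below are invented purely for illustration; total variation and Kullback-Leibler are two standard ways of formalizing "distance" between probability distributions:

```python
import numpy as np

# Two discrete probability distributions over the same four outcomes
# (the numbers are made up for illustration)
p = np.array([0.1, 0.4, 0.4, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])

# Total variation distance: half the L1 distance between the mass
# functions. This is a genuine metric on distributions.
tv = 0.5 * np.abs(p - q).sum()

# Kullback-Leibler divergence KL(p || q): not symmetric, so not a
# metric, but a workhorse "distance" in statistics and ML.
kl = np.sum(p * np.log(p / q))

print(tv)  # ≈ 0.3
print(kl)  # ≈ 0.193
```

Both quantities are expectations in disguise, which is exactly where the Lebesgue integration mentioned in the quote enters the picture.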
|
Sep-24-23 | | Rdb: https://www.quora.com/How-important... <How important is a strong mathematical background in machine learning?
Mathematics is the language that modern machine learning is built on, so trying to do machine learning without knowledge of math is like trying to play classical music without being able to read scores, i.e. not very well. In essence, machine learning is about extracting structure from data: finding the patterns amongst a great deal of noise. Nothing prepares you better for a lifetime of doing machine learning than a deep knowledge of math. In fact, one can turn this question on its head: often students who took my graduate ML class at UMass came to me and marveled at how they finally understood what the linear algebra or calculus or statistics class they took eons ago was trying to teach them! The deeper one gets into a problem, the more math seems necessary to formulate and solve a machine learning task. Let’s consider an example, to make this more concrete. A fundamental challenge for machine learning is how to transfer the results of learning from one setting to another. Humans seem to be able to do this fairly effortlessly, often to a level that befuddles any machine learning approach. I just returned from a trip to Norway and Sweden. I’d never been to either country before my trip. In essence, I had no training data of actually having been in either country. Did that pose a problem? Not an iota. I just applied basic knowledge of how to get around cities or airports from past experience. We do this as humans as easily as we breathe. Most machine learning approaches, from au courant deep learning to good old-fashioned classical Gaussian least-squares regression, will choke if the test data (e.g., taking a train from the airport in Stockholm or Oslo to the city center) differs from the training data (e.g., prior experience of taking trains from San Francisco airport to the city center). The reason: the statistics are entirely different. The distribution of features in Sweden or Norway is very different from San Francisco.
So, how does one solve the problem of transfer learning? Let’s imagine a very basic idea of computing some aggregate statistics of the source dataset (e.g., for every feature, compute its correlation against all other features) and doing the same for the target dataset. It might be helpful to recall that the source dataset is labeled but the target dataset is not (that is, from our prior experience, we know what the signs mean in San Francisco since we speak English, but we can’t interpret the signs in Stockholm or Oslo since we don’t speak Swedish or Norwegian). So, we are in essence computing the covariance matrix of the source dataset and the covariance matrix of the target dataset. This matrix summarizes the second-order statistics of the data. Under the simplifying assumption that the second-order statistics are all that matter, in other words that the distribution of features is Gaussian, we can now proceed to find a solution. The source and target covariance matrices are going to be different in transfer learning since the data are coming from two different multivariate Gaussian distributions. So, any approach like using Bayesian ML to find a classifier that labels source data will fail in the target domain since the covariances have changed. Even a method like support vector machines, which finds the best hyperplane that separates positive from negative examples in the source domain, cannot be applied in the target domain since the data have shifted unpredictably in the space of instances. OK, here’s a simple trick developed in a popular algorithm called CORAL (for correlational alignment). Find the matrix A such that the source covariance Σs can be transformed into something that matches the target covariance Σt. In the language of linear algebra, this amounts to solving the optimization problem (pardon the LaTeX): min_A ‖Aᵀ Σs A − Σt‖²
It ...> |
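For what it's worth, the CORAL objective described in the quote above (find the A that transforms the source covariance into the target covariance) admits a well-known closed-form solution: whiten with the inverse square root of the source covariance, then re-color with the square root of the target covariance. A minimal numpy sketch, with toy data of my own invention rather than anything from the quoted answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def psd_power(M, p):
    # Fractional power of a symmetric positive-definite matrix via its
    # eigendecomposition: M^p = V diag(w^p) V^T
    w, V = np.linalg.eigh(M)
    return (V * w**p) @ V.T

# Toy source/target data with different feature correlations
# (invented purely for illustration)
mix = np.array([[1.0, 0.8, 0.0],
                [0.0, 1.0, 0.5],
                [0.0, 0.0, 1.0]])
Xs = rng.normal(size=(500, 3)) @ mix   # correlated "source" features
Xt = rng.normal(size=(500, 3))         # uncorrelated "target" features

eps = 1e-6  # small ridge so both covariances are safely invertible
Cs = np.cov(Xs, rowvar=False) + eps * np.eye(3)
Ct = np.cov(Xt, rowvar=False) + eps * np.eye(3)

# Closed-form minimizer of the CORAL objective: A^T Cs A = Ct exactly,
# so the objective value is (up to the ridge) zero.
A = psd_power(Cs, -0.5) @ psd_power(Ct, 0.5)
Xs_aligned = Xs @ A

# After alignment, the source covariance matches the target covariance
print(np.allclose(np.cov(Xs_aligned, rowvar=False), Ct, atol=1e-2))  # True
```

The alignment is exact because sample covariance transforms as cov(XA) = Aᵀ cov(X) A, so any labeled classifier trained on `Xs_aligned` sees target-like second-order statistics.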
|
Sep-25-23 | | Rdb: Like I had promised, <alexmagnus>. Any inputs, if you please?
Thank you. Regards 🙏 🙏 |
|