Just look at chess. The top players today are far better than any of the greats before them, because they can train against computers and know exactly where they failed. That said, because they've gotten so good, chess at the top level is pretty boring... it's hard to come up with a unique strategy, so players tend to be defensive. Lots of draws.
On the other hand, chess is more popular than ever. It's huge in high schools. I see people playing it everywhere. I know that for me, I love being able to play a game and then view the computer analysis afterwards and see exactly what I did wrong (granted, sometimes a move can be good for a computer who will know how to follow through on the next 10 moves, but not necessarily good for me... but most of the time I can see where I made a mistake when the computer points it out).
Side note: I play on LIChess and it's great. Is there an equivalent app for Go?
The defensiveness has absolutely nothing to do with better computers and the improvements in play that came with them, but with tournaments where risk-taking is an economic disaster. As others have said, there aren't massive numbers of draws in the Candidates tournament, because the difference in value between being first and second is so massive that if you aren't first, you are last.
Compare this to regular high-level chess in the Grand Chess Tour: it's where most of your money is going to come from if you are a top player. Invitation to the tour as a regular is by rating, and there's enough money at the bottom of the tour that the difference between qualifying or not is massive. Therefore, the most important thing is to stay on the tour. Lose 20 points of rating, and barring Rex Sinquefield deciding to sponsor your life out of the goodness of his heart, you might as well spend your time coaching, because there are so few tournaments where there's a lot of money.
This also shows in the big difficulties for youngsters who reach 2650 or so: they are only going to find good enough opponents to move up quickly in a handful of events a year where higher-rated players end up risking their rating against them. See how something like the US Championship is a big risk for the top US professionals, because all the young players who show up are at least 50 points underrated, if not more.
This is what causes draws, not computer prep. Anand was better at just drawing every game in every tournament back when he was still on the tour, and yet computers were far worse than today, especially with opening theory.
And it simply doesn't have to be this way. The top tournaments could just use a prior qualification tournament with an open Swiss, then invite the top finishers from the open Swiss to participate in the round robin. They can reserve an invitational wildcard or two, but the rest should have to earn their place.
I think that tennis solved the problem by not using an Elo-based score, but by giving points for the number of rounds a player wins in a tournament. The most important tournaments give more points. All points expire after one year. Of course tennis and chess differ in a fundamental way: there are no draws in tennis, and tournaments are basically never round robins. The ATP Finals have a round-robin group stage before the semifinals; they give points for the wins.
So maybe in chess they could give points for each win, less than half of those points for a draw, zero for a loss.
Tradition is very important, so they should keep the Elo rating and keep updating it according to who wins against whom, but qualification for tournaments and seeding (if that's a thing in chess) would be based on the other score. There could be wild cards to let some strong or popular players play even if they don't have a good score. Tennis pro associations have provisions in place for players who are forced to miss tournaments because of injuries, etc.
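For reference, the Elo update they're describing is a one-liner. The K-factor and ratings below are just illustrative numbers I picked, not FIDE's exact parameters for any particular player:

```python
# Standard Elo update: compute the expected score, then adjust the
# rating by K times the gap between the actual and expected result.

def elo_expected(r_a, r_b):
    # Expected score of player A against player B (between 0 and 1).
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, result_a, k=20):
    # result_a: 1 for a win, 0.5 for a draw, 0 for a loss.
    return r_a + k * (result_a - elo_expected(r_a, r_b))

# A 2700 beating a 2650 gains under 9 points...
print(round(elo_update(2700, 2650, 1.0), 1))
# ...but losing to them costs more than 11, which is exactly the
# asymmetry that makes top players risk-averse against lower-rateds.
print(round(elo_update(2700, 2650, 0.0), 1))
```

This asymmetry is the comment's point about youngsters at 2650: the higher-rated player has more rating to lose than to gain in every pairing.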
So they need to mandate a promotion and relegation system for the top levels. Force players in the top flight to beat at least some of their opponents, or get replaced by top players in the next lower tiers.
I think that would increase spectator interest even more. In football, relegation battles can be almost as compelling as the title race.
FWIW, I find the classical chess tournaments with the super GMs to be fairly interesting, if only because the focus of the games is more about the metagame than about the game itself.
The article linked at the bottom of the source is a WSJ piece about how Magnus beats the best players because of the "human element".
A lot of the games today are about opening preparation, where the goal is to out-prepare and surprise your opponent by studying opening lines and esoteric responses (an area where computer analysis has drastically opened up new ground). Similarly, during the middlegame and endgame, the best players will try to force uncomfortable decisions on their opponents, knowing which positions their opponents tend not to prefer. For example, in round 1 of the Candidates, Fabiano took Hikaru into a position with very little in the way of aggressive counterplay, effectively taking away a big advantage that Hikaru would otherwise have had.
Watching these games feels somewhat akin to watching generals develop strategies to outmaneuver their counterparts on the other side, taking into consideration their strengths and weaknesses as much as the tactics, deployment of troops, etc.
I think you would see fewer draws if players got 0.2 points each for a draw instead of 0.5.
It lowers the cost of a risky strategy (you drop only 0.2 points instead of 0.5 versus taking an easy draw) and it makes the rewards much greater... a single win and four losses scores the same as five draws.
You won't see players making intentional draws anymore, either.
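The arithmetic behind this is easy to check; a quick sketch with the point values proposed above:

```python
# Tournament score under a given draw value. Win = 1, loss = 0.

def score(wins, draws, losses, draw_value):
    return wins * 1.0 + draws * draw_value + losses * 0.0

# Classical scoring: five draws beat one win and four losses.
print(score(0, 5, 0, draw_value=0.5))  # 2.5
print(score(1, 0, 4, draw_value=0.5))  # 1.0

# With draws worth 0.2, the two records score the same.
print(score(0, 5, 0, draw_value=0.2))  # 1.0
print(score(1, 0, 4, draw_value=0.2))  # 1.0
```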
One issue with this is that it encourages collusion. If you're a top GM playing someone of equal skill, it's +EV to agree to flip a coin beforehand to determine who will win (and then play a fake game) rather than playing it for real.
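A back-of-the-envelope expected-value check of that collusion incentive (the win/draw probabilities here are made up purely for illustration, assuming two evenly matched players who would draw most honest games):

```python
# Per-game expected points for evenly matched players under a given
# draw value: honest play vs. a pre-arranged coin flip.

def honest_ev(p_win, p_draw, draw_value):
    # p_win / p_draw: chances of winning / drawing an honest game.
    return p_win * 1.0 + p_draw * draw_value

def coinflip_ev():
    # Flip a coin beforehand; the "winner" takes the full point.
    return 0.5 * 1.0 + 0.5 * 0.0

# Say each player wins 20% and draws 60% of honest games:
print(honest_ev(0.2, 0.6, draw_value=0.5))  # 0.5: no edge in colluding
print(honest_ev(0.2, 0.6, draw_value=0.2))  # about 0.32: the coin flip pays more
print(coinflip_ev())                        # 0.5
```

With draws at 0.5, honest play and the coin flip are worth the same, so there's no incentive to cheat; drop the draw value below 0.5 and the fake coin flip strictly dominates for drawish matchups.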
Some chess tournaments have experimented with giving 1/3 point for draws instead of 1/2 and it didn't really change much. Mostly it acted as a tiebreaker, which you could have done by just using "most wins" as a tiebreaker anyway.
My favorite idea (not mine) for creating decisive results in chess is that when a draw is agreed, you switch sides and start a new game, but don't reset the clocks.
Another possible solution would be to simply... remove draws from the game. Instead of checkmating the goal becomes to capture the opponent's king.
Needless to say, no one likes this idea because it throws out of the window centuries of game theory. Endgames would be completely different. I'm not convinced it would be a less interesting game, though.
https://online-go.com/ is the easiest place to get started as a western beginner. The far more active go servers are Asian and have a higher barrier to entry in terms of registration, downloading the client, and dealing with poor localization. (Fox Weiqi, Tygem, etc.)
Second round of the Candidates tournament played Friday had 4/4 decisive games[1]. In general, a tie might be the most common result but even at the highest level there tend to be chances for both sides.
It's really up to the players. SuperGMs these days are somewhat addicted to draws because it's a very safe result in a tournament setting and in terms of rating. Therefore these players tend to favour less risky and more calculable openings. They care more about avoiding a loss than they do about winning.
The idea that the large number of draws is because players are so strong now is mostly a myth. It's really just psychology and game theory at work.
For a perfect illustration of all my points, look at Aronian vs Grischuk from the 2018 candidates tournament. Here both players chose to play into complications, and the resulting game was wildly complex, with both players making several suboptimal moves simply because the position was just too complex even for two of the strongest calculators in the game at the time.
And in the end, they still ended up constructing a draw by repetition when all 3 results were still possible. Both players had good winning chances, yet the fear of losing finally overtook them and they collectively bailed out of the game.
It's not that players are now so strong it's almost impossible to win, the players just aren't as willing to seek out the necessary positions.
One nice thing about Go is that there are no ties. This is offset by how boring the endgames are, though, and by having to count. Chess has explosive and exciting endings; Go just kind of fizzles out at some point.
I recommend GoQuest (a mobile app) and playing 9x9 Go. I used to play on KGS, but it is less crowded now. The problem is that there are too many servers (OGS, IGS, Tygem, WBaduk, etc.) and no one dominates, so you wait for a game, you need a rating, and so on. Most are not very modern, are mobile-unfriendly, etc. Also, 19x19 takes too much time for me compared to chess; 9x9 is perfect, and GoQuest has many active players: after a few seconds you get a match (they offer 13x13 and 19x19, but those are less active, I suppose).
> Just look to Chess. The top players today are way better than any of the greats before, because they can train against computers and know exactly where they failed.
AlphaGo isn't available for anyone to train against like Stockfish is though, what are Go players using? Has another powerful Go engine been developed since then?
We use KataGo and sometimes Leela Zero (a replication of the AlphaGo Zero paper). KataGo was trained with more knowledge of the game built in (feature engineering and loss engineering), so it trained faster. It was also trained on different board sizes and to play for a good result when it's already behind or ahead.
> That said, because they've gotten so good, chess at the top levels is pretty boring
Yeah, I feel the same about Magic formats when the pros play. When a format is new and people are still discovering it, they have to rely on their gut and make educated guesses. That's when it's fun to play and watch.
Back when I was a kid learning Go, I was taught that the kick joseki (a standard sequence of moves, similar to a chess opening) [1] was a bad move, and you were considered to be trolling (and the teacher would not be pleased) if you played a 3-3 invasion [2] during the opening phase. These have all been vindicated thanks to AI and are played pretty commonly nowadays. AI definitely helped eliminate a lot of dogma and myth in Go.
The 3-3 invasion takes territory at the expense of influence (future potential).
I think I improved a lot when I stopped playing 3-3s (it opened up a different style of game for me).
Newbies love the 3-3 (I definitely did), because it's a simple and familiar move. Especially at the start of the game, when the board is empty, there are a gazillion possibilities, most of them unknown and possibly risky.
Even without discouraging the 3-3, I would still recommend starting without it, to learn that other way of playing (if for nothing else, to deal with 3-3 invasions).
Agreed. With a 3-3 you are trading away influence in favour of hard territory. AI is happy to do that very early because AI can effectively destroy influence. Human players need to learn to enter 3-3 at "the last possible moment". That requires judgement.
The other possibility is that it destroyed the incidental dogma that tends to build up in these types of games and human activities. This is why I like the "hacker ethos" as much as I do, it tends to eschew things like "accepted" dogma in order to find additional performance that other people were just leaving on the table out of polite comfort.
The dogma generally becomes accepted because it outperforms other known strategies. In a game like Go, that could previously take a while because there are so many possible follow-ups that it takes time to accumulate enough data on whether a new strategy is actually decisively better, or just worse but over-performing because it's less known.
There's a big difference between those two and "the hacker ethos" will lead to a lot of the latter. However, now computers can simulate enough games to give a relatively high degree of confidence that a variation in strategy is truly better.
I don't know how it's developed since, but from what I remember that was how it started - the AIs weren't following the standard moves (joseki) that we'd built up over centuries and human players were thrown off by the nonstandard responses that were working better than expected.
So the progress of human proficiency in Go, and our collective advancement over time, has been hindered by dogmatic rules introduced over time. These rules predispose players toward specific strategies and consequently limit the scope of our creative potential within the game. In contrast, AI algorithms, which operate without such biases, offer a unique advantage in overcoming these limitations. They essentially inspire us to break out of established patterns (or local optima) of play and broaden the range of our strategic moves.
This is the tip of the iceberg, right? It's foreshadowing AI helping experts become better. I can see it happening in a lot of creative fields, including software. Perhaps this is where it really separates the experts from the juniors, because only experts will be able to judge whether the AI has helped them create something actually good.
Go is a constructed game with a precise definition of the rules and victory.
The real challenge with AI helping experts is whether it can correctly help them balance their own value function for what "better" means. And whether we can still train human experts who can think about that independently with good judgement, if we've automated away the things that beginners would normally do to train their judgement with black boxes that they can't interrogate.
I'd say GPT4 has definitely helped me become a better programmer - I'm able to ask it questions, learn how I can refactor my code better, or approach a problem in a way I might not have considered.
It does hit its limits, but it's been so useful - it's a funny cycle of training AI and having it train us, a great symbiotic relationship.
Maybe I'm too small minded but I would love to see AI like this enhance...well, the AI in games in general. I long for the day where I no longer play Civilization or an RTS against AI that has perfect knowledge, or is given handicaps to allow it to be competitive.
Flight simulators are set to benefit from AI. Imagine talking to an AI Air Traffic Control that understands natural language. Imagine walking down the aisle of your plane and overhearing AI people's conversations.
> It's foreshadowing AI helping experts become better.
The humans are still way worse than the Go programs. People are still willing to pay them to play a game as entertainment. Are lots of people willing to pay you to do whatever it is you do even when AIs do it much better, out of sheer entertainment value & sentimentality? If they are willing to pay you in particular, how many other people like you are they also willing to pay for, and is that number much greater than or much less than the current number paid to do it?
People are also cheering for Usain Bolt or whoever is the speediest runner this year, in spite of being able to outrun him by simply getting into a car...
It's exactly like the invention of agriculture. Not having to hunt for food gave more opportunities for intellectual pursuits because of having more free time.
I'm skeptical of this argument. It gave free time to some people i.e. the landed gentry but also created the toiling peasants and a hierarchical civilization.
When you read Go strategy resources, you see a lot of things divided into what best practices were before AlphaGo and what they are now. It's a whole big thing.
It is still the case, though, that AI dominates humans at Go; humans didn't get so creative about the game that they put AI back on its toes (though some did discover exploitable AI "strategy bugs").
The "strategy bugs" are a symptom of a more general shortcoming and why 2024 AI is still basically dumber than a mouse.
Keep in mind that if you had a variation of Go where there was a "hole" in the middle of the board, both Lee Sedol and a competent amateur would be able to play competent "Doughnut Go" without any prior experience. But AlphaGo and its successors would certainly make a ton of dumb unforced errors unless they practiced at least a few hundred games. (I am basing this observation on similar experiments with a Breakout AI; I'm not sure whether these experiments have been done with Go.)
Mammals, including humans, have advanced brains because we evolved to solve weird and unexpected problems with moderate reliability, not to optimize well-known benchmarks with high reliability. (This is also why plants are green instead of black.) By contrast, AlphaGo is a machine designed to solve a highly specific problem. The whole point of machines is that they dominate humans at specific tasks, otherwise we would just use a human. But we don't describe bulldozers as "superhuman" unless we're being intentionally obscure; the same should apply to AI. Otherwise we risk assuming the AI is capable of things it probably can't do without retraining.
Agreed, but I still think humans should get a little more credit for winning against AI, no matter how. It's a competitive game with very simple and clear rules. A hole in the AI's strategy is a hole, even if it's quickly patched!
I am still so impressed that Lee Sedol beat AlphaGo in 1 game out of 5 back when AI made its breakout. I was sad he felt so sheepish afterward for losing. In hindsight, I think it was an amazing accomplishment, even if today an AI could beat Shin Jin-seo (the #1 player) 100 times out of 100!
When I was playing seriously there were strong players who played a ton over a board and had deep intuition about what made plays good and what made plays bad. In the late 1990s/ early 2000s there started to be a lot more in the way of computer simulation and analysis and some very strong computer players.
One (general) example was that older players liked the idea of making longer plays using more tiles to "win" a race to the S and blank tiles (the best tiles in the bag). Computer simulations generally show that turnover (as this is called) isn't optimal and you're better off holding strong combinations of letters rather than playing them off hoping to draw something better.
Now younger players are better than ever because all of their training came with the help of computer analysis and simulation.
Of course in Scrabble a huge part of it comes down to just memorizing the words in the dictionary.
> When you read Go strategy resources, you see a lot of things divided into what best practices were before AlphaGo and what they are now. It's a whole big thing.
Yes and no. The biggest takeaway from AI is that learning all the joseki doesn't actually matter that much, which has freed up players (except for the pros) to spend more of their time focusing on the more fun and interesting parts of the game.
There are a lot of videos showing which josekis and strategies the AI recommends, but as a human you're likely not going to be any better off following them. This is for the same reason AI analysis of fights is largely useless: the reason you lost the big fight (and the game) isn't that you didn't find that one obscure 9p move that could have saved you, but rather that you let yourself get cut 50 moves earlier. But the AI will never show you the move where you got cut as the reason you lost the game; it will only show you the one random move that you'd never in a million years actually be able to find.
You also see a similar division between the 19th and 20th centuries, when a player called Go Seigen changed the way the game is played even more, I feel, than AI did (but don't take my word for it; at 7 kyu I'm far from qualified to understand how professionals play).
The article is misleading regarding the history of chess. Magnus excepted, most top players did adopt a more cold and calculating material-focused chess style that mimicked Deep Blue and subsequent chess computers. It was only with the success of AlphaGo and LC0 that top chess players have started playing a more creative playstyle again, playing various wing pawn advances, as well as being more willing to give up material for nebulous initiative or positional advantages.
> Shin et al calculate about 40 percent of the improvement came from moves that could have been memorized by studying the AI. But moves that deviated from what the AI would do also improved, and these “human moves” accounted for 60 percent of the improvement.
I don't often play Go myself but a number of my friends do. Among non-professional players, it is really common to see game play being not as exciting as before because there's now an easy way: just memorize and copy what the AI does. I don't doubt that professional players still have a ton of creativity, but a lot of non-pros don't really have too much creativity and the whole game becomes memorizing and replicating AI moves.
> Among non-professional players, it is really common to see game play being not as exciting as before because there's now an easy way: just memorize and copy what the AI does
This is just… not true.
Unless one is playing at high dan ranks, it’s trivially easy to induce a “memorized sequence” that your opponent either will not have memorized or will leave them with a situation that they don’t understand well enough to capitalize on.
The “slack moves” in the openings that pros talk about are often worth 1.5 points or less (often a fraction of a point), and that assumes pro-level follow up.
This pro-level follow up is laughably rare outside of strong amateur dan levels and pro levels (and even within those ranks there are substantial differences).
Before that, weak amateurs were just replicating human joseki. That's nothing new. They definitely give a player a good start, but knowing which to use and when, and of course how to follow up until the game is over is no simple task. It also happens to be the case that AlphaGo, KataGo etc. prefer simplifying the board state. Remove complexity and win only by a thin margin, because that's all that's needed. Memorizing AI preferences is much easier than some of these highly complicated joseki.
You'd be surprised. Joseki are corner shapes, which might interact with other corners in the medium to long run, but whose interactions are way too difficult for any human to understand well. Therefore, you have 4 corners, and it's quite likely that you'll see 4 joseki getting played in any game. Joseki sequences have been studied for a long time, so they can be relatively long: Say, 15+ moves of an avalanche joseki, memorized by both players, and that's just one corner. So even before computers were any good, you could still see pretty iffy players using memorized patterns in every corner for a total way past 20 moves.
The accepted approach used to be that the direction of play mattered. Now the AI has told us that no, just get locally-even results in all corners and you're fine. I never would've guessed!
Perhaps sub-positions still repeat with some regularity? Meaning subsets of the board. I have never played Go however, I've only seen the board and read the rules.
https://www.chess.com/news/view/2024-fide-candidates-tournam...
Mistakes on both sides, including the side that presumably prepared this line with help from computers.
1. https://www.youtube.com/watch?v=7a4HF3dIcuo
In football it used to be 2 points for a win and 1 point each for a draw. In 1981 they made it 3 points for a win, and the sport has had substantially more offensive play since.
https://www.gokgs.com/
and this is the web client:
https://shin.gokgs.com/
The homepage hasn't had a redesign since 2007 at the latest, but the community is great and there are top players on there.
[1]: https://lichess.org/broadcast/fide-candidates-2024--open/rou...
https://senseis.xmp.net/?GoServers
I used to play on KGS[0] via GoUniverse Chrome plugin [1]. Not sure if there are enough players there today. Fox and Tygem are huge.
[0] http://www.gokgs.com/
[1] https://chromewebstore.google.com/detail/gouniverse/iejedhnb...
KataGo likely surpasses AlphaGo, and just like Stockfish, it speaks a standard protocol (GTP) that can hook into many user-interface apps: https://github.com/lightvector/KataGo?tab=readme-ov-file#gui...
From those technologies, also came an interesting visualisation of how human players changed their habits following AlphaGo: https://drive.google.com/file/d/16-ntvk3D1_pgjJ7u64t4jMYMh0z...
KaTrain is a good frontend.
[1] https://senseis.xmp.net/?44PointLowApproach#toc6
[2] https://senseis.xmp.net/?33PointInvasion#toc2
It does hit its limits, but it's been so useful - it's a funny cycle of training AI and having it train us, a great symbiotic relationship.
Dead Comment
The humans are still way worse than the Go programs. People are still willing to pay them to play a game as entertainment. Are lots of people willing to pay you to do whatever it is you do even when AIs do it much better, out of sheer entertainment value & sentimentality? If they are willing to pay you in particular, how many other people like you are they also willing to pay for, and is that number much greater than or much less than the current number paid to do it?
It is still the case, though, that AI dominates humans at Go; humans didn't get so creative about the game that they put AI back on its heels (though some did discover exploitable AI "strategy bugs").
Keep in mind that if you had a variation of Go with a "hole" in the middle of the board, both Lee Sedol and a competent amateur would be able to play competent "Doughnut Go" without any prior experience. But AlphaGo and its successors would certainly make a ton of dumb unforced errors unless they had practiced at least a few hundred games. (I am basing this observation on similar experiments with a similar Breakout AI; I'm not sure if these experiments have been done with Go.)
Mammals, including humans, have advanced brains because we evolved to solve weird and unexpected problems with moderate reliability, not to optimize well-known benchmarks with high reliability. (This is also why plants are green instead of black.) By contrast, AlphaGo is a machine designed to solve a highly specific problem. The whole point of machines is that they dominate humans at specific tasks, otherwise we would just use a human. But we don't describe bulldozers as "superhuman" unless we're being intentionally obscure; the same should apply to AI. Otherwise we risk assuming the AI is capable of things it probably can't do without retraining.
I am still so impressed that Lee Sedol beat Alpha Go 1 game out of 5 way back when AI made its breakout. I was sad he felt so sheepish afterward for losing. In hindsight, I think it was an amazing accomplishment even if today an AI could beat Shin Jin-Seo (#1 player) 100 out of 100 times!
When I was playing seriously there were strong players who played a ton over a board and had deep intuition about what made plays good and what made plays bad. In the late 1990s/ early 2000s there started to be a lot more in the way of computer simulation and analysis and some very strong computer players.
One (general) example was that older players liked the idea of making longer plays using more tiles to "win" a race to the S and blank tiles (the best tiles in the bag). Computer simulations generally show that turnover (as this is called) isn't optimal and you're better off holding strong combinations of letters rather than playing them off hoping to draw something better.
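The turnover trade-off can be put in rough numbers with a hypergeometric calculation. This is a toy sketch with made-up tile counts (not actual Scrabble endgame numbers): it computes the chance of fishing at least one premium tile (an S or a blank) as a function of how many tiles you draw, which shows that extra turnover buys only a modest bump in draw odds, against which the lost rack strength has to be weighed.

```python
from math import comb

def p_premium(draws, bag_size, premiums):
    """Probability of drawing at least one premium tile (S or blank)
    when taking `draws` tiles from a bag of `bag_size` tiles that
    contains `premiums` premium tiles (hypergeometric)."""
    if draws > bag_size - premiums:
        return 1.0  # can't avoid them all
    # 1 minus the probability that every drawn tile is non-premium
    return 1 - comb(bag_size - premiums, draws) / comb(bag_size, draws)

# Toy scenario: 40 tiles left in the bag, 4 of them "premium".
for draws in (1, 3, 5, 7):
    print(draws, round(p_premium(draws, 40, 4), 3))
```

With these assumed numbers, going from a 1-tile to a 7-tile draw roughly quintuples the fishing odds but still leaves it closer to a coin flip than a sure thing, which is consistent with the simulation finding that dumping a strong rack for turnover usually isn't worth it.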
Now younger players are better than ever because all of their training came with the help of computer analysis and simulation.
Of course in Scrabble a huge part of it comes down to just memorizing the words in the dictionary.
Yes and no. The biggest takeaway from AI is that learning all the joseki doesn't actually matter that much, which has freed up players (except for the pros) to spend more of their time focusing on the more fun and interesting parts of the game.
There are a lot of videos showing which josekis and strategies the AI recommends, but as a human you're likely not going to be any better off following them. This is the same reason AI analysis of fights is largely useless: the reason you lost the big fight (and the game) isn't that you didn't find that one obscure 9P move that could have saved you, but rather that you let yourself get cut 50 moves earlier. The AI will never show you the move where you got cut as the reason you lost the game; it will only show you the one random move that you'd never in a million years actually find.
This video from Shygost sums up the most important strategy stuff that you actually need to know in order to get strong: https://www.youtube.com/watch?v=ig8cWuDSHTg
Rilke, "The Man Watching" ("Der Schauende")
https://martyrion.blogspot.com/2009/11/man-watching-der-scha...
I don't often play Go myself, but a number of my friends do. Among non-professional players, it's really common for the games to be less exciting than before, because there's now an easy path: just memorize and copy what the AI does. I don't doubt that professional players still have a ton of creativity, but a lot of non-pros don't have much, and the whole game becomes memorizing and replicating AI moves.
This is just… not true.
Unless one is playing at high dan ranks, it’s trivially easy to induce a “memorized sequence” that your opponent either will not have memorized, or that will leave them with a situation they don’t understand well enough to capitalize on.
The “slack moves” in the openings that pros talk about are often worth 1.5 points or less (often a fraction of a point), and that assumes pro-level follow up.
This pro-level follow up is laughably rare outside of strong amateur dan levels and pro levels (and even within those ranks there are substantial differences).
That makes no sense. After 10-20 moves you are surely in a position that has never been played before. How do you memorize moves after that?