

Topic: AI playing Go

  • SkepticTank
  • Global Moderator
  • Calmer than you are

Re: AI playing Go
Reply #1
There are videos of at least three of the informal online games between Ke Jie and AlphaGo. Of those three, Ke Jie won two (more were played, and I'm not sure what the final overall score was). For any Go players here who might be interested, below is the first of the three, which AlphaGo won.

https://www.youtube.com/watch?v=5zrFUTqFGUk
  • Last Edit: April 10, 2017, 09:17:23 AM by Recusant

Re: AI playing Go
Reply #2
DeepMind has built a new version of AlphaGo: AlphaGo Zero. It learned to play Go on its own, given nothing but the rules, and went on to beat the original AlphaGo in every game they played.

"'It's able to create knowledge itself': Google unveils AI that learns on its own" | The Guardian

Quote
Google's artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo - an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules. In games against the 2015 version, which famously beat Lee Sedol, the South Korean grandmaster, in the following year, AlphaGo Zero won 100 to 0.

The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.

At DeepMind, which is based in London, AlphaGo Zero is working out how proteins fold, a massive scientific challenge that could give drug discovery a sorely needed shot in the arm.

"For us, AlphaGo wasn't just about winning the game of Go," said Demis Hassabis, CEO of DeepMind and a researcher on the team. "It was also a big step for us towards building these general-purpose algorithms." Most AIs are described as "narrow" because they perform only a single task, such as translating languages or recognising faces, but general-purpose AIs could potentially outperform humans at many different tasks. In the next decade, Hassabis believes that AlphaGo's descendants will work alongside humans as scientific and medical experts.

Previous versions of AlphaGo learned their moves by training on thousands of games played by strong human amateurs and professionals. AlphaGo Zero had no such help. Instead, it learned purely by playing itself millions of times over. It began by placing stones on the Go board at random but swiftly improved as it discovered winning strategies.

[Continues . . .]

  • ksen
Re: AI playing Go
Reply #3
I was looking for a good Go game for my phone, but I wasn't sure about the ones I looked at.

Any good PC or mobile versions of the game you're aware of?

Re: AI playing Go
Reply #4
It's been a while since I played with computer Go programs. For PC, you could try Fuego and see what you think. There is also GNU Go, which has been around for a long time, though you need to hook it up to a GUI like MultiGo to use it. All of those are free, but if you're willing to spend some money, Many Faces of Go is very good (I have an older version of that, which isn't too bad for a kyu-level player like myself). Unless you've been playing Go for a while, I expect either Fuego or GNU Go would be satisfactory.

For mobile, Google Play has three free programs. I use a dumbfone, so I can't tell you whether any of them are good.

You can also play the 'bots' on IGS or KGS (uses a Java client). Those are 'live' servers, but you can also play turn-based, correspondence-style Go on Dragon Go Server, while OGS offers both turn-based and live games. All of these servers are mainly for playing against other Go players, but they offer Go-playing programs (the 'bots' I mentioned above) as well.

You can take a look around Sensei's Library, which has a lot of information about Go programs, though you'll find that some of the links are out of date.
  • Last Edit: October 19, 2017, 10:47:29 AM by Recusant

Re: AI playing Go
Reply #5
The Atlantic has an article about professional players' reactions to the play of these programs, which also includes a bit of famous Go history.

"The AI That Has Nothing to Learn From Humans" | The Atlantic

Quote
Now that AlphaGo's arguably got nothing left to learn from humans--now that its continued progress takes the form of endless training games against itself--what do its tactics look like, in the eyes of experienced human players? We might have some early glimpses into an answer.

AlphaGo Zero's latest games haven't been disclosed yet. But several months ago, the company publicly released 55 games that an older version of AlphaGo played against itself. (Note that this is the incarnation of AlphaGo that had already made quick work of the world's champions.) DeepMind called its offering a "special gift to fans of Go around the world."

Since May, experts have been painstakingly analyzing the 55 machine-versus-machine games. And their descriptions of AlphaGo's moves often seem to keep circling back to the same several words: Amazing. Strange. Alien.

"They're how I imagine games from far in the future," Shi Yue, a top Go player from China, has told the press. A Go enthusiast named Jonathan Hop who's been reviewing the games on YouTube calls the AlphaGo-versus-AlphaGo face-offs "Go from an alternate dimension." From all accounts, one gets the sense that an alien civilization has dropped a cryptic guidebook in our midst: a manual that's brilliant--or at least, the parts of it we can understand.

[Continues . . .]

  • ksen
Re: AI playing Go
Reply #6
9-D Go

eta: thanks for the suggestion upthread! :hug:

Re: AI playing Go
Reply #7
My pleasure.   :)

Re: AI playing Go
Reply #8
An addendum to the suggestions for a Go-playing program: I came across the new version of Leela, a strong, free program for PC, and have heard good things about it. It uses Chinese rules, but they aren't that much different from the Japanese rules that many American and European Go players know. Two of the primary differences are that dame (neutral points) are generally all filled in at the end of the game, and prisoners aren't counted.
  • Last Edit: October 24, 2017, 11:44:57 AM by Recusant
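To make the scoring difference concrete, here's a toy sketch (the function names and numbers are my own illustration, not taken from Leela or any rule set's official text):

```python
def chinese_score(living_stones, territory):
    """Area scoring (Chinese rules): every living stone on the board
    counts, plus every empty point your stones surround."""
    return living_stones + territory

def japanese_score(territory, prisoners_taken):
    """Territory scoring (Japanese rules): empty points you surround,
    plus enemy stones you captured; your own stones score nothing."""
    return territory + prisoners_taken

# Toy position: Black has 61 living stones, surrounds 14 empty points,
# and captured 3 White stones during the game.
print(chinese_score(61, 14))      # 75 area points
print(japanese_score(14, 3))      # 17 territory points

# Filling a dame (neutral point) with one more Black stone:
print(chinese_score(61 + 1, 14))  # 76: the stone itself scores under area rules
print(japanese_score(14, 3))      # still 17: dame are worthless under territory rules
```

That's why dame get filled under Chinese rules (each one is a free point) and why prisoners don't need counting: a captured stone is simply absent from the board, which already costs its owner the area it would have occupied.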

Re: AI playing Go
Reply #9
DeepMind's AlphaZero is now trouncing chess and shogi programs, being "self-taught" in both.

"DeepMind's AI became a superhuman chess player in a few hours, just for fun" | The Verge

Quote
The end-game for Google's AI subsidiary DeepMind was never beating people at board games. It's always been about creating something akin to a combustion engine for intelligence -- a generic thinking machine that can be applied to a broad range of challenges. The company is still a long way off achieving this goal, but new research published by its scientists this week suggests they're at least headed down the right path.

In the paper, DeepMind describes how a descendant of the AI program that first conquered the board game Go has taught itself to play a number of other games at a superhuman level. After eight hours of self-play, the program bested the AI that first beat the human world Go champion; and after four hours of training, it beat the current world champion chess-playing program, Stockfish. Then for a victory lap, it trained for just two hours and polished off one of the world's best shogi-playing programs named Elmo (shogi being a Japanese version of chess that's played on a bigger board).

One of the key advances here is that the new AI program, named AlphaZero, wasn't specifically designed to play any of these games. In each case, it was given some basic rules (like how knights move in chess, and so on) but was programmed with no other strategies or tactics. It simply got better by playing itself over and over again at an accelerated pace -- a method of training AI known as "reinforcement learning."

[Continues . . .]

That brief mention doesn't do justice to shogi, a fascinating game in which captured pieces become, essentially, "paratroops" for the capturing side.
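The training method the article names, reinforcement learning through self-play, can be sketched in miniature. Everything below is my own toy illustration (nothing from DeepMind's actual system): a tabular value function learns a trivial take-away game purely by playing against itself, starting from essentially random play, much as the article describes.

```python
import random

# Toy game: a pile of N stones; each move removes 1 or 2; whoever takes
# the last stone wins. Perfect play always leaves a multiple of 3 behind.
N = 10
V = {}  # V[pile] = learned win probability for the player to move

def value(pile):
    if pile == 0:
        return 0.0           # no stones left: the player to move has lost
    return V.get(pile, 0.5)  # neutral prior for positions never seen

def choose(pile, eps):
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < eps:
        return random.choice(moves)  # explore occasionally
    # A move is good for us if the resulting position is bad for the opponent.
    return max(moves, key=lambda m: 1.0 - value(pile - m))

def self_play_game(eps=0.1, lr=0.1):
    pile, visited = N, []
    while pile > 0:
        visited.append(pile)
        pile -= choose(pile, eps)
    # The player who made the last move won. Walk back through the visited
    # positions, alternating the outcome, nudging each value toward it.
    outcome = 1.0
    for pos in reversed(visited):
        V[pos] = value(pos) + lr * (outcome - value(pos))
        outcome = 1.0 - outcome

random.seed(0)
for _ in range(20000):
    self_play_game()

# After training, losing positions (pile divisible by 3) should have
# drifted toward low values and winning positions toward high ones.
print(round(value(9), 2), round(value(10), 2))
```

AlphaZero does the same thing at vastly greater scale: the lookup table becomes a deep neural network, and move selection is guided by a search procedure rather than this one-ply greedy rule, but the loop of play-yourself, observe the outcome, update, repeat is the same idea.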
  • Last Edit: December 09, 2017, 03:10:31 PM by Recusant

  • Pandora
  • Resurrected Robot
Re: AI playing Go
Reply #10
Next step: develop an AI that can infer rules from observation.
Just because you're unique doesn't mean you're useful.