Tag Archives: Google DeepMind

Game Theory reveals the Future of Deep Learning

by Carlos E. Perez, Medium

If you’ve been following my articles up to now, you’ll begin to perceive what is already apparent to many advanced practitioners of Deep Learning (DL): the emergence of game-theoretic concepts in the design of newer architectures.

This makes intuitive sense for two reasons. The first intuition is that DL systems will eventually need to tackle situations with imperfect knowledge. In fact, we’ve already seen this in DeepMind’s AlphaGo, which uses partial knowledge to tactically and strategically best the world’s best human players in the game of Go. Continue reading Game Theory reveals the Future of Deep Learning
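
One concrete, widely cited example of a game-theoretic formulation in deep learning (offered here as an illustration, not as a summary of the article’s own argument) is the generative adversarial network, which trains a generator G against a discriminator D as a two-player minimax game:

```latex
\min_{G} \max_{D} \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

At the game’s equilibrium the generator’s samples are indistinguishable from real data, which is exactly the kind of adversarial reasoning the article argues is spreading through newer architectures.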

New AI To Take On World’s Best Poker Players

by Patrick Caughill, Futurism

RISE OF THE (GAMING) MACHINES

As time progresses, we hear of more and more artificial intelligence (AI) systems being developed that can defeat the world’s greatest game players. It’s pretty much a constant one-upping of tall tales for the information age. But instead of John Henry versus the steam-powered hammer at steel driving, we began our new era in 1997 with Garry Kasparov versus IBM’s Deep Blue at chess. Continue reading New AI To Take On World’s Best Poker Players

Artificial Intelligence Is More Artificial Than Intelligent

by Assaf Baciu, WIRED

DEEPMIND has surpassed the human mind on the Go board. Watson has crushed America’s trivia gods on Jeopardy. But ask DeepMind to play Monopoly or Watson to play Family Feud, and they won’t even know where to start. Because these artificial intelligence engines weren’t specifically designed to play these games and aren’t smart enough to figure them out by themselves, they’ll give nonsensical answers. They’ll struggle greatly, and humans will outperform them—by a lot. Continue reading Artificial Intelligence Is More Artificial Than Intelligent

Google DeepMind could invent the next generation of AI by playing Starcraft 2

by Nick Cowen, ARS Technica

How Google’s AI research team has teamed up with Blizzard to further deep learning in AI

The announcement at BlizzCon 2016 that met with the most muted response was arguably the most revolutionary.

While new content for the likes of Hearthstone, Heroes of the Storm, Overwatch, and Diablo III drew appreciative roars from the Blizzard faithful, the news that Google’s DeepMind branch—which is dedicated to developing sophisticated artificial intelligence—would be teaming up with the makers of Starcraft 2 to further its research on AI elicited more of a murmur. Continue reading Google DeepMind could invent the next generation of AI by playing Starcraft 2

Google’s New AI Gets Smarter Thanks to a Working Memory

by Shelly Fan, SingularityHub

“The behavior of the computer at any moment is determined by the symbols which he is observing and his ‘state of mind’ at that moment.” – Alan Turing

Artificial intelligence has a memory problem.

Back in early 2015, Google’s mysterious DeepMind unveiled an algorithm that could teach itself to play Atari games. Based on deep neural nets, the AI impressively mastered nostalgic favorites such as Space Invaders and Pong without needing any explicit programming — it simply learned through millions of examples. Continue reading Google’s New AI Gets Smarter Thanks to a Working Memory

Google DeepMind AI’s Ability To Discern Physical Objects Is Mere Child’s Play

by Rob Williams, HotHardware

Google’s DeepMind has been working on some truly incredible things over the past couple of years. Just last week, we learned that DeepMind would be teaching itself how to play StarCraft II, which wouldn’t be the first time it had a gaming focus. Before Google acquired DeepMind a couple of years ago, its AI was used to learn and conquer Atari games, and more recently, it taught itself how to beat an expert at Go. Continue reading Google DeepMind AI’s Ability To Discern Physical Objects Is Mere Child’s Play

AI accountability needs action now, say UK MPs

by Natasha Lomas, TechCrunch

A UK parliamentary committee has urged the government to act proactively — and to act now — to tackle “a host of social, ethical and legal questions” arising from growing usage of autonomous technologies such as artificial intelligence. Continue reading AI accountability needs action now, say UK MPs

Google AI invents its own cryptographic algorithm; no one knows how it works

by Sebastian Anthony, ARS Technica UK

Neural networks seem good at devising crypto methods; less good at code breaking.

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch. Continue reading Google AI invents its own cryptographic algorithm; no one knows how it works
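
The setup the article describes pits three networks against one another: Alice encrypts a plaintext with a shared key, Bob decrypts the ciphertext with the same key, and Eve tries to recover the plaintext from the ciphertext alone. The sketch below is a minimal, hedged reconstruction of that adversarial training loop; the layer shapes, the mean-L1 losses, and the hyperparameters are my own assumptions rather than the published architecture.

```python
# Minimal sketch of adversarial neural cryptography: Alice/Bob learn to
# communicate so that Bob recovers the plaintext while Eve cannot.
# Architectures and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

N = 16  # bits per plaintext / key (assumption)

def mlp():
    # Toy stand-in for the paper's encryption/decryption networks.
    return nn.Sequential(nn.Linear(2 * N, 2 * N), nn.Tanh(),
                         nn.Linear(2 * N, N), nn.Tanh())

alice, bob = mlp(), mlp()
eve = nn.Sequential(nn.Linear(N, 2 * N), nn.Tanh(),
                    nn.Linear(2 * N, N), nn.Tanh())

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)
l1 = nn.L1Loss()

def random_bits(batch=256):
    # Plaintexts and keys as random vectors in {-1, +1}.
    return torch.randint(0, 2, (batch, N)).float() * 2 - 1

for step in range(5000):
    # Alice/Bob phase: Bob should recover P; Eve should do no better than chance
    # (per-bit L1 error of ~1.0 on this scale).
    p, k = random_bits(), random_bits()
    c = alice(torch.cat([p, k], dim=1))
    p_bob = bob(torch.cat([c, k], dim=1))
    p_eve = eve(c)
    loss_ab = l1(p_bob, p) + (1.0 - l1(p_eve, p)) ** 2
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # Eve phase: train on fresh batches to recover P from C alone.
    p, k = random_bits(), random_bits()
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_eve = l1(eve(c), p)
    opt_eve.zero_grad(); loss_eve.backward(); opt_eve.step()
```

The key design choice, mirrored from the article’s description, is that Alice and Bob are rewarded both for accurate reconstruction by Bob and for keeping Eve’s error near chance, while Eve is trained purely to minimise her own reconstruction error.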

Google DeepMind’s AI learns to play with physical objects

by Timothy Revell, New Scientist

Push it, pull it, break it, maybe even give it a lick. Children experiment this way to learn about the physical world from an early age. Now, artificial intelligence trained by researchers at Google’s DeepMind and the University of California, Berkeley, is taking its own baby steps in this area.

“Many aspects of the world, like ‘Can I sit on this?’ or ‘Is it squishy?’ are best understood through experimentation,” says DeepMind’s Misha Denil. In a paper currently under review, Denil and his colleagues have trained an AI to learn about the physical properties of objects by interacting with them in two different virtual environments. Continue reading Google DeepMind’s AI learns to play with physical objects
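
The full method, deep reinforcement learning agents that decide how to poke and when to answer, is beyond a short excerpt, but the physical cue such agents can exploit fits in a few lines. This is a toy illustration under assumed toy dynamics, not the paper’s algorithm:

```python
# Toy illustration of "learning by poking": apply the same push to two blocks
# and infer which is heavier from how far each one moves.
import random

def push(mass, force=1.0, friction=0.5):
    # Toy dynamics: displacement shrinks as mass grows (roughly a = F/m), plus noise.
    return force / (friction * mass) + random.gauss(0, 0.05)

def which_is_heavier(mass_a, mass_b, n_pokes=5):
    """Poke each block a few times and compare average displacement."""
    da = sum(push(mass_a) for _ in range(n_pokes)) / n_pokes
    db = sum(push(mass_b) for _ in range(n_pokes)) / n_pokes
    return "A" if da < db else "B"   # the block that moves less is heavier

print(which_is_heavier(mass_a=2.0, mass_b=1.0))  # usually "A"
```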

Machines Can Now Recognize Something After Seeing It Once

by Will Knight, MIT Technology Review

Algorithms usually need thousands of examples to learn something. Researchers at Google DeepMind found a way around that.

Most of us can recognize an object after seeing it once or twice. But the algorithms that power computer vision and voice recognition need thousands of examples to become familiar with each new image or word.

Researchers at Google DeepMind now have a way around this. They made a few clever tweaks to a deep-learning algorithm that allows it to recognize objects in images and other things from a single example—something known as “one-shot learning.” The team demonstrated the trick on a large database of tagged images, as well as on handwriting and language. Continue reading Machines Can Now Recognize Something After Seeing It Once
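
The article doesn’t spell out the mechanism, but the DeepMind models it refers to share a simple core idea: embed the new example and the few labelled examples you already have, then classify by similarity. Below is a minimal, self-contained sketch of that idea; the embedding function is a placeholder, since in the real systems it is learned end to end, often together with attention and external memory.

```python
# Sketch of one-shot classification by nearest-neighbour matching over
# learned embeddings. The embedding here is a stand-in for a trained network.
import numpy as np

def embed(x):
    # Placeholder embedding: unit-normalise the input vector.
    return x / (np.linalg.norm(x) + 1e-8)

def one_shot_classify(query, support_examples, support_labels):
    """Classify `query` given a single labelled example per class (the support set)."""
    q = embed(query)
    sims = [q @ embed(s) for s in support_examples]   # cosine similarities
    weights = np.exp(sims - np.max(sims))
    weights /= weights.sum()                          # softmax attention over the support set
    # Predicted label = label of the support example with the highest attention weight.
    return support_labels[int(np.argmax(weights))]

# Toy usage: three "classes" seen once each, then a query near the second one.
support = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
labels = ["cat", "dog", "bird"]
print(one_shot_classify(np.array([0.1, 0.9]), support, labels))  # -> "dog"
```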