Google DeepMind’s AI learns to play with physical objects

by Timothy Revell, New Scientist

Push it, pull it, break it, maybe even give it a lick. Children experiment this way to learn about the physical world from an early age. Now, artificial intelligence trained by researchers at Google’s DeepMind and the University of California, Berkeley, is taking its own baby steps in this area.

“Many aspects of the world, like ‘Can I sit on this?’ or ‘Is it squishy?’ are best understood through experimentation,” says DeepMind’s Misha Denil. In a paper currently under review, Denil and his colleagues have trained an AI to learn about the physical properties of objects by interacting with them in two different virtual environments.

In the first, the AI was faced with five blocks that were the same size but had a randomly assigned mass that changed each time the experiment was run. The AI was rewarded if it correctly identified the heaviest block but given negative feedback if it was wrong. By repeating the experiment, the AI worked out that the only way to determine the heaviest block was to interact with all of them before making a choice.
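The setup described above can be sketched as a toy environment. This is not DeepMind's actual code (the paper uses a physics-simulated agent trained with reinforcement learning); it is a minimal illustration, under assumed names and parameters, of the reward scheme: hidden random masses, noisy observations from interaction, and a +1/-1 reward for the final guess.

```python
import random

class WhichIsHeavier:
    """Toy sketch of the 'heaviest block' task: five equally sized
    blocks with hidden, randomly assigned masses."""

    def __init__(self, n_blocks=5):
        # Masses are re-randomised each episode (hypothetical range).
        self.masses = [random.uniform(0.1, 1.0) for _ in range(n_blocks)]

    def poke(self, i):
        # Interacting with a block yields a noisy observation of its mass.
        return self.masses[i] + random.gauss(0, 0.01)

    def guess(self, i):
        # +1 reward for naming the heaviest block, -1 otherwise.
        heaviest = max(range(len(self.masses)), key=self.masses.__getitem__)
        return 1 if i == heaviest else -1

env = WhichIsHeavier()
# The strategy the agent must discover: interact with every block
# before committing to an answer, then pick the one that felt heaviest.
readings = [env.poke(i) for i in range(5)]
reward = env.guess(max(range(5), key=readings.__getitem__))
```

An agent that guesses without poking earns -1 on average, which is why, as the article notes, the trained AI learned to interact with all the blocks before choosing.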

The second experiment also featured up to five blocks, but this time they were arranged in a tower. Some of the blocks were stuck together to make one larger block, while others were not. The AI had to work out how many distinct blocks there were, again receiving a reward or negative feedback depending on its answer. Over time, the AI learned it had to interact with the tower – essentially pulling it apart – to determine the correct answer.

It’s not the first time AI has been given blocks to play with. Earlier this year, Facebook used simulations of stacked blocks to teach neural networks to predict whether a tower would fall over.
