by Carlos E. Perez, Medium
If you’ve been following my articles up to now, you’ll have begun to perceive what is apparent to many advanced practitioners of Deep Learning (DL): the emergence of game-theoretic concepts in the design of newer architectures.
This makes intuitive sense for two reasons. The first intuition is that DL systems will eventually need to tackle situations with imperfect knowledge. In fact, we’ve already seen this in DeepMind’s AlphaGo, which used partial knowledge to tactically and strategically best the world’s best human player in the game of Go. Continue reading Game Theory reveals the Future of Deep Learning
by Tia Ghose, Live Science
Spookily powerful artificial intelligence (AI) systems may work so well because their structure exploits the fundamental laws of the universe, new research suggests.
The new findings may help answer a longstanding mystery about a class of artificial intelligence that employs a strategy called deep learning. These deep learning or deep neural network programs, as they’re called, are algorithms with many layers in which lower-level calculations feed into higher ones. Deep neural networks often perform astonishingly well at solving problems as complex as beating the world’s best player of the strategy board game Go or classifying cat photos, yet no one fully understands why. Continue reading The Spooky Secret Behind Artificial Intelligence’s Incredible Power
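The layered structure described above, where each layer's output becomes the next layer's input, can be sketched in a few lines of plain Python. This is a minimal illustration only; the layer sizes, weights, and ReLU activation are arbitrary choices for the example, not details from the research discussed.

```python
import random

def relu(v):
    # Common nonlinearity: pass positive values, zero out negatives.
    return [max(0.0, x) for x in v]

def dense_layer(inputs, weights):
    # Each output unit is a weighted sum over all inputs (a fully connected layer).
    return relu([sum(i * w for i, w in zip(inputs, row)) for row in weights])

def rand_weights(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

random.seed(0)
# Three stacked layers: lower-level outputs feed into higher-level ones.
w1, w2, w3 = rand_weights(4, 8), rand_weights(8, 8), rand_weights(8, 2)

x = [0.5, -1.2, 0.3, 0.9]  # raw input features (the lowest level)
for w in (w1, w2, w3):
    x = dense_layer(x, w)   # each layer builds a higher-level representation

print(len(x))  # two final high-level outputs
```

Training such a network means adjusting the random weights above so the final outputs match known answers; the mystery the article refers to is why this simple stacking works so remarkably well in practice.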
by Cade Metz, WIRED
On the west coast of Australia, Amanda Hodgson is launching drones out towards the Indian Ocean so that they can photograph the water from above. The photos are a way of locating dugongs, or sea cows, in the bay near Perth—part of an effort to prevent the extinction of these endangered marine mammals. The trouble is that Hodgson and her team don’t have the time needed to examine all those aerial photos. There are too many of them—about 45,000—and spotting the dugongs is far too difficult for the untrained eye. So she’s giving the job to a deep neural network. Continue reading 2016: The Year That Deep Learning Took Over the Internet
Image-processing system learns largely on its own, much like a human baby
Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new “deep learning” method that enables computers to learn about the visual world largely on their own, much as human babies do. Continue reading Research team sets new mark for ‘deep learning’
by Emerging Technology from the arXiv, MIT Technology Review
Can you tell the difference between music composed by Bach and by a neural network?
Johann Sebastian Bach is widely considered one of the great composers of baroque music. Bach lived and worked in Germany during the 18th century and is revered for the beauty of his compositions and his technical mastery of harmony and counterpoint. Continue reading Deep-Learning Machine Listens to Bach, Then Writes Its Own Music in the Same Style
by Mr. Sunil Patel, AI eHive
Big companies like Google, Facebook, Intel, and IBM are investing heavily in Artificial Intelligence and Machine Learning. Deep Learning (DL) is a specialized type of machine learning: it is about learning multiple levels of representation and abstraction that help make sense of data such as images, sound, and text. Continue reading Deep Learning : What, Why and Applications
by Nick Cowen, ARS Technica
How Google’s AI research team has teamed up with Blizzard to further deep learning in AI
The announcement at BlizzCon 2016 that met with the most muted response was arguably the most revolutionary.
While new content for the likes of Hearthstone, Heroes of the Storm, Overwatch, and Diablo III drew appreciative roars from the Blizzard faithful, the news that Google’s DeepMind branch—which is dedicated to developing sophisticated artificial intelligence—would be teaming up with the makers of Starcraft 2 to further its research on AI elicited more of a murmur. Continue reading Google DeepMind could invent the next generation of AI by playing Starcraft 2
by Toshiba Corporation, Phys.Org
Toshiba Corporation continues to build on its commitment to promoting the Internet of Things and Big Data analysis with the development of a Time Domain Neural Network (TDNN) that uses an extremely low-power neuromorphic semiconductor circuit to perform processing for Deep Learning. Unlike conventional digital processors, TDNN is composed of a massive number of tiny processing units that use Toshiba’s original analog technique. TDNN was reported on November 8 at A-SSCC 2016 (Asian Solid-State Circuits Conference 2016), an IEEE-sponsored international conference on semiconductor circuit technology held in Japan. Continue reading Toshiba advances deep learning with extremely low power neuromorphic processor