A Google research team has trained a network of 1,000 computers, wired up like a brain, to recognise cats.
The team built a neural network, modelled on the workings of a biological brain, that worked out how to spot pictures of cats in just three days.
The cat-spotting computer was created as part of a larger project to investigate machine learning.
Google is planning to use the learning system to help with its indexing systems and with language translation.
The computer system was put together by Google staff scientists from its X Labs division working with Prof Andrew Ng, head of the artificial intelligence lab at Stanford University, California.
The team's work stands at odds with many image-recognition techniques, which depend on telling a computer what specific features of a target object to look for before any images are presented to it.
By contrast, the Google machine knew nothing about the images it was to see. Instead, its 16,000 processing cores ran software that simulated the workings of a biological neural network with about one billion connections.
In a similar way, neurons in the brain are heavily interconnected, and it is believed that "recognition" involves the triggering of a specific pathway through that thicket of connections.
Pathways for particular objects, people or other stimuli are thought to be built up as organisms learn about the world. Some neuroscientists speculate that parts of the human visual system become so specialised they recognise very specific subjects such as a person's grandmother or their cat.
As millions of images were analysed by Google's network of silicon nerves, some parts of it started to react to specific elements in those pictures.
After three days and 10 million images the network could spot a cat, even though it had never been told what one looked like.
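For readers curious how learning without labels can work at all, here is a minimal sketch of the general principle, making no assumptions about Google's actual implementation: a tiny single-layer autoencoder written in Python with NumPy. It is fed unlabelled "image patches" (random numbers here, purely for illustration), learns to reconstruct them, and in doing so its hidden units become detectors for recurring patterns. All sizes and names are illustrative, and this toy is nothing like the billion-connection system described above.

    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs, n_hidden, lr = 64, 16, 0.01   # illustrative sizes, not Google's
    # Random data stands in for unlabelled image patches in this sketch.
    patches = rng.standard_normal((5000, n_inputs))

    W_enc = rng.standard_normal((n_inputs, n_hidden)) * 0.1  # encoder weights
    W_dec = rng.standard_normal((n_hidden, n_inputs)) * 0.1  # decoder weights

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for epoch in range(3):
        for x in patches:
            h = sigmoid(x @ W_enc)   # hidden activations: the learned "features"
            x_hat = h @ W_dec        # attempted reconstruction of the input
            err = x_hat - x          # reconstruction error; no labels involved

            # Gradient descent on the squared reconstruction error.
            grad_h = (W_dec @ err) * h * (1 - h)  # backpropagate error to the hidden layer
            W_dec -= lr * np.outer(h, err)        # update decoder weights
            W_enc -= lr * np.outer(x, grad_h)     # update encoder weights

    # Each column of W_enc is now a feature detector shaped purely by the data.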
Despite their success, the researchers were reluctant to speculate about how closely the system resembled biology. In an interview with the New York Times, they said their computer system might push the limits of current work on neural networks, but that it was dwarfed by the complexity of the human visual processing system.
The positive results, wrote the researchers, were a surprise and ran counter to the intuition that learning could not take place when so little context and guidance was given.
As well as spotting cats, the computer system also learned how to pick out the shape of the human body and to recognise human faces.
The work is now moving beyond the lab, and Google is looking into ways to use it in its main search business, either to help categorise what is found online or to aid language translation and speech recognition.
The team is presenting a paper on its findings at the International Conference on Machine Learning that is being held in Edinburgh from 25 June to 1 July.