Neural networks are a powerful approach to machine learning, allowing computers to understand images, recognize speech, translate sentences, play Go, and much more. As much as we're using neural networks in our products at Google, there's more to learn about how these systems accomplish these feats. For example, neural networks can learn how to recognize images far more accurately than any program we directly write, but we don't really know how exactly they decide whether a dog in a picture is a Retriever, a Beagle, or a German Shepherd.

We've been working for several years to better grasp how neural networks operate. Last week we shared new research on how these techniques come together to give us a deeper understanding of why networks make the decisions they do—but first, let's take a step back to explain how we got here.

Neural networks consist of a series of "layers," and their understanding of an image evolves over the course of multiple layers. In 2015, we started a project called DeepDream to get a sense of what neural networks "see" at the different layers. It led to a much larger research project that would not only develop beautiful art, but also shed light on the inner workings of neural networks.
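The core idea behind DeepDream-style visualization is activation maximization: instead of adjusting the network's weights, you adjust the input image by gradient ascent so that a chosen unit or layer fires more strongly, revealing what that unit "looks for." Here is a minimal toy sketch of that idea on a tiny hand-rolled network (the weights and sizes are made up for illustration; this is not Google's actual DeepDream code, which operates on full convolutional networks):

```python
import numpy as np

# Toy activation maximization: nudge the input via gradient ascent
# so a chosen hidden unit fires more strongly. All weights below are
# random placeholders, standing in for a trained network's layers.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))    # layer-1 weights (hypothetical)

def hidden(x):
    return np.tanh(W1.T @ x)    # layer-1 activations

x = rng.normal(size=8) * 0.1    # start from a near-blank "image"
unit = 2                        # the hidden unit we want to excite
lr = 0.1

before = hidden(x)[unit]
for _ in range(200):
    h = hidden(x)
    # Analytic gradient: d(h[unit]) / dx = (1 - h[unit]^2) * W1[:, unit]
    grad = (1 - h[unit] ** 2) * W1[:, unit]
    x += lr * grad              # gradient ascent on the INPUT, not the weights

after = hidden(x)[unit]
print(before, after)            # the unit's activation rises toward 1
```

In a real network the same loop runs with automatic differentiation over a deep convolutional model, and the evolving input image is what produces DeepDream's characteristic visuals.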
