In the mid-1980s, British-born computer scientist and psychologist Geoffrey Hinton and others helped revive research interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required major human intervention: programmers had to label data before feeding it to the network, and complex speech or image recognition demanded more computer power than was then available.
During the first decade of the 21st century, Hinton and colleagues at the University of Toronto made fundamental conceptual breakthroughs that led to advances in unsupervised learning procedures for neural networks with rich sensory input.
“In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects” (Robert D. Hof, “Deep Learning,” MIT Technology Review, April 23, 2013, accessed 11-10-2014).
Hinton, G. E.; Osindero, S.; Teh, Y.-W., “A fast learning algorithm for deep belief nets,” Neural Computation 18, no. 7 (2006): 1527–1554.
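The layer-by-layer training Hof describes can be sketched as greedy pretraining of stacked restricted Boltzmann machines, in the spirit of the 2006 paper. The code below is a minimal NumPy illustration only, not the published implementation: the layer sizes, learning rate, number of training sweeps, and the random toy data are all arbitrary assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann machine trained with one-step contrastive divergence.

    A toy sketch: sizes and learning rate are illustrative assumptions.
    """
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one reconstruction step (CD-1).
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # Approximate gradient update from the two phases.
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Greedy layer-wise pretraining: train one RBM, then feed its hidden
# activations to the next layer as that layer's "data".
data = (rng.random((200, 16)) < 0.5).astype(float)  # toy binary input
sizes = [16, 8, 4]                                  # assumed layer widths
layers = []
x = data
for n_vis, n_hid in zip(sizes[:-1], sizes[1:]):
    rbm = RBM(n_vis, n_hid)
    for _ in range(50):
        rbm.cd1_step(x)
    layers.append(rbm)
    x = rbm.hidden_probs(x)  # activations become the next layer's input

print(x.shape)  # final-layer representation for the 200 toy examples
```

Each layer is trained only on the output of the layer below it, which is the key idea: no labels are needed during pretraining, and each successive layer models structure in the features discovered by its predecessor.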