On August 8, 2014 scientists from IBM and Cornell University, including Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, and Dharmendra S. Modha, reported in the journal Science the first production-scale neuromorphic computing chip, a significant landmark in the development of cognitive computing. The chip, named TrueNorth, attempted to mimic the way brains recognize patterns, relying on densely interconnected webs of transistors similar to neural networks in the brain. It employed an efficient, scalable, and flexible non–von Neumann architecture. Von Neumann architecture, in which memory and processing are separated and information flows back and forth between the two components, had remained the standard computer architecture from the earliest electronic computers to 2014, so the new neuromorphic chip design represented a radical departure.
"The chip contains 5.4 billion transistors, yet draws just 70 milliwatts of power. By contrast, modern Intel processors in today’s personal computers and data centers may have 1.4 billion transistors and consume far more power — 35 to 140 watts.
"Today’s conventional microprocessors and graphics processors are capable of performing billions of mathematical operations a second, yet the new chip system clock makes its calculations barely a thousand times a second. But because of the vast number of circuits working in parallel, it is still capable of performing 46 billion operations a second per watt of energy consumed, according to IBM researchers.
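The efficiency figure quoted above can be restated as energy per operation: 46 billion synaptic operations per second per watt implies roughly 22 picojoules per operation. A back-of-envelope sketch, using only the article's quoted figure:

```python
# Back-of-envelope arithmetic from the efficiency figure quoted in the
# article (illustrative only, not a measured value).

ops_per_second_per_watt = 46e9  # TrueNorth: 46 billion synaptic ops/s per watt

# One watt is one joule per second, so energy per operation in joules
# is the reciprocal; convert to picojoules (1 pJ = 1e-12 J).
joules_per_op = 1.0 / ops_per_second_per_watt
picojoules_per_op = joules_per_op * 1e12

print(f"{picojoules_per_op:.1f} pJ per synaptic operation")
```

By comparison, conventional processors of the era spent orders of magnitude more energy per logical operation, which is the contrast the article is drawing.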
"The TrueNorth has one million 'neurons,' about as complex as the brain of a bee.
" 'It is a remarkable achievement in terms of scalability and low power consumption,' said Horst Simon, deputy director of the Lawrence Berkeley National Laboratory.
"He compared the new design to the advent of parallel supercomputers in the 1980s, which he recalled was like moving from a two-lane road to a superhighway.
"The new approach to design, referred to variously as neuromorphic or cognitive computing, is still in its infancy, and the IBM chips are not yet commercially available. Yet the design has touched off a vigorous debate over the best approach to speeding up the neural networks increasingly used in computing.
"The idea that neural networks might be useful in processing information occurred to engineers in the 1940s, before the invention of modern computers. Only recently, as computing has grown enormously in memory capacity and processing speed, have they proved to be powerful computing tools" (John Markoff, "IBM Designs a New Chip that Functions Like A Brain," The New York Times, August 7, 2014).
Merolla et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science 345, no. 6197 (August 8, 2014): 668–673.
"Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts" (Abstract).
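The abstract's headline numbers are internally consistent given the per-core layout described in the paper: each of the 4096 neurosynaptic cores holds 256 neurons with a 256 × 256 synaptic crossbar. A quick sketch of the arithmetic (the per-core figures are from the paper's architecture description, not the abstract itself):

```python
# Consistency check of the TrueNorth totals quoted in the abstract,
# assuming 256 neurons per core and a full 256 x 256 crossbar of
# configurable synapses per core (the layout described in the paper).

cores = 4096
neurons_per_core = 256
synapses_per_core = 256 * 256  # every incoming axon can connect to every neuron

neurons = cores * neurons_per_core     # 4096 * 256 = 1,048,576 (~1 million)
synapses = cores * synapses_per_core   # 4096 * 65,536 = 268,435,456 (~256 million)

print(f"{neurons:,} neurons, {synapses:,} synapses")
```

The "million neurons" and "256 million synapses" in the abstract are thus the binary-power totals 2^20 and 2^28, rounded to round numbers.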