
Computers & the Human Brain / Neuromorphic Computing Timeline


1800 – 1850

The Most Famous Image in the Early History of Computing 1839

Portrait of Jacquard woven in silk on a Jacquard loom.

In 1839 the weaver Michel-Marie Carquillat, working for the firm of Didier, Petit et Cie in Lyon, France, wove in fine silk a Portrait of Joseph-Marie Jacquard. The image, including the caption and Carquillat’s name taking credit for the weaving, measures 55 x 34 cm; the full piece of silk including blank margins measures 85 x 66 cm.

This image, of which perhaps only about 20 examples survive, was woven on a Jacquard loom using 24,000 Jacquard cards, each of which had over 1000 hole positions. The process of mise en carte, or converting the image details to punched cards for the Jacquard mechanism, would have taken several workers many months for an image this large and detailed, as the woven image convincingly portrays superfine elements such as a translucent curtain over glass window panes.

The Jacquard loom did no computation, and for that reason it was not a digital device in the way we think of digital today. However, the method by which Jacquard stored information in punched cards, by either punching a hole in a standardized space on a card or leaving that space unpunched, is analogous to a zero or a one, or to an on/off switch. It was also an important conceptual step in the history of computing because the Jacquard method of storing information in punched cards, and of weaving a pattern by following the series of instructions recorded in a train of punched cards, was adopted by Charles Babbage in his plans for data and program input, and for data output and storage, in his general-purpose programmable computer, the Analytical Engine. Trains of Jacquard cards were programs in the modern sense of computer programs, though the word "program" did not acquire that meaning until after the development of electronic computers following World War II.

Once all the “programming” was completed, the process of weaving the image with its 24,000 punched cards would have taken more than eight hours, assuming that the weaver was working at the usual Jacquard loom speed of about forty-eight picks per minute, or roughly 2,880 per hour. More than once this woven image was mistaken for an engraving. The image was produced only to order, most likely in an exceptionally small number of examples. In 2012 the only publicly recorded examples were those in the Metropolitan Museum of Art, the Science Museum, London, The Art Institute of Chicago, and the Computer History Museum, Mountain View, California. The image was the subject of James Essinger's book, Jacquard's Web: How a Hand Loom Led to the Birth of the Information Age (2004).

To Charles Babbage the incredible sophistication of the information processing involved in the mise en carte — what we would call programming — of this exceptionally elaborate and beautiful image confirmed the potential of using punched cards for the input, programming, output and storage of information in his design and conception of the first general-purpose programmable computer, the Analytical Engine. The highly aesthetic result also confirmed to Babbage that machines were capable of amazingly complex and subtle processes, processes which might eventually emulate the subtlety of the human mind.

“In June 1836 Babbage opted for punched cards to control the machine [the Analytical Engine]. The principle was openly borrowed from the Jacquard loom, which used a string of punched cards to automatically control the pattern of a weave. In the loom, rods were linked to wire hooks, each of which could lift one of the longitudinal threads strung between the frame. The rods were gathered in a rectangular bundle, and the cards were pressed one at a time against the rod ends. If a hole coincided with a rod, the rod passed through the card and no action was taken. If no hole was present then the card pressed back the rod to activate a hook which lifted the associated thread, allowing the shuttle which carried the cross-thread to pass underneath. The cards were strung together with wire, ribbon or tape hinges, and fan-folded into large stacks to form long sequences. The looms were often massive and the loom operator sat inside the frame, sequencing through the cards one at a time by means of a foot pedal or hand lever. The arrangement of holes on the cards determined the pattern of the weave.

“As well as patterned textiles for ordinary use, the technique was used to produce elaborate and complex images as exhibition pieces. One well-known piece was a shaded portrait of Jacquard seated at table with a small model of his loom. The portrait was woven in fine silk by a firm in Lyon using a Jacquard punched-card loom. . . . Babbage was much taken with the portrait, which is so fine that it is difficult to tell with the naked eye that it is woven rather than engraved. He hung his own copy of the prized portrait in his drawing room and used it to explain his use of the punched cards in his Engine. The delicate shading, crafted shadows and fine resolution of the Jacquard portrait challenged existing notions that machines were incapable of subtlety. Gradations of shading were surely a matter of artistic taste rather than the province of machinery, and the portrait blurred the clear lines between industrial production and the arts. Just as the completed section of the Difference Engine played its role in reconciling science and religion through Babbage’s theory of miracles, the portrait played its part in inviting acceptance for the products of industry in a culture in which aesthetics was regarded as the rightful domain of manual craft and art” (Swade, The Cogwheel Brain. Charles Babbage and the Quest to Build the First Computer [2000] 107-8).



1850 – 1875

Alfred Smee Speculates About a Logic Machine that Might Occupy a Space Larger than London 1851

In his book The Process of Thought Adapted to Words and Language, published in 1851, English surgeon and writer Alfred Smee suggested the possibility of information storage and retrieval by a mechanical logical machine operating analogously to the human mind. This was an attempt to produce an artificial system of reasoning based upon neurological principles, which were then primarily a matter of speculation. The problem was that Smee's hypothetical “electro-biological” machine, built out of mechanical parts, which he conceived only in general terms and had no way of engineering or building even in part, might have occupied a space larger than London.


George Parker Bidder, One of the Most Remarkable Human Computers 1856

In 1856 George Parker Bidder, an engineer and one of the most remarkable human computers of all time, published his paper on Mental Calculation. (See Reading 3.1)


1910 – 1920

Torres y Quevedo Invents the First Decision-Making Automaton 1912 – 1915

In 1912 Spanish civil engineer and mathematician Leonardo Torres y Quevedo, Director of the Laboratory of Applied Mechanics at the Ateneo Científico, Literario y Artístico de Madrid, built the first decision-making automaton — a chess-playing machine that pitted the machine’s rook and king against the king of a human opponent.  Torres's machine, which he called El Ajedrecista (The Chessplayer), used electromagnets under the board to "play" the endgame of rook and king against the lone king.

"Well, not precisely play. But the machine could, in a totally unassisted and automated fashion, deliver mate with King and Rook against King. This was possible regardless of the initial position of the pieces on the board. For the sake of simplicity, the algorithm used to calculate the positions didn't always deliver mate in the minimum amount of moves possible, but it did mate the opponent flawlessly every time. The machine, dubbed El Ajedrecista (Spanish for “the chessplayer”), was built in 1912 and made its public debut during the Paris World Fair of 1914, creating great excitement at the time. It used a mechanical arm to make its moves and electrical sensors to detect its opponent's replies." (http://www.chessbase.com/newsprint.asp?newsid=1799, accessed 10-31-2012).

The implications of Torres's machines were not lost on observers. On November 6, 1915 Scientific American published in its Supplement No. 2079, pp. 296-298, an illustrated article entitled "Torres and his Remarkable Automatic Devices. He Would Substitute Machinery for the Human Mind."


1940 – 1950

McCulloch & Pitts Publish the First Mathematical Model of a Neural Network 1943

In 1943 Warren McCulloch, an American neurophysiologist and cybernetician at the University of Illinois at Chicago, and Walter Pitts, a self-taught logician and cognitive psychologist, published “A Logical Calculus of the Ideas Immanent in Nervous Activity,” describing the McCulloch–Pitts neuron, the first mathematical model of a neural network.

Building on ideas in Alan Turing’s “On Computable Numbers”, McCulloch and Pitts's paper provided a way to describe brain functions in abstract terms, and showed that simple elements connected in a neural network can have immense computational power. The paper received little attention until its ideas were applied by John von Neumann, Norbert Wiener, and others.
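The McCulloch–Pitts model reduces a neuron to a threshold logic unit: it emits 1 when the weighted sum of its binary inputs reaches a threshold and 0 otherwise, so small networks of such units can compute logical functions. The sketch below is a minimal illustration of that idea; the Python function and the gate examples are modern paraphrases, not notation from the 1943 paper.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Elementary logic gates realized as single neurons -- the kind of "logical calculus"
# of all-or-none nervous activity the paper describes.
AND = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcculloch_pitts_neuron([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and NOT(1) == 0
```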


Von Neumann Privately Circulates the First Theoretical Description of a Stored-Program Computer June 30, 1945

On June 30, 1945 mathematician and physicist John von Neumann of Princeton privately circulated copies of his First Draft of a Report on the EDVAC to twenty-four people connected with the EDVAC project. This document, written between February and June 1945, provided the first theoretical description of the basic details of a stored-program computer—what later became known as the von Neumann architecture.

To avoid the government's security classification, and to avoid engineering problems that might detract from the logical considerations under discussion, von Neumann avoided mentioning specific hardware. Influenced by Alan Turing and by Warren McCulloch and Walter Pitts, von Neumann patterned the machine to some degree after human thought processes.

In November 2013 the text of von Neumann's report was available at this link
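The defining idea of the report is that instructions and data reside in the same memory and are handled by a single control unit that repeatedly fetches, decodes, and executes them. The toy machine below is a minimal sketch of that fetch-execute cycle; the instruction names, memory layout, and accumulator register are invented for illustration and do not correspond to the EDVAC's actual order code.

```python
# Toy stored-program machine: the program and its data share one memory.
memory = [
    ("LOAD", 7),    # acc = memory[7]
    ("ADD", 8),     # acc = acc + memory[8]
    ("STORE", 9),   # memory[9] = acc
    ("HALT", 0),
    0, 0, 0,        # unused cells
    2, 3, 0,        # data at addresses 7 and 8, result cell at 9
]

acc, pc = 0, 0
while True:
    op, addr = memory[pc]            # fetch the next instruction
    pc += 1
    if op == "LOAD":                 # decode and execute
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[9])                     # -> 5; because instructions are data, they too could be modified
```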


The Macy Conferences on Cybernetics Occur 1946 – 1953

At the initiative of Warren McCulloch, from 1946 to 1953 the Macy Conferences were held in New York with the aim of laying the foundations for a general science of the workings of the human mind. They resulted in breakthroughs in systems theory, cybernetics, and what eventually became known as cognitive science.


One of the First Studies of Pattern Recognition 1947

In 1947 American logician Walter Pitts and psychiatrist and neuroscientist Warren S. McCulloch published "How We Know Universals: The Perception of Auditory and Visual Forms," Bulletin of Mathematical Biophysics 9 (1947) 127-147. In this expansion of McCulloch and Pitts's "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943), Pitts and McCulloch developed their earlier ideas by showing how the anatomy of the cerebral cortex might accommodate the identification of a form independent of its angular size in the image, and other such operations in perception.


Norbert Wiener Issues "Cybernetics", the First Widely Distributed Book on Electronic Computing 1948

"Use the word 'cybernetics', Norbert, because nobody knows what it means. This will always put you at an advantage in arguments."

— Widely quoted: attributed to Claude Shannon in a letter to Norbert Wiener in the 1940s.

In 1948 MIT mathematician Norbert Wiener published Cybernetics or Control and Communication in the Animal and the Machine, a widely circulated and influential book that applied theories of information and communication to both biological systems and machines. Computer-related words with the “cyber” prefix, including "cyberspace," originate from Wiener’s book. Cybernetics was also the first conventionally published book to discuss electronic digital computing. Writing as a mathematician rather than an engineer, Wiener kept his discussion theoretical rather than specific. Strangely, the first edition of the book was published in English in Paris by the press of Hermann et Cie. The first American edition was printed offset from the French sheets and issued by John Wiley in New York, also in 1948. I have never seen an edition printed or published in England.

Independently of Claude Shannon, Wiener conceived of communications engineering as a branch of statistical physics and applied this viewpoint to the concept of information. Wiener's chapter on "Time series, information, and communication" contained the first publication of Wiener's formula describing the probability density of continuous information. This was remarkably close to Shannon's formula dealing with discrete time published in A Mathematical Theory of Communication (1948). Cybernetics also contained a chapter on "Computing machines and the nervous system." This was a theoretical discussion, influenced by McCulloch and Pitts, of differences and similarities between information processing in the electronic computer and the human brain. It contained a discussion of the difference between human memory and the different computer memories then available. Tacked on at the end of Cybernetics were speculations by Wiener about building a chess-playing computer, predating Shannon's first paper on the topic.

Cybernetics is a peculiar, rambling blend of popular and highly technical writing, ranging from history to philosophy, to mathematics, to information and communication theory, to computer science, and to biology. Reflecting the amazingly wide range of the author's interests, it represented an interdisciplinary approach to information systems both in biology and machines. It influenced a generation of scientists working in a wide range of disciplines. In it were the roots of various elements of computer science, which by the mid-1950s had broken off from cybernetics to form their own specialties. Among these separate disciplines were information theory, computer learning, and artificial intelligence.

It is probable that Wiley had Hermann et Cie supervise the typesetting because they specialized in books on mathematics.  Hermann printed the first edition by letterpress; the American edition was printed offset from the French sheets. Perhaps because the typesetting was done in France Wiener did not have the opportunity to read proofs carefully, as the first edition contained many typographical errors which were repeated in the American edition, and which remained uncorrected through the various printings of the American edition until a second edition was finally published by John Wiley and MIT Press in 1961. 

Though the book contained a lot of technical mathematics, and was not written for a popular audience, the first American edition went through at least 5 printings during 1948,  and several later printings, most of which were probably not read in their entirety by purchasers. Sales of Wiener's book were helped by reviews in wide circulation journals such as the review in TIME Magazine on December 27, 1948, entitled "In Man's Image." The reviewer used the word calculator to describe the machines; at this time the word computer was reserved for humans.

"Some modern calculators 'remember' by means of electrical impulses circulating for long periods around closed circuits. One kind of human memory is believed to depend on a similar system: groups of neurons connected in rings. The memory impulses go round & round and are called upon when needed. Some calculators use 'scanning' as in television. So does the brain. In place of the beam of electrons which scans a television tube, many physiologists believe, the brain has 'alpha waves': electrical surges, ten per second, which question the circulating memories.

"By copying the human brain, says Professor Wiener, man is learning how to build better calculating machines. And the more he learns about calculators, the better he understands the brain. The cyberneticists are like explorers pushing into a new country and finding that nature, by constructing the human brain, pioneered there before them.

"Psychotic Calculators. If calculators are like human brains, do they ever go insane? Indeed they do, says Professor Wiener. Certain forms of insanity in the brain are believed to be caused by circulating memories which have got out of hand. Memory impulses (of worry or fear) go round & round, refusing to be suppressed. They invade other neuron circuits and eventually occupy so much nerve tissue that the brain, absorbed in its worry, can think of nothing else.

"The more complicated calculating machines, says Professor Wiener, do this too. An electrical impulse, instead of going to its proper destination and quieting down dutifully, starts circulating lawlessly. It invades distant parts of the mechanism and sets the whole mass of electronic neurons moving in wild oscillations" (http://www.time.com/time/magazine/article/0,9171,886484-2,00.html, accessed 03-05-2009).

Presumably the commercial success of Cybernetics encouraged Wiley to publish Berkeley's Giant Brains, or Machines that Think in 1949.

♦ In October 2012 I offered for sale the copy of the first American printing of Cybernetics that Wiener inscribed to Jerry Wiesner, the head of the laboratory at MIT where Wiener conducted his research. This was the first inscribed copy of the first edition (either the French or American first) that I had ever seen on the market, though the occasional signed copy of the American edition did turn up. Having read our catalogue description of that item, my colleague Arthur Freeman emailed me this story pertinent to Wiener's habit of not inscribing books:

"Norbert, whom I grew up nearby (he visited our converted barn in Belmont, Mass., constantly to play frantic theoretical blackboard math with my father, an economist/statistician at MIT, which my mother, herself a bit better at pure math, would have to explain to him later), was a notorious cheapskate. His wife once persuaded him to invite some colleagues out for a beer at the Oxford Grill in Harvard Square, which he did, and after a fifteen-minute sipping session, he got up to go, and solemnly collected one dime each from each of his guests. So when *Cybernetics* appeared on the shelves of the Harvard Coop Bookstore, my father was surprised and flattered that Norbert wanted him to have an inscribed copy, and together they went to Coop, where Norbert duly picked one out, wrote in it, and carried it to the check-out counter--where he ceremoniously handed it over to my father to pay for. This was a great topic of family folklore. I wonder if Jerry Wiesner paid for his copy too?"


Comparing the Functions of Genes to Self-Reproducing Automata September 20, 1948

At the Hixon Symposium in Pasadena, California on September 20, 1948 John von Neumann spoke on The General and Logical Theory of Automata. Within this speech von Neumann compared the functions of genes to self-reproducing automata. This was the first of a series of five works (some posthumous) in which von Neumann attempted to develop a precise mathematical theory allowing comparison of computers and the human brain.

“For instance, it is quite clear that the instruction I is roughly effecting the functions of a gene. It is also clear that the copying mechanism B performs the fundamental act of reproduction, the duplication of the genetic material, which is clearly the fundamental operation in the multiplication of living cells. It is also easy to see how arbitrary alterations of the system E, and in particular of I, can exhibit certain typical traits which appear in connection with mutation, which is lethality as a rule, but with a possibility of continuing reproduction with a modification of traits.” (pp. 30-31).

Molecular biologist Sydney Brenner read this brief discussion of the gene within the context of information in the proceedings of the Hixon Symposium, published in 1951. Later he wrote about it in his autobiography:

“The brilliant part of this paper in the Hixon Symposium is his description of what it takes to make a self-reproducing machine. Von Neumann shows that you have to have a mechanism not only of copying the machine, but of copying the information that specifies the machine. So he divided the machine--the automaton as he called it--into three components; the functional part of the automaton, a decoding section which actually takes a tape, reads the instructions and builds the automaton; and a device that takes a copy of this tape and inserts it into the new automaton. . . . I think that because of the cultural differences between most biologists on the one hand, and physicists and mathematicians on the other, it had absolutely no impact at all. Of course I wasn’t smart enough to really see then that this is what DNA and the genetic code was all about. And it is one of the ironies of this entire field that were you to write a history of ideas in the whole of DNA, simply from the documented information as it exists in the literature--that is, a kind of Hegelian history of ideas--you would certainly say that Watson and Crick depended upon von Neumann, because von Neumann essentially tells you how it’s done. But of course no one knew anything about the other. It’s a great paradox to me that in fact this connection was not seen” (Brenner, My Life, 33-36).
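Brenner's summary identifies the three roles in von Neumann's scheme: a constructor that builds whatever a description tape specifies, a copier that duplicates the tape, and a control that inserts the copy into the offspring. The sketch below is an abstract toy rendering of that division of labor; the class and method names are mine, and the physical details of construction are deliberately elided.

```python
class Automaton:
    """Toy self-reproducer: constructor (A), tape copier (B), and control (C) driven by a tape (I)."""

    def __init__(self, tape=None):
        self.tape = tape                     # I: the description of the machine (the gene analogue)

    def construct(self, description):        # A: build a new machine from a description
        return Automaton()                   # (details of physical construction abstracted away)

    def copy_tape(self):                     # B: duplicate the description itself
        return list(self.tape)

    def reproduce(self):                     # C: run A, then B, then insert the copied tape into the offspring
        offspring = self.construct(self.tape)
        offspring.tape = self.copy_tape()    # an alteration made here would be inherited, like a mutation
        return offspring

parent = Automaton(tape=["build one automaton"])
child = parent.reproduce()
assert child.tape == parent.tape and child is not parent
```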


A Neurosurgeon Discusses the Differences between Computers and the Human Brain June 9, 1949

On June 9, 1949 Sir Geoffrey Jefferson, a neurological surgeon at Manchester, England, delivered a speech entitled The Mind of Mechanical Man in which he discussed the differences between computers and the human brain. (See Reading 11.1).


1950 – 1960

"Can Man Build a Superman?" January 23, 1950

The cover by Boris Artzybasheff on the January 23, 1950 issue of TIME Magazine depicted the Harvard Mark III partly electronic and partly electromechanical computer as a Naval officer in Artzybasheff's "bizarrely anthropomorphic" style. The caption under the image read, "Mark III. Can Man Build a Superman?" The cover story of the magazine was entitled "The Thinking Machine."

The Mark III, delivered to the U.S. Naval Proving Ground at Dahlgren, Virginia in March 1950, operated at 250 times the speed of the Harvard Mark I (1944).

Among its interesting elements, the TIME article included an early use of the word computer for machines rather than people. The review of Wiener's Cybernetics published in TIME in December 1948 had referred to the machines as calculators.

"What Is Thinking? Do computers think? Some experts say yes, some say no. Both sides are vehement; but all agree that the answer to the question depends on what you mean by thinking.

"The human brain, some computermen explain, thinks by judging present information in the light of past experience. That is roughly what the machines do. They consider figures fed into them (just as information is fed to the human brain by the senses), and measure the figures against information that is "remembered." The machine-radicals ask: 'Isn't this thinking?'

"Their opponents retort that computers are mere tools that do only what they are told. Professor [Howard] Aiken, a leader of the conservatives, admits that the machines show, in rudimentary form at least, all the attributes of human thinking except one: imagination. Aiken cannot define imagination, but he is sure that it exists and that no machine, however clever, is likely to have any."

"Nearly all the computermen are worried about the effect the machines will have on society. But most of them are not so pessimistic as [Norbert] Wiener. Professor Aiken thinks that computers will take over intellectual drudgery as power-driven tools took over spading and reaping. Already the telephone people are installing machines of the computer type that watch the operations of dial exchanges and tot up the bills of subscribers.

"Psychotic Robots. In the larger, "biological" sense, there is room for nervous speculation. Some philosophical worriers suggest that the computers, growing superhumanly intelligent in more & more ways, will develop wills, desires and unpleasant foibles' of their own, as did the famous robots in Capek's R.U.R.

"Professor Wiener says that some computers are already "human" enough to suffer from typical psychiatric troubles. Unruly memories, he says, sometimes spread through a machine as fears and fixations spread through a psychotic human brain. Such psychoses may be cured, says Wiener, by rest (shutting down the machine), by electric shock treatment (increasing the voltage in the tubes), or by lobotomy (disconnecting part of the machine).

"Some practical computermen scoff at such picturesque talk, but others recall odd behavior in their own machines. Robert Seeber of I.B.M. says that his big computer has a very human foible: it hates to wake up in the morning. The operators turn it on, the tubes light up and reach a proper temperature, but the machine is not really awake. A problem sent through its sleepy wits does not get far. Red lights flash, indicating that the machine has made an error. The patient operators try the problem again. This time the machine thinks a little more clearly. At last, after several tries, it is fully awake and willing to think straight.

"Neurotic Exchange. Bell Laboratories' Dr. [Claude] Shannon has a similar story. During World War II, he says, one of the Manhattan dial exchanges (very similar to computers) was overloaded with work. It began to behave queerly, acting with an irrationality that disturbed the company. Flocks of engineers, sent to treat the patient, could find nothing organically wrong. After the war was over, the work load decreased. The ailing exchange recovered and is now entirely normal. Its trouble had been 'functional': like other hard-driven war workers, it had suffered a nervous breakdown" (quotations from http://www.time.com/time/magazine/article/0,9171,858601-7,00.html, accessed 03-05-2009).


The Paris symposium, "Les Machines à calculer et la pensée humaine," Occurs January 8 – January 13, 1951

From January 8-13, 1951 the Paris symposium Les Machines à calculer et la pensée humaine (Calculating Machines and Human Thought) took place at l'Institut Blaise Pascal. Unlike the other early computer conferences, no demonstration of a stored-program electronic computer occurred. Louis Couffignal demonstrated the prototype of his non-stored-program machine.

Hook & Norman, Origins of Cyberspace (2002) no. 526.


Possibly the First Artificial Self-Learning Machine January 1952

In January 1952 Marvin Minsky, a graduate student at the Harvard University Psychological Laboratories, implemented the SNARC (Stochastic Neural Analog Reinforcement Calculator). This randomly connected network of Hebb synapses, which when "rewarded" reinforced recently used pathways, was the first connectionist neural-network learning machine. The SNARC, implemented using vacuum tubes, was possibly the first artificial self-learning machine.

Minsky, "A Neural-Analogue Calculator Based upon a Probability Model of Reinforcement," Harvard University Psychological Laboratories, Cambridge, Massachusetts, January 8, 1952.  This reference came from Minsky's bibliography of his selected publications on his website in December 2013. He did not include it in his bibliography on AI in Computers and Thought (1963), leading me to believe that some or all of the information may have been included in his Princeton Ph.D. dissertation, Neural Nets and the Brain Model Problem (1954), which was also unpublished.
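The reinforcement principle attributed to the SNARC can be caricatured in a few lines: each synapse holds a probability of being used, and when a trial ends in reward the synapses that were just used have their probabilities increased. The toy sketch below is my own construction for illustration, not a model of Minsky's actual vacuum-tube circuit.

```python
import random

# Two competing pathways out of node A; each "synapse" stores a probability of being chosen.
weights = {("A", "B"): 0.5, ("A", "C"): 0.5}

def choose_path():
    return ("A", "B") if random.random() < weights[("A", "B")] else ("A", "C")

def reinforce(path, rewarded, lr=0.1):
    """Strengthen a pathway used on a rewarded trial, at the expense of its rival."""
    if rewarded:
        other = ("A", "C") if path == ("A", "B") else ("A", "B")
        weights[path] = min(1.0, weights[path] + lr)
        weights[other] = max(0.0, weights[other] - lr)

for _ in range(50):                              # suppose the A->B pathway is the one that earns reward
    path = choose_path()
    reinforce(path, rewarded=(path == ("A", "B")))

print(weights)                                   # the rewarded pathway drifts toward probability 1.0
```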


To What Extent Can Human Mental Processes be Duplicated by Switching Circuits? February 1953

In 1953 Bell Laboratories engineer John Meszar published "Switching Systems as Mechanical Brains," Bell Laboratories Record XXXI (1953) 63-69.

This paper, written in the earliest days of automatic switching systems, when few electronic computers existed, and, for the most part, human telephone operators served as "highly intelligent and versatile switching systems," raised the question of whether certain aspects of human thought are computable and others are not. Meszar argued for "the necessity of divorcing certain mental operations from the concept of thinking," in order to "pave the way for ready acceptance of the viewpoint that automatic systems can accomplish many of the functions of the human brain." 

"We are faced with a basic dilemma; we are forced either to admit the possibility of mechanized thinking, or to restrict increasingly our concept of thinking. However, as is apparent from this article, many of us do not find it hard to make the choice. The choice is to reject the possibility of mechanized thinking but to admit readily the necessity for an orderly declassification of many areas of mental effort from the high level of thinking. Machines will take over such areas, whether we like it or not.

"This declassification of wide areas of mental effort should not dismay any one of us. It is not an important gain for those who are sure that even as machines have displaced muscles, they will also take over the functions of the 'brain.' Neither is it a real loss for those who feel that there is something hallowed about all functions of the human mind. What we are giving up to the machines— some of us gladly, others reluctantly— are the uninteresting flat lands of routine mental chores, tasks that have to be performed according to rigorous rules. The areas we are holding unchallenged are the dominating heights of creative mental effort, which comprise the ability to speculate, to invent, to imagine, to philosophize, the dream better ways for tomorrow than exist today. These are the mental activities for which rigorous rules cannot be formulated— they constitute real thinking, whose mechanization most of us cannot conceive" (p. 69).


"The Design of Machines to Simulate the Behavior of the Human Brain" March 1955 – December 1956

At the 1955 Institute of Radio Engineers (IRE) Convention held in New York in March the Professional Group on Electronic Computers (PGEC) sponsored a symposium on "The Design of Machines to Simulate the Behavior of the Human Brain." The four panel members were Warren McCulloch of MIT, Anthony G. Oettinger of Harvard, Otto H. Schmitt of the University of Minnesota, and Nathaniel Rochester of IBM. The moderator was Howard E. Tompkins, then of Burroughs Corporation.

After the panel members read prepared statements and a brief discussion followed, a group of invited questioners cross-examined the panel members. The invited questioners were Marvin Minsky, then of Harvard, Morris Rubinoff of the University of Pennsylvania, Elliot L. Gruenberg of the W. L. Maxson Corporation, John Mauchly of what was then Remington Rand, M. E. Maron of IBM, and Walter Pitts of MIT. The transcript of the symposium was edited by the speakers with the help of Howard Tompkins, and published in the IRE Transactions on Electronic Computers, December 1956, 240-255.

From the transcript of the symposium, which was available online when I wrote this entry in April 2014, we see that many of the issues of current interest in 2014 were being discussed in 1955-56. McCulloch began the symposium with the following very quotable statement:

"Since nature has given us the working model, we need not ask, theoretically, whether machines can be built to do what brains can do with information. But it will be a long time before we can match this three-pint, three-pound, twenty-five-watt computer, with its memory storing 10¹³ or 10 [to the 15th power] bits with a mean half-life of half a day and successful regeneration of 5 per cent of its traces for sixty years, operating continuously wih its 10 [to the 10th power] dynamically stable and unreplaceable relays to preserve itself by governing its own activity and stabilizing the state of the whole body and its relation to its world by reflexive and appetitive negative feedback."

As I read through this discussion, I concluded that it was perhaps the best summary of ideas on the computer and the human brain in 1955-1956. As quoting it in its entirety would have been totally impractical, I instead listed the section headings and refer those interested to the original text:

McCulloch: "Brain," A Computer With Negative Feedback

Oettinger: Contrasts and Similarities

Rochester: Simulation of Brain Action on Computers

Schmitt: The Brain as a Different Computer

Discussion:

Chemical Action, Too

Cell Assemblies

Why Build a Machine "Brain"?

Is Systematicness Undesirable?

Growth as a Type of Learning

What Does Simulation Prove?

The Semantics of Reproduction

Where is the Memory?

"Distributed Memories"

"Memory Half-Life"

Analog vs. Digital

Speed vs. Equipment

The Neurophysiologists' Contribution

Pattern Recognition

Creative Thinking by Machines?

What Model Do We Want?


Intelligence Amplification by Machines 1956

In 1956 English psychiatrist and cybernetician W[illiam] Ross Ashby wrote of intelligence amplification by machines in his book, An Introduction to Cybernetics.


Chomsky's Hierarchy of Syntactic Forms September 1956

In September 1956 American linguist, philosopher, cognitive scientist, and activist Noam Chomsky published "Three Models for the Description of Language" in IRE Transactions on Information Theory IT-2, 113-24. In the paper Chomsky introduced two key concepts, the first being “Chomsky’s hierarchy” of syntactic forms, which was widely applied in the construction of artificial computer languages.

“The Chomsky hierarchy places regular (or linear) languages as a subset of the context-free languages, which in turn are embedded within the set of context-sensitive languages also finally residing in the set of unrestricted or recursively enumerable languages. By defining syntax as the set of rules that define the spatial relationships between the symbols of a language, various levels of language can be also described as one-dimensional (regular or linear), two-dimensional (context-free), three-dimensional (context sensitive) and multi-dimensional (unrestricted) relationships. From these beginnings, Chomsky might well be described as the ‘father of formal languages’ ” (Lee, Computer Pioneers [1995] 164). 
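A concrete way to see the gap between adjacent levels of the hierarchy (my own example, not one from Chomsky's paper) is the language aⁿbⁿ: it is context-free but not regular, so a finite-state pattern cannot enforce that the two counts match, while a recognizer with a single counter, standing in for a pushdown stack, can.

```python
import re

# A regular (finite-state) pattern accepts any run of a's followed by b's,
# but it cannot require the two runs to be the same length.
regular = re.compile(r"^a*b*$")

def recognize_anbn(s):
    """Recognize a^n b^n with one counter -- a deterministic pushdown-style check."""
    i, n = 0, 0
    while i < len(s) and s[i] == "a":
        i, n = i + 1, n + 1
    while i < len(s) and s[i] == "b":
        i, n = i + 1, n - 1
    return i == len(s) and n == 0

assert regular.match("aaabb")          # accepted by the regular pattern...
assert not recognize_anbn("aaabb")     # ...but rejected once the counts must balance
assert recognize_anbn("aaabbb")
```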

The second concept Chomsky presented here was his transformational-generative grammar theory, which attempted to define rules that could generate the infinite number of grammatical (well-formed) sentences possible in a language, and sought to identify rules (transformations) that govern relations between parts of a sentence, on the assumption that beneath such aspects as word order a fundamental deep structure exists. As Chomsky expressed it in his abstract of the paper,

"We investigate several conceptions of linguistic structure to determine whether or not they can provide simple and “revealing” grammars that generate all of the sentences of English and only these. We find that no finite-state Markov process [a random process whose future probabilities are determined by its most recent values] that produces symbols with transition from state to state can serve as an English grammar. We formalize the notion of “phrase structure” and show that this gives us a method for describing language which is essentially more powerful. We study the properties of a set of grammatical transformations, showing that the grammar of English is materially simplified if phrase-structure is limited to a kernel of simple sentences from which all other sentences are constructed by repeated transformation, and that this view of linguistic structure gives a certain insight into the use and understanding of language" (p. 113).

Minsky, "A Selected Descriptor-Indexed Bibliography to the Literature on Artificial Intelligence" in Feigenbaum & Feldman eds., Computers and Thought (1963) 453-523, no. 484. Hook & Norman, Origins of Cyberspace (2002) no. 531.


Von Neumann's "The Computer and the Brain" 1958

Because of failing health, John von Neumann did not finish his last book, The Computer and the Brain. The book, issued posthumously in 1958, was a published version of the Silliman Lectures which von Neumann was invited to deliver at Yale in 1956. Although von Neumann prepared the lectures by March 1956, he was already too sick to travel to New Haven and could not deliver them as scheduled. He continued to work on the manuscript until his death on February 8, 1957. The manuscript remained unfinished, as his widow Klara von Neumann explained in her preface to the posthumous edition.

Von Neumann's 82 page essay was divided into two parts. The first part discussed the computer: its procedures, control mechanisms, and other characteristics. The second part focused on the brain, systematically comparing the operations of the brain with what was then state-of-the-art in computer science. In what seems to have been the groundwork for a third part—but it was not organized as a separate part—von Neumann drew some conclusions from the comparison with respect to the role of code and language. Von Neumann wrote that "A deeper mathematical study of the nervous system may alter our understanding of mathematics and logic."


The Perceptron November 1958 – 1960

In November 1958 Frank Rosenblatt invented the Perceptron, or Mark I, at Cornell University. Completed in 1960, this was the first computer that could learn new skills by trial and error, using a type of neural network that simulated human thought processes.
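The trial-and-error learning the Perceptron performed can be summarized by the perceptron learning rule: present an example, compare the unit's output with the desired output, and nudge the weights in proportion to the error. The sketch below illustrates that rule on a tiny, linearly separable dataset; the learning rate, epoch count, and OR-gate example are illustrative choices, not details of Rosenblatt's Mark I hardware.

```python
# Perceptron learning rule: adjust the weights whenever the prediction is wrong.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            prediction = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - prediction                      # -1, 0, or +1
            w = [w[0] + lr * error * x[0], w[1] + lr * error * x[1]]
            b += lr * error
    return w, b

# Learn logical OR from examples; being linearly separable, it is learnable by a single unit.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data])   # -> [0, 1, 1, 1]
```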


Human Versus Machine Intelligence and Communication (1959) 1959

"Somewhat the same problem arises in communicating with a machine entity that would arise in communicating with a person of an entirely different language background than your own. A system of logical definition and translation would have to be available. In order that meanings should not be lost, such a system of translation would also need to be precise. We are all familiar with the unhappy results of language translations which are either lacking in precision or where suitable words of equivalent meaning cannot be found. Likewise, translating into a machine language cannot be anything but an exact operation. Machines even more than people must be addressed with clarity and unambiguity, for machines cannot improvise on their own or imagine that about which they have not been specifically informed, as a human might do within reasonable limits of error. . . .

"We must now ascertain how concepts are formulated within the framework of computer language. For analogy, let us first consider the manner in which instructions are usually given to a non-mechanical entity. When we instruct, for example, a human being, we are aided by the fact that the human is usually able to fill in gaps in our instructions through acumen acquired from his own past experiences. It is seldom necessary that instructions be either detailed or literal, although we may have lost sight of this fact.

"The computer in a correlate example is a mechanical 'being' which must be instructed at each and every step. But it can be given a very long list of instructions upon which it can be expected to subsequently act with great speed and accuracy and with untiring repetition. Machine traits are: low comprehension, high retention, extreme reliability, and tremendous speed. The use of superlatives here to describe these traits is not exaggerative. Since speed becomes in practice the equivalent of number, the machine might be, and has sometimes been, equated to legions — an army, if you will — of lowgrade morons whose conceptualization is entirely literal, who remember as long as is necessary or as you desire them to, whose loyalty and subservience is complete, who require no holidays, no spurious incentives, no morale programs, pensions, not even gratitude for past service, and who seemingly never tire of doing elementary repetitive tasks such as typing, accounting, bookkeeping, arithmetic, filling in forms, and the like. In about all these respects the machine may be seen to be the exact opposite of nature's loftiest creature, the intellligent human being, who becomes bored with the petty and repetitious, who is unreliable, who wanders from the task for the most trivial reasons, who gets out of humor, who forgets, who requires constant incentives and rewards, who improvises on his own even when to do so is impertinent to the objectives being undertaken, and who in summary (let's face it) is unsuitable to most forms of industry as the latter are ideally and practically conceived in our times. It becomes apparent in retrospect that the only excuse we might ever have had for employing him to do many of civilization's more literal and repetitious tasks was the absence of something more efficient with which to replace him!

"It is not the purpose of this volume to explore further the ramifications of the above statements of fact. . . ."(Nett & Hetzler, An Introduction to Electronic Data Processing [1959] 86-88).


One of the First Computer Models of How People Learn 1959 – 1961

For his 1960 Ph.D. thesis at Carnegie Institute of Technology (now Carnegie Mellon University), carried out under the supervision of Herbert A. Simon, computer scientist Edward Feigenbaum developed EPAM (Elementary Perceiver and Memorizer), a computer program designed to model elementary human symbolic learning. Feigenbaum's thesis first appeared as An Information Processing Theory of Verbal Learning, RAND Corporation Mathematics Division Report P-1817, October 9, 1959. In December 2013 a digital facsimile of Feigenbaum's personal corrected copy of the thesis was available from Stanford University's online archive of Feigenbaum papers at this link.

Feigenbaum's first publication on EPAM may have been "The Simulation of Verbal Learning Behavior," Proceedings of the Western Joint Computer Conference.... May 9-11, 1961 (1961) 121-32. In December 2013 a digital facsimile of this was also available at the same link.

Hook & Norman, Origins of Cyberspace (2002) no. 598.


The Inspiration for Artificial Neural Networks, Building Blocks of Deep Learning 1959

In 1959 Harvard neurophysiologists David H. Hubel and Torsten Wiesel inserted a microelectrode into the primary visual cortex of an anesthetized cat. They then projected patterns of light and dark on a screen in front of the cat, and found that some neurons fired rapidly when presented with lines at one angle, while others responded best to another angle. They called these neurons "simple cells." Still other neurons, which they termed "complex cells," responded best to lines of a certain angle moving in one direction. These studies showed how the visual system builds an image from simple stimuli into more complex representations. Many artificial neural networks, fundamental components of deep learning, may be viewed as cascading models of cell types inspired by Hubel and Wiesel's observations.
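That correspondence can be sketched roughly as follows (a toy model for illustration, not Hubel and Wiesel's own formalism): a "simple cell" responds to an oriented edge at a particular position, and a "complex cell" pools simple-cell responses across positions, so it keeps responding when the edge shifts — the pattern later echoed by convolution followed by pooling in artificial networks.

```python
# Toy simple/complex cell model over a one-dimensional "image".
EDGE_FILTER = [-1, +1]              # a simple cell tuned to a dark-to-light transition

def simple_cell(image, position):
    """Response of an edge-tuned simple cell at one position (rectified)."""
    return max(0, sum(f * image[position + i] for i, f in enumerate(EDGE_FILTER)))

def complex_cell(image):
    """A complex cell pools simple-cell responses over all positions: position tolerance."""
    return max(simple_cell(image, p) for p in range(len(image) - 1))

edge_on_left  = [0, 0, 1, 1, 1, 1]
edge_on_right = [0, 0, 0, 0, 1, 1]
print(complex_cell(edge_on_left), complex_cell(edge_on_right))   # both fire: 1 1
```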

For two later contributions Hubel and Wiesel shared the 1981 Nobel Prize in Physiology or Medicine with Roger W. Sperry.

". . . firstly, their work on development of the visual system, which involved a description of ocular dominance columns in the 1960s and 1970s; and secondly, their work establishing a foundation for visual neurophysiology, describing how signals from the eye are processed by the brain to generate edge detectors, motion detectors, stereoscopic depth detectors and color detectors, building blocks of the visual scene. By depriving kittens from using one eye, they showed that columns in the primary visual cortex receiving inputs from the other eye took over the areas that would normally receive input from the deprived eye. This has important implications for the understanding of deprivation amblyopia, a type of visual loss due to unilateral visual deprivation during the so-called critical period. These kittens also did not develop areas receiving input from both eyes, a feature needed for binocular vision. Hubel and Wiesel's experiments showed that the ocular dominance develops irreversibly early in childhood development. These studies opened the door for the understanding and treatment of childhood  cataracts  and strabismus. They were also important in the study of cortical plasticity.

"Furthermore, the understanding of sensory processing in animals served as inspiration for the SIFT descriptor (Lowe, 1999), which is a local feature used in computer vision for tasks such as object recognition and wide-baseline matching, etc. The SIFT descriptor is arguably the most widely used feature type for these tasks" (Wikipedia article on David H. Hubel, accessed 11-10-2014). 


1960 – 1970

Douglas Engelbart Issues "Augmenting Human Intellect: A Conceptual Framework" October 1962

In October 1962 Douglas Engelbart of the Stanford Research Institute, Menlo Park, California, completed his report, Augmenting Human Intellect: A Conceptual Framework, for the Director of Information Sciences, Air Force Office of Scientific Research. This report led J. C. R. Licklider of DARPA to fund SRI's Augmentation Research Center.


Ted Nelson Coins the Terms Hypertext, Hypermedia, and Hyperlink 1965

In 1965 self-styled "systems humanist" Ted Nelson (Theodor Holm Nelson) published "Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate," ACM '65: Proceedings of the 1965 20th National Conference, 84-100. In this paper Nelson coined the terms hypertext and hypermedia to refer to features of a computerized information system. He used the word "link" to refer to the logical connections that came to be associated with the word "hyperlink."

Nelson is also credited with inventing the word hyperlink, though its published origin is less specific:

"The term "hyperlink" was coined in 1965 (or possibly 1964) by Ted Nelson and his assistant Calvin Curtin at the start of Project Xanadu. Nelson had been inspired by "As We May Think", a popular essay by Vannevar Bush. In the essay, Bush described a microfilm-based machine (the Memex) in which one could link any two pages of information into a "trail" of related information, and then scroll back and forth among pages in a trail as if they were on a single microfilm reel. The closest contemporary analogy would be to build a list of bookmarks to topically related Web pages and then allow the user to scroll forward and backward through the list.

In a series of books and articles published from 1964 through 1980, Nelson transposed Bush's concept of automated cross-referencing into the computer context, made it applicable to specific text strings rather than whole pages, generalized it from a local desk-sized machine to a theoretical worldwide computer network, and advocated the creation of such a network. Meanwhile, working independently, a team led by Douglas Engelbart (with Jeff Rulifson as chief programmer) was the first to implement the hyperlink concept for scrolling within a single document (1966), and soon after for connecting between paragraphs within separate documents (1968)" (Wikipedia article on Hyperlink, accessed 08-29-2010). 

Wardrip-Fruin and Montfort, The New Media Reader (2003) 133-45.


Irving John Good Originates the Concept of the Technological Singularity 1965

In 1965 British mathematician Irving John Good, originally named Isidore Jacob Gudak, published "Speculations Concerning the First Ultraintelligent Machine," Advances in Computers, vol. 6 (1965) 31ff. This paper, published while Good held research positions at Trinity College, Oxford and at Atlas Computer Laboratory, originated the concept later known as "technological singularity," which anticipates the eventual existence of superhuman intelligence:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." 

Stanley Kubrick consulted Good regarding aspects of computing and artificial intelligence when filming 2001: A Space Odyssey (1968), one of whose principal characters was the paranoid HAL 9000 supercomputer.


1970 – 1980

The Brain-Computer Interface 1973

In 1973 computer scientist Jacques J. Vidal of UCLA coined the term brain-computer interface (BCI) in his paper "Toward Direct Brain-Computer Communication," Annual Review of Biophysics and Bioengineering 2: 157–80. doi:10.1146/annurev.bb.02.060173.001105. PMID 4583653.


The Neocognitron, Perhaps the Earliest Multilayered Artificial Neural Network 1979

The Neocognitron, a hierarchical multilayered artificial neural network which acquires the ability to recognize visual patterns through learning, may be one of the earliest examples of what was later called "deep learning." It was invented in 1979 by Kunihiko Fukushima while at NHK Science & Technical Research Laboratories (STRL, NHK放送技術研究所, NHK Hōsō Gijutsu Kenkyūjo), headquartered in Setagaya, Tokyo.  The Neocognitron was used for handwritten character recognition and other pattern recognition tasks.

"The extension of the neocognitron is still continuing. By the introduction of top-down connections and new learning methods, various kinds of neural networks have been developed. When two or more patterns are presented simultaneously, the "Selective Attention Model " can segment and recognize individual patterns in tern by switching its attention. Even if a pattern is partially occluded by other objects, we human beings can often recognize the occluded pattern. An extended neocognitron can now have such human-like ability and can, not only recognize occluded patterns, but also restore them by completing occluded contours" (http://personalpage.flsi.or.jp/fukushima/index-e.html.  accessed 11-10-2014).

K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics 36 (1980) 193-202.
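In modern terms the Neocognitron alternates feature-extracting "S-cell" layers with position-tolerant "C-cell" layers, a structure echoed by the convolution-plus-pooling stages of later convolutional networks. The sketch below shows a single such S/C stage in plain Python; the one-dimensional input, the single hand-set feature, and the pooling width are illustrative simplifications of Fukushima's multi-plane, self-organizing design.

```python
# One Neocognitron-style stage: an S-layer detects a local feature at every position,
# then a C-layer pools neighboring S-responses so the output tolerates small shifts.
def s_layer(image, kernel):
    k = len(kernel)
    return [max(0, sum(kernel[i] * image[p + i] for i in range(k)))
            for p in range(len(image) - k + 1)]

def c_layer(responses, pool=2):
    return [max(responses[p:p + pool]) for p in range(0, len(responses) - pool + 1, pool)]

image   = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]       # the same local pattern at two positions
feature = [-1, 1, -1]                           # responds to an isolated bright cell
print(c_layer(s_layer(image, feature)))         # -> [1, 0, 0, 1]: both occurrences survive pooling
```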


1980 – 1990

The First Book on Neuromorphic Computing 1984

In 1984 Carver Mead, professor of electrical engineering and computer science at Caltech, published Analog VLSI and Neural Systems. This was the first book on neuromorphic engineering or neuromorphic computing—a concept developed by Mead that involves

"... the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times the term neuromorphic has been used to describe analog, digital, and mixed-mode analog/digital VLSI and software systems that implement models of neural systems (for perceptionmotor control, or multisensory integration).

"A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change, " Wikipedia article on Neuromorphic engineering, accessed 01-01-2014.)


George A. Miller Begins WordNet, a Lexical Database 1985

In 1985 psychologist and cognitive scientist George A. Miller and his team at Princeton began development of WordNet, a lexical database for the English language.

WordNet

"groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. The purpose is twofold: to produce a combination of dictionary and thesaurus that is more intuitively usable, and to support automatic text analysis and artificial intelligence applications" (Wikipedia article on WordNet).

You can browse WordNet at http://wordnet.princeton.edu/.

WordNet has been used for a number of different purposes in information systems, including word sense disambiguation, information retrieval, automatic text classification, automatic text summarization, and even automatic crossword puzzle generation.
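WordNet can also be queried programmatically; one widely used interface is the corpus reader in the NLTK library, sketched below. This assumes the nltk package is installed and that its WordNet data has been downloaded; the choice of the query word "brain" is simply illustrative.

```python
# Querying WordNet through NLTK's corpus reader.
import nltk
nltk.download("wordnet", quiet=True)          # fetch the WordNet database once
from nltk.corpus import wordnet as wn

for synset in wn.synsets("brain")[:3]:        # the first few synsets containing "brain"
    print(synset.name(), "-", synset.definition())
    print("  synonyms: ", synset.lemma_names())
    print("  hypernyms:", [h.name() for h in synset.hypernyms()])
```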


The First Analog Silicon Retina 1988

With his student Misha Mahowald, computer scientist Carver Mead at Caltech described the first analog silicon retina in "A Silicon Model of Early Visual Processing," Neural Networks 1 (1988) 91−97. The silicon retina used analog electrical circuits to mimic the biological functions of rod cells, cone cells, and other non-photoreceptive cells in the retina of the eye. It was the first example of using continuously-operating floating gate (FG) programming/erasing techniques — in this case UV light — as the backbone of an adaptive circuit technology. The invention was not only potentially useful as a device for restoring sight to the blind, but it was also one of the most eclectic feats of electrical and biological engineering of the time.

"The approach to silicon models of certain neural computations expressed in this chip, and its successors, foreshadowed a totally new class of physically based computations inspired by the neural paradigm. More recent results demonstrated that a wide range of visual and auditory computations of enormous complexity can be carried out in minimal area and with minute energy dissipation compared with digital implementations" (http://www.cns.caltech.edu/people/faculty/mead/carver-contributions.pdf, accessed 12-23-2013).

In 1992 Mahowald received her Ph.D. under Mead at Caltech with her thesis, VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function. 


1990 – 2000

Development of Neural Networks 1993

In 1993 psychologist, neuroscientist and cognitive scientist James A. Anderson of Brown University, Providence, RI, published "The BSB Model: A simple non-linear autoassociative network" in M. Hassoun (ed.), Associative Neural Memories: Theory and Implementation (1993). Anderson's neural networks were applied to models of human concept formation, decision making, speech perception, and models of vision.

Anderson, J. A., Spoehr, K. T. and Bennett, D. J., "A study in numerical perversity: Teaching arithmetic to a neural network," in D. S. Levine and M. Aparicio (eds.), Neural Networks for Knowledge Representation and Inference (1994).


The Singularity January 1993

Mathematician, computer scientist and science fiction writer Vernor Vinge called the creation of the first ultraintelligent machine the Singularity in the January 1993 issue of Omni magazine. Vinge's follow-up paper entitled "What is the Singularity?", presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center (now the NASA John H. Glenn Research Center at Lewis Field) and the Ohio Aerospace Institute, March 30-31, 1993, and published in slightly changed form in the Winter 1993 issue of Whole Earth Review, contained the oft-quoted statement,

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."

"Vinge refines his estimate of the time scales involved, adding, 'I'll be surprised if this event occurs before 2005 or after 2030.

"Vinge continues by predicting that superhuman intelligences, however created, will be able to enhance their own minds faster than the humans that created them. 'When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid.' This feedback loop of self-improving intelligence, he predicts, will cause large amounts of technological progress within a short period of time" (Wikipedia article on Technological singularity, accessed 05-24-2009).


The First Defeat of a Human Champion by a Computer in a Game Competition 1994

At the Second Man-Machine World Championship in 1994, Chinook, a computer checkers program developed around 1989 at the University of Alberta by a team led by Jonathan Schaeffer, won the title when the human champion, Marion Tinsley, was forced to withdraw for health reasons. This was the first time that a computer program defeated a human champion in a game competition.

 "In 1996 the Guinness Book of World Records recognized Chinook as the first program to win a human world championship" (http://webdocs.cs.ualberta.ca/~chinook/project/, accessed 01-24-2010).


How Much Information is There? 1997

In 1997 Michael Lesk of Rutgers University attempted to calculate "How Much Information is There in the World?" He included information on how much information a human brain may be able to retain.


Kasparov Loses to Deep Blue: The First Time a Human Chess Player Loses to a Computer Under Tournament Conditions May 11, 1997

On May 11, 1997 Garry Kasparov, sometimes regarded as the greatest chess player of all time, resigned 19 moves into Game 6 against Deep Blue, an IBM RS/6000 SP supercomputer capable of calculating 200 million chess positions per second. This was the first time that a human world chess champion lost a match to a computer under tournament conditions.

The event, which took place at the Equitable Center in New York, was broadcast live from IBM's website via a Java viewer, and became the world's record "Net event" at the time.

"Since the emergence of artificial intelligence and the first computers in the late 1940s, computer scientists compared the performance of these 'giant brains' with human minds, and gravitated to chess as a way of testing the calculating abilities of computers. The game is a collection of challenging problems for minds and machines, but has simple rules, and so is perfect for such experiments.

"Over the years, many computers took on many chess masters, and the computers lost.

"IBM computer scientists had been interested in chess computing since the early 1950s. In 1985, a graduate student at Carnegie Mellon University, Feng-hsiung Hsu, began working on his dissertation project: a chess playing machine he called ChipTest. A classmate of his, Murray Campbell, worked on the project, too, and in 1989, both were hired to work at IBM Research. There, they continued their work with the help of other computer scientists, including Joe Hoane, Jerry Brody and C. J. Tan. The team named the project Deep Blue. The human chess champion won in 1996 against an earlier version of Deep Blue; the 1997 match was billed as a 'rematch.'

"The champion and computer met at the Equitable Center in New York, with cameras running, press in attendance and millions watching the outcome. The odds of Deep Blue winning were not certain, but the science was solid. The IBMers knew their machine could explore up to 200 million possible chess positions per second. The chess grandmaster won the first game, Deep Blue took the next one, and the two players drew the three following games. Game 6 ended the match with a crushing defeat of the champion by Deep Blue." 

"The AI crowd, too, was pleased with the result and the attention, but dismayed by the fact that Deep Blue was hardly what their predecessors had imagined decades earlier when they dreamed of creating a machine to defeat the world chess champion. Instead of a computer that thought and played chess like a human, with human creativity and intuition, they got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force. As Igor Aleksander, a British AI and neural networks pioneer, explained in his 2000 book, How to Build a Mind:  

" 'By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s. In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives.'

"It was an impressive achievement, of course, and a human achievement by the members of the IBM team, but Deep Blue was only intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better" (Gary Kasparov, "The Chess Master and the Computer," The New York Review of Books, 57, February 11, 2010).


Using Neural Networks for Word Sense Disambiguation 1998

In 1998 cognitive scientist and entrepreneur Jeffrey Stibel, psychologist and neuroscientist James A. Anderson, and others from the Department of Cognitive and Linguistic Sciences at Brown University created a word sense disambiguator using George A. Miller's WordNet lexical database.

Stibel and others applied this technology in Simpli, "an early search engine that offered disambiguation to search terms. A user could enter in a search term that was ambiguous (e.g., Java) and the search engine would return a list of alternatives (coffee, programming language, island in the South Seas)."

"The technology was rooted in brain science and built by academics to model the way in which the mind stored and utilized language."

"Simpli was sold in 2000 to NetZero. Another company that leveraged the Simpli WordNet technology was purchased by Google and they continue to use the technology for search and advertising under the brand Google AdSense.

"In 2001, there was a buyout of the company and it was merged with another company called Search123. Most of the original members joined the new company. The company was later sold in 2004 to ValueClick, which continues to use the technology and search engine to this day" (Wikipedia article on Simpli, accessed 05-10-2009).


2000 – 2005

On the Value of the History of Science in Scientific Research 2000

"Although the history of science and ideas is not my field, I could not  imagine adopting Alfred North Whitehead's opinion that every science, in order to avoid stagnation, must forget its founders. To the contrary, it seems to me that the ignorance displayed by most scientists with regard to the history of their discipline, far from being a source of dynamism, acts as a brake on their creativity. To assign the history of science a role separate from that of research itself therefore seems to me mistaken. Science, like philosophy, needs to look back over its past from time to time, to inquire into its origins and to take a fresh look at models, ideas, and paths of  investigation that had previously been explored but then for one reason or another were abandoned, great though the promise was. Many examples could be cited that confirm the usefulness of consulting history and, conversely, the wasted opportunities to which a neglect of history often leads. Thus we have witnessed in recent years, in the form of the theory of deterministic chaos, the rediscovery of Poincaré's dazzling intuitions and early results concerning nonlinear dynamics; the retum to macroscopic physics, and the study of fluid dynamics and disordered systems, when previously only the infinitely small and the infinitely large had seemed worthy of the attention of physicists; the revival of interest in embryology, ethology, and ecology, casting off the leaden cloak that molecular biology had placed over the study of living things; the renewed appreciation of Keynes's profound insights into the role of individual and collective expectations in market regulation, buried for almost fifty years by the tide of vidgar Keynesianism; and, last but not least, since it is one of the main themes of this book, the rediscovery by cognitive science of the cybernetic model devised by McCulloch and Pitts, known now by the name of 'neoconnectionism' or 'neural networks,' after several decades of domination by the cognitivist model' " (Dupuy, The Mechanization of the Mind: On the Origins of Cognitive Science, trans. M. B. DeBevoise [2000], p. x.)


A Model of Cortical Processing as an Electronic Circuit of 16 "Neurons" that Could Select and Amplify Input Signals Much Like the Cortex of the Mammalian Brain 2000

In 2000 a research team from the Institute of Neuroinformatics ETHZ/UNI Zurich; Bell Laboratories, Murray Hill, NJ; and the Department of Brain and Cognitive Sciences & Department of Electrical Engineering and Computer Science at MIT created an electrical circuit of 16 "neurons" that could select and amplify input signals much like the cortex of the mammalian brain.

"Digital circuits such as the flip-flop use feedback to achieve multi-stability and nonlinearity tor estore signals to logical levels, for example 0 and 1. Analogue feedback circuits are generally designed to operate linearly, so that signals are over a range, and the response is unique. By contrast, the response of cortical circuits to sensory stimulation can be both multistable and graded. We propose that the neocortex combines digital selection of an active set of neurons with analogue response by dynamically varying the postive feedback inherent in its recurrent connections. Strong postive feedback causes differential instabilities that drive the selection of a set of active neurons under the constraints embedded in the synaptic weights. Once selected, the active neurons generate weaker, stable feedback that provides analogue amplication of the input. Here we present our model of cortical processing as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex" (Abstract)

R. Hahnloser, R. Sarpeshkar, M. Mahowald, R.J. Douglas and S. Seung: "Digital selection and analog amplification co-exist in an electronic circuit inspired by neocortex", Nature 405 (2000) 947-951. 
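
The hybrid behaviour the abstract describes, digital-like selection of which neurons become active combined with analogue amplification of their inputs, can be mimicked in software with a tiny rate model: each unit excites itself, inhibits the others, and has a response rectified at zero. The weights and inputs below are arbitrary illustrative values and do not correspond to the actual 16-neuron chip.

    import numpy as np

    def simulate(inputs, self_exc=0.6, inhibition=0.9, dt=0.1, steps=2000):
        """Leaky rate units with self-excitation, mutual inhibition and rectification."""
        n = len(inputs)
        W = self_exc * np.eye(n) - inhibition * (np.ones((n, n)) - np.eye(n))
        x = np.zeros(n)
        for _ in range(steps):
            drive = np.maximum(0.0, W @ x + inputs)   # rectified recurrent + feedforward drive
            x = x + dt * (-x + drive)                 # leaky integration toward the drive
        return x

    print(simulate(np.array([1.0, 0.8, 0.5])))
    # Only the unit with the strongest input remains active (digital-like selection), and its
    # steady-state rate, 1.0 / (1 - 0.6) = 2.5, exceeds its input (analogue amplification).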


The Film: "A. I. Artificial Intelligence" 2001

In 2001 American director, screenwriter and film producer Steven Spielberg directed, co-wrote and produced, through DreamWorks and Amblin Entertainment, the science fiction film A.I. Artificial Intelligence, telling the story of David, an android robot child programmed with the ability to love and to dream. The film explored the hopes and fears involved with efforts to simulate human thought processes, and the social consequences of creating robots that may be better than people at specialized tasks.

The film was a 1970s project of Stanley Kubrick, who eventually turned it over to Spielberg. The project languished in development hell for nearly three decades before technology advanced sufficiently for a successful production. The film required enormously complex puppetry, computer graphics, and make-up prosthetics, which are well-described and explained in the supplementary material in the two-disc special edition of the film issued on DVD in 2002.


"Minority Report": The Movie 2002

Steven Spielberg directed the 2002 science fiction film Minority Report, loosely based on the short story "The Minority Report" by Philip K. Dick.

"It is set primarily in Washington, D.C. and Northern Virginia in the year 2054, where "Precrime", a specialized police department, apprehends criminals based on foreknowledge provided by three psychics called 'precogs'. The cast includes Tom Cruise as Precrime officer John Anderton, Colin Farrell as Department of Justice agent Danny Witwer, Samantha Morton as the senior precog Agatha, and Max von Sydow as Anderton's superior Lamar Burgess. The film has a distinctive look, featuring desaturated colors that make it almost resemble a black-and-white film, yet the blacks and shadows have a high contrast, resembling film noir."

"Some of the technologies depicted in the film were later developed in the real world – for example, multi-touch interfaces are similar to the glove-controlled interface used by Anderton. Conversely, while arguing against the lack of physical contact in touch screen phones, PC Magazine's Sascha Segan argued in February 2009, 'This is one of the reasons why we don't yet have the famous Minority Report information interface. In that movie, Tom Cruise donned special gloves to interact with an awesome PC interface where you literally grab windows and toss them around the screen. But that interface is impractical without the proper feedback—without actually being able to feel where the edges of the windows are' " (Wikipedia article on Minority Report [film] accessed 05-25-2009).

The two-disc special edition of the film issued on DVD in 2002 contained excellent supplementary material on the special digital effects.


Cortical Rewiring and Information Storage October 14, 2004

"Current thinking about long-term memory in the cortex is focused on changes in the strengths of connections between neurons. But ongoing structural plasticity in the adult brain, including synapse formation/elimination and remodelling of axons and dendrites, suggests that memory could also depend on learning-induced changes in the cortical ‘wiring diagram’. Given that the cortex is sparsely connected, wiring plasticity could provide a substantial boost in storage capacity, although at a cost of more elaborate biological machinery and slower learning."

"The human brain consists of 10 to the 11th power neurons connected by 10 to the 15 power synapses. This awesome network has a remarkable capacity to translate experiences into vast numbers of memories, some of which can last an entire lifetime. These long-term memories survive surgical anaesthesia and epileptic episodes, and thus must involve modifications of neural circuits, most likely at synapses" (Chklovskii, Mel & K. Svoboda, "Cortical Rewiring and Information Storage," Nature, Vol. 431, 782-88).


2005 – 2010

"From Gutenberg to the Internet" 2005

In 2005 the author/editor of this database, Jeremy Norman, issued From Gutenberg to the Internet: A Sourcebook on the History of Information Technology.

This printed book was the first anthology of original publications, reflecting the origins of the various technologies that converged to form the Internet. Each reading is introduced by the editor.


"Brain Age: Train Your Brain in Minutes a Day!," the First Commercial NeuroGame May 19, 2005 – April 16, 2006

On May 19, 2005 Nintendo, headquartered in Kyoto, Japan, released Brain Age: Train Your Brain in Minutes a Day for the Nintendo DS dual-screen handheld gaming console in Japan. The game, which was also known as Dr. Kawashima's Brain Training: How Old is Your Brain, was released for the Nintendo DS in the United States on April 16, 2006. Though loosely based on research by Japanese neuroscientist Ryuta Kawashima, Nintendo made no claims for the scientific validation of the game. Brain Age may be considered the earliest commercial NeuroGame.

"Brain Age features a variety of puzzles, including stroop testsmathematical questions, and Sudoku puzzles, all designed to help keep certain parts of the brain active. It was included in the Touch! Generations series of video games, a series which features games for a more casual gaming audience. Brain Age uses the touch screen and microphone for many puzzles. It has received both commercial and critical success, selling 19.00 million copies worldwide (as of March 31, 2013) and has received multiple awards for its quality and innovation. There has been controversy over the game's scientific effectiveness" (Wikipedia article on Brain Age: Train Your Brain in Minutes a Day, accessed 07-31-2014).


Connectomes: Elements of Connections Forming the Human Brain September 30, 2005

Neuroscientists Olaf Sporns of Indiana University, Giulio Tononi of the University of Wisconsin, and Rolf Kötter of Heinrich Heine University, Düsseldorf, Germany, published "The Human Connectome: A Structural Description of the Human Brain," PLoS Computational Biology 1 (4). This paper and the PhD thesis of Patric Hagmann from the Université de Lausanne, From diffusion MRI to brain connectomics, coined the term connectome.

In their 2005 paper Sporns et al. wrote:

"To understand the functioning of a network, one must know its elements and their interconnections. The purpose of this article is to discuss research strategies aimed at a comprehensive structural description of the network of elements and connections forming the human brain. We propose to call this dataset the human 'connectome,' and we argue that it is fundamentally important in cognitive neuroscience and neuropsychology. The connectome will significantly increase our understanding of how functional brain states emerge from their underlying structural substrate, and will provide new mechanistic insights into how brain function is affected if this structural substrate is disrupted."

In his 2005 Ph.D. thesis, From diffusion MRI to brain connectomics, Hagmann wrote:

"It is clear that, like the genome, which is much more than just a juxtaposition of genes, the set of all neuronal connections in the brain is much more than the sum of their individual components. The genome is an entity it-self, as it is from the subtle gene interaction that [life] emerges. In a similar manner, one could consider the brain connectome, set of all neuronal connections, as one single entity, thus emphasizing the fact that the huge brain neuronal communication capacity and computational power critically relies on this subtle and incredibly complex connectivity architecture" (Wikipedia article on Connectome, accessed 12-28-2010).


A More Efficient Way to Teach Individual Layers of Neurons for Deep Learning 2006

In the mid-1980s, British-born computer scientist and psychologist Geoffrey Hinton and others helped revive research interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required major human intervention: programmers had to label data before feeding it to the network, and complex speech or image recognition required more computer power than was available.

During the first decade of the 21st century Hinton and colleagues at the University of Toronto made some fundamental conceptual breakthroughs that have led to advances in unsupervised learning procedures for neural networks with rich sensory input.

"In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects" (Robert D. Hof, "Deep Learning," MIY Technology Review, April 23, 2013, accessed 11-10-2014).

Hinton, G. E.; Osindero, S.; Teh, Y., "A fast learning algorithm for deep belief nets", Neural Computation 18 #7 (2006) 1527–1554.
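
The layer-by-layer recipe described above can be sketched in miniature: in the 2006 paper each layer is a restricted Boltzmann machine trained with one-step contrastive divergence on the activity of the layer below, and the trained layers are then stacked. The code below is a bare-bones NumPy illustration with made-up sizes and toy data, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def train_rbm(data, n_hidden, epochs=10, lr=0.05):
        """Train one RBM layer with one-step contrastive divergence (CD-1)."""
        n_visible = data.shape[1]
        W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
        for _ in range(epochs):
            for v0 in data:
                # up: sample hidden features given the visible (input) vector
                p_h0 = sigmoid(v0 @ W + b_h)
                h0 = (rng.random(n_hidden) < p_h0).astype(float)
                # down-up: reconstruct the visibles, then recompute hidden probabilities
                p_v1 = sigmoid(h0 @ W.T + b_v)
                p_h1 = sigmoid(p_v1 @ W + b_h)
                # CD-1 update: move the model statistics toward the data statistics
                W   += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
                b_v += lr * (v0 - p_v1)
                b_h += lr * (p_h0 - p_h1)
        return W, b_h

    def greedy_pretrain(data, layer_sizes):
        """Stack RBMs: each layer is trained on the activities of the layer below."""
        layers, x = [], data
        for n_hidden in layer_sizes:
            W, b_h = train_rbm(x, n_hidden)
            layers.append((W, b_h))
            x = sigmoid(x @ W + b_h)        # feed features upward to train the next layer
        return layers

    # toy binary data and made-up layer sizes, purely for illustration
    data = (rng.random((200, 16)) > 0.5).astype(float)
    stack = greedy_pretrain(data, layer_sizes=[32, 16, 8])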


Brainbow: A Colorful Technique to Visualize Brain Circuitry November 2007

[Figure: three brainbows of mouse neurons from Lichtman and Sanes, 2008. A. A motor nerve innervating ear muscle. B. An axon tract in the brain stem. C. The hippocampal dentate gyrus.]

In November 2007 Jeff W. Lichtman and Joshua R. Sanes, both professors of Molecular & Cellular Biology in the Department of Neurobiology at Harvard Medical School, and colleagues, published "Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system," Nature 450 (7166): 56–62. doi:10.1038/nature06293. This described the visualization process they called "Brainbow."

"Detailed analysis of neuronal network architecture requires the development of new methods. Here we present strategies to visualize synaptic circuits by genetically labelling neurons with multiple, distinct colours. In Brainbow transgenes, Cre/lox recombination is used to create a stochastic choice of expression between three or more fluorescent proteins (XFPs). Integration of tandem Brainbow copies in transgenic mice yielded combinatorial XFP expression, and thus many colours, thereby providing a way to distinguish adjacent neurons and visualize other cellular interactions. As a demonstration, we reconstructed hundreds of neighbouring axons and multiple synaptic contacts in one small volume of a cerebellar lobe exhibiting approximately 90 colours. The expression in some lines also allowed us to map glial territories and follow glial cells and neurons over time in vivo. The ability of the Brainbow system to label uniquely many individual cells within a population may facilitate the analysis of neuronal circuitry on a large scale." (From the Nature abstract).


The SyNAPSE Neuromorphic Machine Technology Project Begins 2008

Traditional stored-program von Neumann computers are constrained by physical limits and require humans to program how computers interact with their environments. In contrast, the human brain processes information autonomously, and learns from its environment. Neuromorphic electronic machines—computers that function more like a brain—may enable autonomous computational solutions for real-world problems with many complex variables. In 2008 DARPA awarded the first funding to HRL Laboratories, Hewlett-Packard and IBM Research for SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics)—an attempt to build a new kind of cognitive computer with form, function and architecture similar to the mammalian brain. The program sought to create electronic systems inspired by the human brain that could understand, adapt and respond to information in ways fundamentally different from traditional computers.

"The initial phase of the SyNAPSE program developed nanometer scale electronic synaptic components capable of adapting the connection strength between two neurons in a manner analogous to that seen in biological systems (Hebbian learning), and simulated the utility of these synaptic components in core microcircuits that support the overall system architecture" (Wikipedia article on SyNAPSE, accessed 10-20-2013).


"Computers vs. Brains" April 1, 2009

According to the article referenced below, the entire archived content of the Internet occupied three petabytes (3 x 1000 terabytes) in April 2009. 

It is thought that one human brain may store roughly one petabyte. Though there may be some similarity in storage capacity between the quantity of information on the Internet and information stored in the human brain, quantity is the main point of similarity, since the information is stored and processed in totally different ways by people and computers.

Sandra Aamodt and Sam Wang, "Guest Column: Computers vs. Brains," New York Times Blogs, 03-31-2009.


The Human Connectome Project July 2009

The Human Connectome Project, a five-year project sponsored by sixteen components of the National Institutes of Health (NIH) split between two consortia of research institutions, was launched as the first of three Grand Challenges of the National Institutes of Health's Blueprint for Neuroscience Research.

The project was described as "an ambitious effort to map the neural pathways that underlie human brain function. The overarching purpose of the Project is to acquire and share data about the structural and functional connectivity of the human brain. It will greatly advance the capabilities for imaging and analyzing brain connections, resulting in improved sensitivity, resolution, and utility, thereby accelerating progress in the emerging field of human connectomics. Altogether, the Human Connectome Project will lead to major advances in our understanding of what makes us uniquely human and will set the stage for future studies of abnormal brain circuits in many neurological and psychiatric disorders" (http://www.humanconnectome.org/consortia/, accessed 12-28-2010).


2010 – 2012

The First Brain-Computer Interface Product Offered for Sale March 2 – March 6, 2010

At the CeBIT exhibition in Hannover, Germany, March 2-6, 2010, Christoph Guger of Guger Technologies (g.tec) of Graz, Austria, offered intendiX, "the world's first personal Brain Computer Interface speller," for sale at the retail price of €9000.

"The world’s first patient-ready and commercially available brain computer interface just arrived at CeBIT 2010. The Intendix from Guger Technologies (g*tec) is a system that uses an EEG cap to measure brain activity in order to let you type with your thoughts. Meant to work with those with locked-in syndrome, or other disabilities, Intendix is simple enough to use after just 10 minutes of training. You simply focus on a grid of letters as they flash. When your desired letter lights up, brain activity spikes and Intendix types it. As users master the system, a few will be able to type as quickly as 1 letter a second. Besides typing, it can also trigger alarms, convert text to speech, print, copy, or email" (http://singularityhub.com/2010/03/07/intendix-the-brain-computer-interface-goes-commercial-video/, accessed 03-16-2010).

♦ In December 2013 a video of intendiX in operation, entitled Select words by thinking—world record, was available on YouTube.
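
The selection principle described in the quotation, namely that the letter the user is attending to evokes a larger brain response when it flashes, can be caricatured in a few lines of Python: average the response following each row and column flash and choose the grid cell where the strongest row and strongest column intersect. The EEG responses below are simulated random numbers and the constants are invented; this is a schematic illustration of the idea, not g.tec's signal processing.

    import numpy as np

    rng = np.random.default_rng(1)
    GRID = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                     list("STUVWX"), list("YZ1234"), list("56789_")])
    target_row, target_col = 2, 3          # the (simulated) user is attending to "P"

    def flash_response(is_target, n_trials=15):
        """Simulated average EEG amplitude after a flash; attended flashes evoke a larger response."""
        base = rng.normal(0.0, 1.0, n_trials)
        return (base + (1.5 if is_target else 0.0)).mean()

    # average response for each row flash and each column flash
    row_scores = np.array([flash_response(r == target_row) for r in range(6)])
    col_scores = np.array([flash_response(c == target_col) for c in range(6)])

    # the selected letter is where the strongest row and strongest column intersect
    print("selected:", GRID[row_scores.argmax(), col_scores.argmax()])   # most likely "P"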


Can an Artificial Intelligence Get into the University of Tokyo? 2011

In 2011 the National Institute of Informatics in Japan initiated the Todai Robot Project with the goal of achieving a high score on the National Center Test for University Admissions by 2016, and of passing the University of Tokyo entrance exam in 2021.

"INTERVIEW WITH Yusuke Miyao, June 2013

Associate Professor, Digital Content and Media Sciences Research Division, NII; Associate Professor, Department of Informatics; "Todai Robot Project" Sub-Project Director 

Can a Robot Get Into the University of Tokyo? 
The Challenges Faced by the Todai Robot Project

Tainaka Could you tell us the objectives of the project?
 
Miyao We are researching the process of thinking by developing a computer program that will be able to pass the University of Tokyo entrance exam. The program will need to integrate multiple artificial intelligence technologies, such as language understanding, in order to develop all of the processes, from reading the question to determining the correct answer. While the process of thinking is first-nature to people, many of the processes involved in mental computation are still mysteries, so the project will be taking on challenges that previous artificial intelligence research has yet to touch.
 
Tainaka You're not going to make a physical robot?
 
Miyao No. What we'll be making is a robot brain. It won't be an actual robot that walks through the gate, goes to the testing site, picks up a pencil, and answers the questions.
 
Tainaka Why was passing the university entrance exam selected as the project's goal?
 
Miyao The key point is that what's difficult for people is different than what's difficult for computers. Computers excel at calculation, and can beat professional chess and shogi players at their games. IBM's "Watson" question-answering system*1 became a quiz show world champion. For a person, beating a professional shogi player is far harder than passing the University of Tokyo entrance exam, but for a computer, shogi is easier. What makes the University of Tokyo entrance exam harder is that the rules are less clearly defined than they are for shogi or a quiz show. From the perspective of using knowledge and data to answer questions, the university entrance exam requires a more human-like approach to information processing. However, it does not rely as much on common sense as an elementary school exam or everyday life, so it's a reasonable target for the next step in artificial intelligence research.
Tainaka Elementary school exam questions are more difficult?
 
Miyao For example, consider the sentence "Assuming there is a factory that can build 3 cars per day, how many days would it take to build 12 cars?" A computer would not be able to create a formula that expresses this in the same way a person could, near-instantaneously. It wouldn't understand the concepts of "car" or "factory", so it wouldn't be able to understand the relationship between them. Compared to that, calculating integrals is far easier.
 
Tainaka The National Center Test for University Admissions is multiple choice, and the second-stage exam is a short answer exam, right?
 
Miyao Of course, the center test is easier, and it has clear right and wrong answers, making it easier to grade. For the second-stage exam, examinees must give written answers, so during the latter half of the project, we will be shifting our focus on creating answers which are clear and comprehensible to human readers.
 
Tainaka Does the difficulty vary by test subject?
 
Miyao What varies more than the difficulty itself are the issues that have to be tackled by artificial intelligence research. The social studies questions, which test knowledge, rely on memory, so one might assume they would be easy for computers, but it's actually difficult for a computer to determine if the text of a problem corresponds to knowledge the computer possesses. What makes that identification possible is "Textual Entailment Recognition"*2, an area in which we are making progress, but still face many challenges. Ethics questions, on the other hand, frequently cover common sense, and require the reader to understand the Japanese language, so they are especially difficult for computers, which lack this common sense. Personally, I had a hard time with questions requiring memorization, so I picked ethics. (laughs)
 
Tainaka So ethics and language questions are difficult because they involve common sense.
 
Miyao Similar challenges are encountered with English, other than the common sense issue. For example, English questions include fill-in-the-blank questions, but it's difficult to pick natural conversational answers without actual life experience. Reading comprehension questions test logical and rational thought, but it's not really clear what this "logical and rational thought" consists of. The question, then, is how to teach "logical and rational thought" to computers. Also, for any subject, questions sometimes include photos, graphs, and comic strips. Humans understand them unconsciously, but it's extremely difficult to have computers understand them.
 
Tainaka Aren't mathematical formula questions easy to answer?
 
Miyao If they were presented as pure formulas, computers would excel at them, but the reality is not so simple. The questions themselves are written in natural language, making it difficult to map to the non-linguistic world of formulas. The same difficulty can be found with numerical fields, like physics or chemistry, or in fields which are difficult to convert into computer-interpretable symbols, such as the emotional and situational experience of reading a novel. That's what makes elementary school exams difficult.
 
Tainaka There are a mountain of problems.
 
Miyao There are many problems that nobody has yet taken on. That's what makes it challenging, and it's very exciting working with people from different fields. Looking at the practical results of this project, our discoveries and developments will be adapted for use in general purpose systems, such as meaning-based searching and conversation systems, real-world robot interfaces, and the like. The Todai Robot Project covers a diverse range of research fields, and NII plans to build an infrastructure, organizing data and creating platforms, and bring in researchers from both inside and outside Japan to achieve our objectives. In the future we will build an even more open platform, creating opportunities for members of the general public to participate as well, and I hope anyone motivated will take part" (http://21robot.org/introduce/NII-Interview/, accessed 12-30-2013).

Worldwide Technological Capacity to Store, Communicate, and Compute Information February 10, 2011

On February 10, 2011 social scientist Martin Hilbert of the University of Southern California (USC) and information scientist Priscilla López of the Open University of Catalonia published "The World's Technological Capacity to Store, Communicate, and Compute Information." The report appeared first in Science Express; on April 1, 2011 it was published in Science, 332, 60-64. This was "the first time-series study to quantify humankind's ability to handle information." Notably, the authors did not attempt to address the information processing done by human brains—possibly impossible to quantify at the present time, if ever. 

"We estimated the world’s technological capacity to store, communicate, and compute information, tracking 60 analog and digital technologies during the period from 1986 to 2007. In 2007, humankind was able to store 2.9 × 10 20 optimally compressed bytes, communicate almost 2 × 10 21 bytes, and carry out 6.4 × 10 18 instructions per second on general-purpose computers. General-purpose computing capacity grew at an annual rate of 58%. The world’s capacity for bidirectional telecommunication grew at 28% per year, closely followed by the increase in globally stored information (23%). Humankind’s capacity for unidirectional information diffusion through broadcasting channels has experienced comparatively modest annual growth (6%). Telecommunication has been dominated by digital technologies since 1990 (99.9% in digital format in 2007), and the majority of our technological memory has been in digital format since the early 2000s (94% digital in 2007)" (The authors' summary).

"To put our findings in perspective, the 6.4 × 10 18 instructions per second that humankind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second (10 17 ). The 2.4 × 10 21 bits stored by humanity in all of its technological devices in 2007 is approaching an order of magnitude of the roughly 10 23 bits stored in the DNA of a human adult, but it is still minuscule as compared with the 10 90 bits stored in the observable universe. However, in contrast to natural information processing, the world’s technological information processing capacities are quickly growing at clearly exponential rates" (Conclusion of the paper).

"Looking at both digital memory and analog devices, the researchers calculate that humankind is able to store at least 295 exabytes of information. (Yes, that's a number with 20 zeroes in it.)

"Put another way, if a single star is a bit of information, that's a galaxy of information for every person in the world. That's 315 times the number of grains of sand in the world. But it's still less than one percent of the information that is stored in all the DNA molecules of a human being. 2002 could be considered the beginning of the digital age, the first year worldwide digital storage capacity overtook total analog capacity. As of 2007, almost 94 percent of our memory is in digital form.

"In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS. That's equivalent to every person in the world reading 174 newspapers every day. On two-way communications technology, such as cell phones, humankind shared 65 exabytes of information through telecommunications in 2007, the equivalent of every person in the world communicating the contents of six newspapers every day.

"In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, in the same general order of magnitude as the number of nerve impulses executed by a single human brain. Doing these instructions by hand would take 2,200 times the period since the Big Bang.

"From 1986 to 2007, the period of time examined in the study, worldwide computing capacity grew 58 percent a year, ten times faster than the United States' GDP. Telecommunications grew 28 percent annually, and storage capacity grew 23 percent a year" (http://www.sciencedaily.com/releases/2011/02/110210141219.htm)


IBM's Watson Question Answering System Defeats Humans at Jeopardy! February 14 – February 16, 2011

In a match broadcast February 14-16, 2011, IBM's Watson question-answering supercomputer, developed at IBM's T J Watson Research Center, Yorktown Heights, New York, and running DeepQA software, defeated two of the most successful human Jeopardy! players, Ken Jennings and Brad Rutter. Watson's hardware consisted of 90 IBM Power 750 Express servers. Each server utilized a 3.5 GHz POWER7 eight-core processor, with four threads per core. The system operated with 16 terabytes of RAM.

The success of the machine underlined very significant advances in deep analytics and the ability of a machine to process unstructured data, and especially to interpret and speak natural language.

"Watson is an effort by I.B.M. researchers to advance a set of techniques used to process human language. It provides striking evidence that computing systems will no longer be limited to responding to simple commands. Machines will increasingly be able to pick apart jargon, nuance and even riddles. In attacking the problem of the ambiguity of human language, computer science is now closing in on what researchers refer to as the “Paris Hilton problem” — the ability, for example, to determine whether a query is being made by someone who is trying to reserve a hotel in France, or simply to pass time surfing the Internet.  

"If, as many predict, Watson defeats its human opponents on Wednesday, much will be made of the philosophical consequences of the machine’s achievement. Moreover, the I.B.M. demonstration also foretells profound sociological and economic changes.  

"Traditionally, economists have argued that while new forms of automation may displace jobs in the short run, over longer periods of time economic growth and job creation have continued to outpace any job-killing technologies. For example, over the past century and a half the shift from being a largely agrarian society to one in which less than 1 percent of the United States labor force is in agriculture is frequently cited as evidence of the economy’s ability to reinvent itself.  

"That, however, was before machines began to 'understand' human language. Rapid progress in natural language processing is beginning to lead to a new wave of automation that promises to transform areas of the economy that have until now been untouched by technological change.  

" 'As designers of tools and products and technologies we should think more about these issues,' said Pattie Maes, a computer scientist at the M.I.T. Media Lab. Not only do designers face ethical issues, she argues, but increasingly as skills that were once exclusively human are simulated by machines, their designers are faced with the challenge of rethinking what it means to be human.  

"I.B.M.’s executives have said they intend to commercialize Watson to provide a new class of question-answering systems in business, education and medicine. The repercussions of such technology are unknown, but it is possible, for example, to envision systems that replace not only human experts, but hundreds of thousands of well-paying jobs throughout the economy and around the globe. Virtually any job that now involves answering questions and conducting commercial transactions by telephone will soon be at risk. It is only necessary to consider how quickly A.T.M.’s displaced human bank tellers to have an idea of what could happen" (John Markoff,"A Fight to Win the Future: Computers vs. Humans," http://www.nytimes.com/2011/02/15/science/15essay.html?hp, accessed 02-17-2011).

♦ As a result of this technological triumph, IBM took the unusual step of building a colorful website concerning all aspects of Watson, including numerous embedded videos.

♦ A few of many articles on the match published during or immediately after it included:

John Markoff, "Computer Wins on 'Jeopardy!': Trivial, It's Not," http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?hpw

Samara Lynn, "Dissecting IBM Watson's Jeopardy! Game," PC Magazine, http://www.pcmag.com/article2/0,2817,2380351,00.asp

John C. Dvorak, "Watson is Creaming the Humans. I Cry Foul," PC Magazine, http://www.pcmag.com/article2/0,2817,2380451,00.asp

Henry Lieberman published a three-part article in MIT Technology Review, "A Worthwhile Contest for Artificial Intelligence" http://www.technologyreview.com/blog/guest/26391/?nlid=4132

♦ An article which discussed the weaknesses of Watson versus a human in Jeopardy! was Greg Lindsay, "How I Beat IBM's Watson at Jeopardy! (3 Times)" http://www.fastcompany.com/1726969/how-i-beat-ibms-watson-at-jeopardy-3-times

♦ An opinion column emphasizing the limitations of Watson compared to the human brain was Stanley Fish, "What Did Watson the Computer Do?" http://opinionator.blogs.nytimes.com/2011/02/21/what-did-watson-the-computer-do/

♦ A critical response to Stanley Fish's column by Sean Dorrance Kelly and Hubert Dreyfus, author of What Computers Can't Do, was published in The New York Times at: http://opinionator.blogs.nytimes.com/2011/02/28/watson-still-cant-think/?nl=opinion&emc=tya1


How Search Engines Have Become a Primary Form of External or Transactive Memory July 14, 2011

Betsy Sparrow of Columbia University, Jenny Liu, and Daniel M. Wegner of Harvard University published "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips," Science 333, no. 6043 (5 August 2011): 776-778, published online 14 July 2011, DOI: 10.1126/science.1207745.

Abstract: 

"The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves."

First two paragraphs (footnotes removed):

"In a development that would have seemed extraordinary just over a decade ago, many of us have constant access to information. If we need to find out the score of a ball game, learn how to perform a complicated statistical test, or simply remember the name of the actress in the classic movie we are viewing, we need only turn to our laptops, tablets, or smartphones and we can find the answers immediately. It has become so commonplace to look up the answer to any question the moment it occurs that it can feel like going through withdrawal when we can’t find out something immediately. We are seldom offline unless by choice, and it is hard to remember how we found information before the Internet became a ubiquitous presence in our lives. The Internet, with its search engines such as Google and databases such as IMDB and the information stored there, has become an external memory source that we can access at any time.

"Storing information externally is nothing particularly novel, even before the advent of computers. In any long-term relationship, a team work environment, or other ongoing group, people typically develop a group or transactive memory (1), a combination of memory stores held directly by individuals and the memory stores they can access because they know someone who knows that information. Like linked computers that can address each other’s memories, people in dyads or groups form transactive memory systems (2, 3). The present research explores whether having online access to search engines, databases, and the like, has become a primary transactive memory source in itself. We investigate whether the Internet has become an external memory system that is primed by the need to acquire information. If asked the question whether there are any countries with only one color in their flag, for example, do we think about flags or immediately think to go online to find out? Our research then tested whether, once information has been accessed, our internal encoding is increased for where the information is to be found rather than for the information itself."

An article by Alexander Bloom published in Harvard Magazine, November 2011 had this to say regarding the research:

"Wegner, the senior author of the study, believes the new findings show that the Internet has become part of a transactive memory source, a method by which our brains compartmentalize information. First hypothesized by Wegner in 1985, transactive memory exists in many forms, as when a husband relies on his wife to remember a relative’s birthday. '[It is] this whole network of memory where you don’t have to remember everything in the world yourself,' he says. 'You just have to remember who knows it.' Now computers and technology as well are becoming virtual extensions of our memory. The idea validates habits already forming in our daily lives. Cell phones have become the primary location for phone numbers. GPS devices in cars remove the need to memorize directions. Wegner points out that we never have to stretch our memories too far to remember the name of an obscure movie actor or the capital of Kyrgyzstan—we just type our questions into Google. 'We become part of the Internet in a way,' he says. 'We become part of the system and we end up trusting it.' "(http://harvardmagazine.com/2011/11/how-the-web-affects-memory, accessed 12-11-2011).

View Map + Bookmark Entry

The First Neurosynaptic Chips August 2011

In August 2011, as part of the SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project, IBM researchers led by Dharmendra S. Modha, manager and lead researcher of the Cognitive Computing Group at IBM Almaden Research Center, demonstrated two neurosynaptic core chips that moved beyond von Neumann architecture and programming toward ultra-low-power, super-dense, brain-inspired cognitive computing. These silicon neurosynaptic chips would be the building blocks for computing systems that emulate the brain's computing efficiency, size and power usage.


Toward Cognitive Computing Systems August 18, 2011

On August 18, 2011 "IBM researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition. The technology could yield many orders of magnitude less power consumption and space than used in today’s computers. 

"In a sharp departure from traditional concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. Its first two prototype chips have already been fabricated and are currently undergoing testing.  

"Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brains structural and synaptic plasticity.  

"To do this, IBM is combining principles from nanoscience, neuroscience and supercomputing as part of a multi-year cognitive computing initiative. The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

"The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment – all while rivaling the brain’s compact size and low power usage. The IBM team has already successfully completed Phases 0 and 1.  

" 'This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century,' said Dharmendra Modha, project leader for IBM Research. 'Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture. These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government.' " (http://www-03.ibm.com/press/us/en/pressrelease/35251.wss, accessed 08-21-2011).


A Silicon Chip that Mimics How the Brain's Synapses Change in Response to New Information November 2011

In November 2011 a group of MIT researchers created the first computer chip that mimicked how the brain's synapses adapt in response to new information, the biological phenomenon known as plasticity. The chip emulates the analog, ion-based communication that takes place at a synapse between two neurons. With about 400 transistors, it could simulate the activity of a single brain synapse, a connection between two neurons that allows information to flow from one to the other.

"There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. A synapse is the gap between two neurons (known as the presynaptic and postsynaptic neurons). The presynaptic neuron releases neurotransmitters, such as glutamate and GABA, which bind to receptors on the postsynaptic cell membrane, activating ion channels. Opening and closing those channels changes the cell’s electrical potential. If the potential changes dramatically enough, the cell fires an electrical impulse called an action potential.

"All of this synaptic activity depends on the ion channels, which control the flow of charged atoms such as sodium, potassium and calcium. Those channels are also key to two processes known as long-term potentiation (LTP) and long-term depression (LTD), which strengthen and weaken synapses, respectively. "

"The MIT researchers designed their computer chip so that the transistors could mimic the activity of different ion channels. While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell. 

“ 'We can tweak the parameters of the circuit to match specific ion channels,' Poon says. 'We now have a way to capture each and every ionic process that’s going on in a neuron.'

"Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says" (http://www.mit.edu/newsoffice/2011/brain-chip-1115.html, accessed 01-01-2014).

Rachmuth, G., Shouval, H., Bear, M., Poon, C. "A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity," Proceedings of the National Academy of Sciences 108, no. 49, December 6, 2011, E1266-E1274, doi: 10.1073/pnas.1106161108
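
The long-term potentiation and depression that the chip reproduces through its ion-channel dynamics are often summarized at a higher level as spike-timing-dependent plasticity: a synapse strengthens when the presynaptic spike precedes the postsynaptic spike and weakens when the order is reversed. The snippet below is only that phenomenological shortcut, with arbitrary constants, shown to make the LTP/LTD asymmetry concrete; the MIT chip itself models the underlying biophysics rather than this rule.

    import math

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Phenomenological STDP rule: potentiate if pre fires before post, depress otherwise.
        Times are in milliseconds; the constants are arbitrary illustrative values."""
        dt = t_post - t_pre
        if dt > 0:                                  # pre before post -> LTP
            w += a_plus * math.exp(-dt / tau)
        elif dt < 0:                                # post before pre -> LTD
            w -= a_minus * math.exp(dt / tau)
        return max(0.0, w)                          # keep the weight non-negative

    w = 0.5
    print(stdp_update(w, t_pre=10.0, t_post=15.0))  # slightly strengthened (LTP)
    print(stdp_update(w, t_pre=15.0, t_post=10.0))  # slightly weakened (LTD)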


2012 – 2016

The First Functioning Brain-Computer Interface for Quadriplegics May 16, 2012

On May 16, 2012 Leigh R. Hochberg, Daniel Bacher and team published "Reach and grasp by people with tetraplegia using a neurally controlled robotic arm," Nature 485 (17 May 2012) 372-75. This was the first published demonstration that people with tetraplegia could use tiny brain implants, which transmitted neural signals to a computer, to control a robotic arm well enough to perform reach and grasp movements.

"Paralysis following spinal cord injury, brainstem stroke, amyotrophic lateral sclerosis and other disorders can disconnect the brain from the body, eliminating the ability to perform volitional movements. A neural interface system could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices Able-bodied monkeys have used a neural interface system to control a robotic arm, but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here we demonstrate the ability of two people with long-standing tetraplegia to use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm and hand over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor 5 years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals" (http://www.nature.com/nature/journal/v485/n7398/full/nature11076.html#/ref

"The researchers still have many hurdles to clear before this technology becomes practical in the real world, experts said. The equipment used in the study is bulky, and the movements made with the robot are still crude. And the silicon implants generally break down over time (though the woman in the study has had hers for more than five years, and it is still effective).  

"No one has yet demonstrated an effective wireless system, nor perfected one that could bypass the robotics altogether — transmitting brain signals directly to muscles — in a way that allows for complex movements. 

"In an editorial accompanying the study, Andrew Jackson of the Institute of Neuroscience at Newcastle University wrote that economics might be the largest obstacle: 'It remains to be seen whether a neural-interface system that will be of practical use to patients with diverse clinical needs can become a commercially viable proposition' ' (http://www.nytimes.com/2012/05/17/science/bodies-inert-they-moved-a-robot-with-their-minds.html?hpw, accessed 05-17-2012)


A Large Scale Neural Network Appears to Emulate Activity in the Visual Cortex June 26, 2012

At the International Conference on Machine Learning held in Edinburgh, Scotland from June 26–July 1, 2012 researchers at Google and Stanford University reported that they had developed software, modeled on the way biological neurons interact with each other, that taught itself to distinguish objects in YouTube videos. Although it was most effective at recognizing cats and human faces, the system obtained 15.8% accuracy in recognizing 22,000 object categories from ImageNet, or 3,200 items in all, a 70 percent improvement over the previous best-performing software. To do so the scientists connected 16,000 computer processors to create a neural network for machine learning with more than one billion connections. Then they turned the neural network loose on the Internet to learn on its own.
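
The technique behind the result was unsupervised feature learning at very large scale: the system was reported as a deep, sparse autoencoder with local receptive fields, trained to reconstruct unlabeled video frames. The toy below trains a single tiny autoencoder layer by gradient descent on random data, only to make the idea of learning features without labels concrete; it shares nothing of the scale or architecture of the actual system.

    import numpy as np

    rng = np.random.default_rng(3)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # toy data: 1,000 random 8x8 "image patches", flattened to 64-vectors
    X = rng.random((1000, 64))

    n_hidden, lr = 16, 0.1
    W1 = 0.1 * rng.standard_normal((64, n_hidden))   # encoder weights
    W2 = 0.1 * rng.standard_normal((n_hidden, 64))   # decoder weights

    for epoch in range(50):
        H = sigmoid(X @ W1)               # hidden "features" learned without any labels
        R = H @ W2                        # reconstruction of the input from the features
        err = R - X
        # gradient-descent step on the squared reconstruction error (constant factors folded into lr)
        grad_W2 = H.T @ err / len(X)
        grad_H = err @ W2.T * H * (1 - H)
        grad_W1 = X.T @ grad_H / len(X)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1

    print("reconstruction error:", float(np.mean((sigmoid(X @ W1) @ W2 - X) ** 2)))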

Having been presented with the experimental results before the meeting, John Markoff published an article in The New York Times on June 25, 2012 entitled "How Many Computers to Identify a Cat? 16,000," from which I quote selections:

"Presented with 10 million digital images selected from YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats....

"The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

"Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech....

"The [YouTube] videos were selected randomly and that in itself is an interesting comment on what interests humans in the Internet age. However, the research is also striking. That is because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists that suggest individual neurons are trained inside the brain to detect significant objects.

"Currently much commercial machine vision technology is done by having humans 'supervise' the learning process by labeling specific features. In the Google research, the machine was given no help in identifying features.

“ 'The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,' Dr. Ng said.

“ 'We never told it during the training, ‘This is a cat,’ ' said Dr. Dean, who originally helped Google design the software that lets it easily break programs into many tasks that can be computed simultaneously. 'It basically invented the concept of a cat. We probably have other ones that are side views of cats.'

"The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.

"Neuroscientists have discussed the possibility of what they call the 'grandmother neuron,' specialized cells in the brain that fire when they are exposed repeatedly or “trained” to recognize a particular face of an individual.

“ 'You learn to identify a friend through repetition,' said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.

"While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.

“ 'A loose and frankly awful analogy is that our numerical parameters correspond to synapses,' said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.

“ 'It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,' the researchers wrote.

"Despite being dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.

“ 'The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,' said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”

"Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company’s search business and related services. Potential applications include improvements to image search, speech recognition and machine language translation.

"Despite their success, the Google researchers remained cautious about whether they had hit upon the holy grail of machines that can teach themselves.

“ 'It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,' said Dr. Ng.

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng, "Building High-level Features Using Large Scale Unsupervised Learning," arXiv:1112.6209 [cs.LG], 12 July 2012.
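The technique the article describes is unsupervised feature learning: the network is never told what a cat is; it simply learns features that let it reconstruct its unlabeled input. The Le et al. system did this with a nine-layer network built from sparse autoencoders and a billion connections. The sketch below is a deliberately tiny stand-in, assuming nothing from the paper beyond the general idea: a single-layer autoencoder trained by plain gradient descent on random synthetic "patches."

```python
# Toy single-layer autoencoder: learns features from unlabeled data by trying
# to reconstruct its input. A drastic simplification of the sparse, 9-layer,
# billion-connection network in Le et al.; for illustration only.
import numpy as np

rng = np.random.default_rng(1)

n_samples, n_pixels, n_hidden = 500, 64, 16
X = rng.random((n_samples, n_pixels))        # stand-in for unlabeled image patches

W_enc = 0.1 * rng.normal(size=(n_pixels, n_hidden))
W_dec = 0.1 * rng.normal(size=(n_hidden, n_pixels))
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print("initial reconstruction MSE:", float(np.mean((sigmoid(X @ W_enc) @ W_dec - X) ** 2)))

for epoch in range(200):
    H = sigmoid(X @ W_enc)                   # hidden "features"
    X_hat = H @ W_dec                        # reconstruction of the input
    err = X_hat - X                          # reconstruction error
    # Backpropagate the squared-error loss through both weight matrices.
    grad_dec = H.T @ err / n_samples
    grad_hidden = (err @ W_dec.T) * H * (1 - H)
    grad_enc = X.T @ grad_hidden / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction MSE:  ", float(np.mean((sigmoid(X @ W_enc) @ W_dec - X) ** 2)))
```

The error falls without any labels ever being shown to the model, which is the point the researchers quoted above are making at vastly larger scale.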

Memcomputing Outlined November 19, 2012

On November 19, 2012 physicists Massimiliano Di Ventra at the University of California, San Diego and Yuriy Pershin at the University of South Carolina, Columbia, outlined an emerging form of computation called memcomputing based on the discovery of nanoscale electronic components that simultaneously store and process information, much like the human brain.

At the heart of this new form of computing are nanodevices called the memristor, memcapacitor and meminductor, fundamental electronic components that store information while respectively operating as resistors, capacitors and inductors. These devices were predicted theoretically in the 1970s but first manufactured in 2008. Because these devices consume very little energy, computers using them could, for the first time, approach the energy efficiency of natural systems such as the human brain.

"In present day technology, storing and processing of information occur on physically distinct regions of space. Not only does this result in space limitations; it also translates into unwanted delays in retrieving and processing of relevant information. There is, however, a class of two-terminal passive circuit elements with memory, memristive, memcapacitive and meminductive systems – collectively called memelements – that perform both information processing and storing of the initial, intermediate and final computational data on the same physical platform. Importantly, the states of these memelements adjust to input signals and provide analog capabilities unavailable in standard circuit elements, resulting in adaptive circuitry, and providing analog massively-parallel computation. All these features are tantalizingly similar to those encountered in the biological realm, thus offering new opportunities for biologically-inspired computation. Of particular importance is the fact that these memelements emerge naturally in nanoscale systems, and are therefore a consequence and a natural by-product of the continued miniaturization of electronic devices. . . ." (Di Ventra & Pershin, "Memcomputing: a computing paradigm to store and process information on the same physical platform," http://arxiv.org/pdf/1211.4487v1.pdf, accessed 11-22-2012). 

"The Human Brain Project" is Launched, with the Goal of Creating a Supercomputer-Based Simulation of the Human Brain January 28, 2013

On January 28, 2013 The European Commission announced funding for The Human Brain Project.

From the press release:

"The goal of the Human Brain Project is to pull together all our existing knowledge about the human brain and to reconstruct the brain, piece by piece, in supercomputer-based models and simulations. The models offer the prospect of a new understanding of the human brain and its diseases and of completely new computing and robotic technologies. On January 28, the European Commission supported this vision, announcing that it has selected the HBP as one of two projects to be funded through the new FET Flagship Program.

"Federating more than 80 European and international research institutions, the Human Brain Project is planned to last ten years (2013-2023). The cost is estimated at 1.19 billion euros. The project will also associate some important North American and Japanese partners. It will be coordinated at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, by neuroscientist Henry Markram with co-directors Karlheinz Meier of Heidelberg University, Germany, and Richard Frackowiak of Centre Hospitalier Universitaire Vaudois (CHUV) and the University of Lausanne (UNIL).

The Swiss Contribution

"Switzerland plays a vital role in the Human Brain Project. Henry Markram and his team at EPFL will coordinate the project and will also be responsible for the development and operation of the project’s Brain Simulation Platform. Richard Frackowiak and his team will be in charge of the project’s medical informatics platform; the Swiss Supercomputing Centre in Lugano will provide essential supercomputing facilities. Many other Swiss groups are also contributing to the project. Through the ETH Board, the Swiss Federal Government has allocated 75 million CHF (approximately 60 million Euros) for the period 2013-2017, to support the efforts of both Henry Markram’s laboratory at EPFL and the Swiss Supercomputing Center in Lugano. The Canton of Vaud will give 35 million CHF (28 million Euros) to build a new facility called Neuropolis for in silico life science, and centered around the Human Brain Project. This building will also be supported by the Swiss Confederation, the Rolex Group and third-party sponsors.

"The selection of the Human Brain Project as a FET Flagship is the result of more than three years of preparation and a rigorous and severe evaluation by a large panel of independent, high profile scientists, chosen by the European Commission. In the coming months, the partners will negotiate a detailed agreement with the Community for the initial first two and a half year ramp-up phase (2013-mid 2016). The project will begin work in the closing months of 2013."

"The Reading Brain in the Digital Age: The Science of Paper versus Screens" April 11, 2013

On April 11, 2013 scientificamerican.com, the online version of Scientific American magazine, published "The Reading Brain in the Digital Age: The Science of Paper versus Screens" by Ferris Jabr. From this I quote a portion:

"Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s, however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales.

"Even so, evidence from laboratory experiments, polls and consumer reports indicates that modern screens and e-readers fail to adequately recreate certain tactile experiences of reading on paper that many people miss and, more importantly, prevent people from navigating long texts in an intuitive and satisfying way. In turn, such navigational difficulties may subtly inhibit reading comprehension. Compared with paper, screens may also drain more of our mental resources while we are reading and make it a little harder to remember what we read when we are done. A parallel line of research focuses on people's attitudes toward different kinds of media. Whether they realize it or not, many people approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper.

" 'There is physicality in reading,' says developmental psychologist and cognitive scientist Maryanne Wolf of Tufts University, 'maybe even more than we want to think about as we lurch into digital reading—as we move forward perhaps with too little reflection. I would like to preserve the absolute best of older forms, but know when to use the new.' "

The First NeuroGaming Conference Takes Place May 1 – May 2, 2013

On May 1-2, 2013 the first NeuroGaming Conference and Expo took place at the YetiZen Innovation Lab, 540 Howard St., San Francisco. It was organized by Zack Lynch, founder of the Neurotechnology Industry Organization. Three hundred people attended.

A New Software Ecosystem to Program SyNAPSE Chips August 8, 2013

On August 8, 2013 Dharmendra S. Modha, senior manager and principal investigator at the Cognitive Computing Group at IBM Almaden Research Center, unveiled a new software ecosystem to program SyNAPSE chips, which "have an architecture inspired by the function, low power, and compact volume of the brain."

“ 'We are working to create a FORTRAN for synaptic computing chips. While complementing today’s computers, this will bring forth a fundamentally new technological capability in terms of programming and applying emerging learning systems.'

"To advance and enable this new ecosystem, IBM researchers developed the following breakthroughs that support all aspects of the programming cycle from design through development, debugging, and deployment: 

"-         Simulator: A multi-threaded, massively parallel and highly scalable functional software simulator of a cognitive computing architecture comprising a network of neurosynaptic cores.  

"-         Neuron Model: A simple, digital, highly parameterized spiking neuron model that forms a fundamental information processing unit of brain-like computation and supports a wide range of deterministic and stochastic neural computations, codes, and behaviors. A network of such neurons can sense, remember, and act upon a variety of spatio-temporal, multi-modal environmental stimuli. 

"-         Programming Model: A high-level description of a “program” that is based on composable, reusable building blocks called “corelets.” Each corelet represents a complete blueprint of a network of neurosynaptic cores that specifies a based-level function. Inner workings of a corelet are hidden so that only its external inputs and outputs are exposed to other programmers, who can concentrate on what the corelet does rather than how it does it. Corelets can be combined to produce new corelets that are larger, more complex, or have added functionality. 

"-         Library: A cognitive system store containing designs and implementations of consistent, parameterized, large-scale algorithms and applications that link massively parallel, multi-modal, spatio-temporal sensors and actuators together in real-time. In less than a year, the IBM researchers have designed and stored over 150 corelets in the program library.  

"-         Laboratory: A novel teaching curriculum that spans the architecture, neuron specification, chip simulator, programming language, application library and prototype design models. It also includes an end-to-end software environment that can be used to create corelets, access the library, experiment with a variety of programs on the simulator, connect the simulator inputs/outputs to sensors/actuators, build systems, and visualize/debug the results" (http://www-03.ibm.com/press/us/en/pressrelease/41710.wss, accessed 10-20-2013).

Monkeys Use Brain-Machine Interface to Move Two Virtual Arms with their Brain Activity November 6, 2013

In a study led by neuroscientist Miguel A. L. Nicolelis and the Nicolelis Lab at Duke University, monkeys learned to control the movement of both arms on an avatar using just their brain activity. The findings, published on November 6, 2013 in Science Translational Medicine, advanced efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients, and raised the hope that patients might eventually be able to use brain-machine interfaces (BMIs) to control two arms. To enable the monkeys to control two virtual arms, researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains, the largest number of neurons recorded and reported to date.

"While the monkeys were moving two hands, the researchers saw distinct patterns of neuronal activity that differed from the activity seen when a monkey moved each hand separately. Through such research on brain–machine interfaces, scientists may not only develop important medical devices for people with movement disorders, but they may also learn about the complex neural circuits that control behavior....

“Simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal population would do when both arms were engaged together in a bimanual task,” said Nicolelis in a released statement. “This finding points to an emergent brain property – a non-linear summation – for when both hands are engaged at once” (www.technologyreview.com/view/521471/monkeys-drive-two-virtual-arms-with-their-thoughts/, accessed 11-09-2013).

P. J. Ifft, S. Shokur, Z. Li, M. A. Lebedev, M. A. L. Nicolelis, "A Brain-Machine Interface Enables Bimanual Arm Movements in Monkeys," Sci. Transl. Med. 5, 210ra154 (2013).
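Nicolelis's "non-linear summation" remark can be illustrated with a small synthetic experiment: if population activity during bimanual movement contains terms that are absent when each arm moves alone, a linear decoder fit under the assumption that single-arm activity simply sums will degrade on bimanual data. The sketch below builds exactly such synthetic data; it is not the study's decoder, data, or analysis, and the cross-term is invented purely to make the point visible.

```python
# Illustration of "non-linear summation": synthetic firing rates include a
# cross-term that appears only when both arms move, so a decoder fit as if
# single-arm activity simply summed generalizes poorly to bimanual activity.
import numpy as np

rng = np.random.default_rng(3)
n_units, n_trials = 100, 3000

A_l = rng.normal(size=(n_units, 2))          # tuning to left-arm velocity (2-D for brevity)
A_r = rng.normal(size=(n_units, 2))          # tuning to right-arm velocity
A_x = 3.0 * rng.normal(size=(n_units, 4))    # extra term engaged only bimanually

def rates(v_l, v_r, bimanual):
    base = v_l @ A_l.T + v_r @ A_r.T
    if bimanual:
        cross = np.concatenate([v_l * v_r, np.abs(v_l - v_r)], axis=1) @ A_x.T
        base = base + cross
    return base + 0.3 * rng.normal(size=base.shape)

def fit(X, Y):
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def r2(Y, Y_hat):
    return 1 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean(0)) ** 2)

v_l = rng.normal(size=(n_trials, 2))
v_r = rng.normal(size=(n_trials, 2))
uni = rates(v_l, v_r, bimanual=False)        # activity as if the two arms simply summed
bi = rates(v_l, v_r, bimanual=True)          # activity with the bimanual cross-term
target = np.hstack([v_l, v_r])

W_uni = fit(uni, target)                     # decoder fit on "summed" activity
W_bi = fit(bi, target)                       # decoder fit on bimanual activity itself

print("R^2, summed-activity decoder on bimanual data:", round(r2(target, bi @ W_uni), 3))
print("R^2, decoder fit on bimanual data:            ", round(r2(target, bi @ W_bi), 3))
```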

DeepFace, Facial Verification Software Developed at Facebook, Approaches Human Ability March 17, 2014

On March 17, 2014 MIT Technology Review published an article by Tom Simonite on Facebook's facial recognition software, DeepFace, from which I quote:

"Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.

"That’s a significant advance over previous face-matching software, and it demonstrates the power of a new approach to artificial intelligence known as deep learning, which Facebook and its competitors have bet heavily on in the past year (see 'Deep Learning'). This area of AI involves software that uses networks of simulated neurons to learn to recognize patterns in large amounts of data.

"'You normally don’t see that sort of improvement,' says Yaniv Taigman, a member of Facebook’s AI team, a research group created last year to explore how deep learning might help the company (see 'Facebook Launches Advanced AI Effort'). 'We closely approach human performance,' says Taigman of the new software. He notes that the error rate has been reduced by more than a quarter relative to earlier software that can take on the same task.

"Facebook’s new software, known as DeepFace, performs what researchers call facial verification (it recognizes that two images show the same face), not facial recognition (putting a name to a face). But some of the underlying techniques could be applied to that problem, says Taigman, and might therefore improve Facebook’s accuracy at suggesting whom users should tag in a newly uploaded photo.

"However, DeepFace remains purely a research project for now. Facebook released a research paper on the project last week, and the researchers will present the work at the IEEE Conference on Computer Vision and Pattern Recognition in June. 'We are publishing our results to get feedback from the research community,' says Taigman, who developed DeepFace along with Facebook colleagues Ming Yang and Marc’Aurelio Ranzato and Tel Aviv University professor Lior Wolf.

"DeepFace processes images of faces in two steps. First it corrects the angle of a face so that the person in the picture faces forward, using a 3-D model of an 'average' forward-looking face. Then the deep learning comes in as a simulated neural network works out a numerical description of the reoriented face. If DeepFace comes up with similar enough descriptions from two different images, it decides they must show the same face.

"The performance of the final software was tested against a standard data set that researchers use to benchmark face-processing software, which has also been used to measure how humans fare at matching faces.

"Neeraj Kumar, a researcher at the University of Washington who has worked on face verification and recognition, says that Facebook’s results show how finding enough data to feed into a large neural network can allow for significant improvements in machine-learning software. 'I’d bet that a lot of the gain here comes from what deep learning generally provides: being able to leverage huge amounts of outside data in a much higher-capacity learning model,' he says.

"The deep-learning part of DeepFace consists of nine layers of simple simulated neurons, with more than 120 million connections between them. To train that network, Facebook’s researchers tapped a tiny slice of data from their company’s hoard of user images—four million photos of faces belonging to almost 4,000 people. 'Since they have access to lots of data of this form, they can successfully train a high-capacity model,' says Kumar.

The First Production-Scale Neuromorphic Computing Chip August 8, 2014

On August 8, 2014 scientists from IBM and Cornell University, including Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, and Dharmendra S. Modha, reported in the journal Science the first production-scale neuromorphic computing chip—a significant landmark in the development of cognitive computing. The chip, named TrueNorth, attempted to mimic the way brains recognize patterns, relying on densely interconnected webs of transistors similar to neural networks in the brain. It employed an efficient, scalable, and flexible non–von Neumann architecture. Von Neumann architecture, in which memory and processing were separated and information flowed back and forth between the two components, remained the standard computer architecture from the design of the earliest electronic computers to 2014, so the new neuromorphic chip design represented a radical departure.

"The chip contains 5.4 billion transistors, yet draws just 70 milliwatts of power. By contrast, modern Intel processors in today’s personal computers and data centers may have 1.4 billion transistors and consume far more power — 35 to 140 watts.

"Today’s conventional microprocessors and graphics processors are capable of performing billions of mathematical operations a second, yet the new chip system clock makes its calculations barely a thousand times a second. But because of the vast number of circuits working in parallel, it is still capable of performing 46 billion operations a second per watt of energy consumed, according to IBM researchers.

"The TrueNorth has one million 'neurons,' about as complex as the brain of a bee.

“ 'It is a remarkable achievement in terms of scalability and low power consumption,' said Horst Simon, deputy director of the Lawrence Berkeley National Laboratory.

"He compared the new design to the advent of parallel supercomputers in the 1980s, which he recalled was like moving from a two-lane road to a superhighway.

"The new approach to design, referred to variously as neuromorphic or cognitive computing, is still in its infancy, and the IBM chips are not yet commercially available. Yet the design has touched off a vigorous debate over the best approach to speeding up the neural networks increasingly used in computing.

"The idea that neural networks might be useful in processing information occurred to engineers in the 1940s, before the invention of modern computers. Only recently, as computing has grown enormously in memory capacity and processing speed, have they proved to be powerful computing tools" (John Markoff, "IBM Designs a New Chip that Functions Like A Brain," The New York Times, August 7, 2014).

Merolla et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science 345, no. 6197 (August 8, 2014): 668-673.

"Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts" (Abstract).

Three Breakthroughs that Finally Unleashed AI on the World October 27, 2014

In "The Three Breakthroughs That Have Finally Unleased AI on the World", Wired Magazine, October 27, 2014, writer Kevin Kelly of Pacifica, California explained how breakthroughs in cheap parallel computation, big data, and better algorithms were enabling new AI-based services that were previously the domain of sci-fi and academic white papers. Within the near future AI would play greater and greater roles in aspects of everyday life, in products like Watson developed by IBM, and products from Google, Facebook and other companies. More significant than these observations were Kelly's views about the impact that these developments would have on our lives and how we may understand the difference between machine and human intelligence:

"If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers. Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists.

"In fact, this won't really be intelligence, at least not as we've come to think of it. Indeed, intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.

"What we want instead of intelligence is artificial smartness. Unlike general intelligence, smartness is focused, measurable, specific. It also can think in ways completely different from human cognition. A cute example of this nonhuman thinking is a cool stunt that was performed at the South by Southwest festival in Austin, Texas, in March of this year. IBM researchers overlaid Watson with a culinary database comprising online recipes, USDA nutritional facts, and flavor research on what makes compounds taste pleasant. From this pile of data, Watson dreamed up novel dishes based on flavor profiles and patterns from existing dishes, and willing human chefs cooked them. One crowd favorite generated from Watson's mind was a tasty version of fish and chips using ceviche and fried plantains. For lunch at the IBM labs in Yorktown Heights I slurped down that one and another tasty Watson invention: Swiss/Thai asparagus quiche. Not bad! It's unlikely that either one would ever have occurred to humans.

"Nonhuman intelligence is not a bug, it's a feature. The chief virtue of AIs will be their alien intelligence. An AI will think about food differently than any chef, allowing us to think about food differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science and art. The alienness of artificial intelligence will become more valuable to us than its speed or power. . . .
