4406 entries. 94 themes. Last updated December 26, 2016.

Artificial Intelligence / Machine Learning / Deep Learning Timeline


1750 – 1800

Bayes's Theorem for Calculating Inverse Probabilities 1763

On April 7, 1761 Thomas Bayes, an English clergyman and mathematician, died at the age of 59. Two years after his death, his paper, entitled "An Essay Towards Solving a Problem in the Doctrine of Chances," was published in the Philosophical Transactions of the Royal Society 53 (1763) 370-418. Bayes's paper enunciated Bayes's Theorem for calculating "inverse probabilities"—the basis for methods of extracting patterns from data in decision analysis, data mining, statistical learning machines, Bayesian networks, and Bayesian inference.

"Whereas the ordinary rules of probability address such problems as 'what is the probability of drawing a yellow marble, if you draw three marbles from a sack containing 10 yellow marbles and 90 white marbles,' a Bayesian might ask the question, 'if I draw five marbles from a sack, and one is yellow and four are white, what is the probable distribution of the marbles in the sack?'  The advantage of inverse probability is that predictions can be continually refined as experience accumulates, so that if you draw five more marbles, and they are all white, that will change the probability prediction (and drawing a blue marble would drastically alter the situation), but Bayes’ theorem can easily accommodate any and all new information.  Bayes wrote his classic paper, 'An Essay towards solving a Problem in the Doctrine of Chances,' sometime in the late 1740s, but he never published it, for reasons unknown. After his death, his friend Richard Price found the paper among Bayes’ effects, and Price sent it for publication to John Canton of the Royal Society of London (apparently modifying the original paper considerably), and it appeared in the Philosophical Transactions in 1763. No one paid it the slightest attention. Ten years later, the Frenchman Pierre Simon Laplace independently discovered the rules of inverse probability, and although he later learned about Bayes’ paper and gave him priority, for the next century and a half Laplace got most of the credit (when credit was given at all--most statisticians did not consider Bayesian methods to be reputable, since they often involved making hunches and using gut feelings).  It wasn't until 1950 that the famous geneticist and mathematician R.A. Fisher first applied Bayes’ name to the methods of inverse probability, and since then, Bayes’ reputation has been gradually restored" (William B. Ashworth, Jr., email received on April 7, 2014.)

Hook & Norman, Origins of Cyberspace (2002) no. 1.

(This entry was last revised on April 7, 2014.)


Von Kempelen "Invents" the Chess-Playing Turk & Edgar Allan Poe Compares it to Babbage's Difference Engine No. 1 1769 – 1836

In 1769 Hungarian author and inventor Wolfgang von Kempelen (Johann Wolfgang Ritter von Kempelen de Pázmánd; Hungarian: Kempelen Farkas) built his chess-playing Turk, an automaton that purported to play chess. Although the machine displayed an elaborate gear mechanism, its cabinet actually concealed a man controlling the moves of the machine.

Von Kempelen's Turk became a commercial sensation, deceiving a very large number of people, and became the most famous, or the most notorious, automaton in history. It must also have been something of an open secret within the professional chess community, because over the years numerous chess masters were hired so that the Turk could challenge all comers. With a skilled concealed operator the Turk won most of the games played during its demonstrations around Europe and the Americas for some 84 years, defeating many challengers including Napoleon Bonaparte and Benjamin Franklin. Although many had suspected the hidden human operator, the hoax was first exposed in print by the English engineer Robert Willis in his illustrated pamphlet, An Attempt to Analyse the Automaton Chess Player of Mr. de Kempelen. With an Easy Method of Imitating the Movements of the Celebrated Figure. . .  (London, 1821). The operator or operators working within the mechanism during Kempelen's original tour remain a mystery; however, after the engineer Johann Nepomuk Mälzel purchased the device in 1804, and exhibited it first in Europe and from 1826 in America, the chess masters who secretly operated it included Johann Allgaier, Hyacinthe Henri Boncourt, Aaron Alexandre, William Lewis, Jacques Mouret, and William Schlumberger. In 1818, while Boncourt was briefly the operator of the Turk, he caught the flu: his chess performance suffered, and he could not control his coughing, which could be heard by spectators, to the embarrassment of Mälzel. For this reason Mälzel added some noisy gears to the Turk, which had no purpose other than to cover any sound that might come from the operator.

One of the most insightful commentators on the Turk was the American writer, poet, editor, literary critic, and magazinist Edgar Allan Poe, who in April 1836 published "Maelzel's Chess Player" in the Southern Literary Messenger, issued from Richmond, Virginia. In this article on automata Poe provided a very closely reasoned explanation of the concealed human operation of von Kempelen's Turk, which he had seen exhibited in Richmond by Maelzel a few weeks earlier. 

Poe also briefly compared von Kempelen's Turk to Babbage's Difference Engine No. 1, which was limited to the computation of short mathematical tables, suggesting essentially that if the Turk were fully automated and had the ability to use the results of one logical operation to make a decision about the next—what was later called "conditional branching"—it would be far superior to Babbage's machine. Babbage later designed this feature into his Analytical Engine.

Here is Poe's comparison of the two machines:

"But if these machines were ingenious, what shall we think of the calculating machine of Mr. Babbage? What shall we think of an engine of wood and metal which can not only compute astronomical and navigation tables to any given extent, but render the exactitude of its operations mathematically certain through its power of correcting its possible errors? What shall we think of a machine which can not only accomplish all this, but actually print off its elaborate results, when obtained, without the slightest intervention of the intellect of man? It will, perhaps, be said, in reply, that a machine such as we have described is altogether above comparison with the Chess-Player of Maelzel. By no means — it is altogether beneath it — that is to say provided we assume (what should never for a moment be assumed) that the Chess-Player is a pure machine, and performs its operations without any immediate human agency. Arithmetical or algebraical calculations are, from their very nature, fixed and determinate. Certain data being given, certain results necessarily and inevitably follow. These results have dependence upon nothing, and are influenced by nothing but the data originally given. And the question to be solved proceeds, or should proceed, to its final determination, by a succession of unerring steps liable to no change, and subject to no modification. This being the case, we can without difficulty conceive the possibility of so arranging a piece of mechanism, that upon starting it in accordance with the data of the question to be solved, it should continue its movements regularly, progressively, and undeviatingly towards the required solution, since these movements, however complex, are never imagined to be otherwise than finite and determinate. But the case is widely different with the Chess-Player. With him there is no determinate progression. No one move in chess necessarily follows upon any one other. 
From no particular disposition of the men at one period of a game can we predicate their disposition at a different period. Let us place the first move in a game of chess, in juxta-position with the data of an algebraical question, and their great difference will be immediately perceived. From the latter — from the data — the second step of the question, dependent thereupon, inevitably follows. It is modelled by the data. It must be thus and not otherwise. But from the first move in the game of chess no especial second move follows of necessity. In the algebraical question, as it proceeds towards solution, the certainty of its operations remains altogether unimpaired. The second step having been a consequence of the data, the third step is equally a consequence of the second, the fourth of the third, the fifth of the fourth, and so on, and not possibly otherwise, to the end. But in proportion to the progress made in a game of chess, is the uncertainty of each ensuing move. A few moves having been made, no step is certain. Different spectators of the game would advise different moves. All is then dependent upon the variable judgment of the players. Now even granting (what should not be granted) that the movements of the Automaton Chess-Player were in themselves determinate, they would be necessarily interrupted and disarranged by the indeterminate will of his antagonist. There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage, and if we choose to call the former a pure machine we must be prepared to admit that it is, beyond all comparison, the most wonderful of the inventions of mankind. Its original projector, however, Baron Kempelen, had no scruple in declaring it to be a "very ordinary piece of mechanism — a bagatelle whose effects appeared so marvellous only from the boldness of the conception, and the fortunate choice of the methods adopted for promoting the illusion." 
But it is needless to dwell upon this point. It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, a priori. The only question then is of the manner in which human agency is brought to bear. Before entering upon this subject it would be as well to give a brief history and description of the Chess-Player for the benefit of such of our readers as may never have had an opportunity of witnessing Mr. Maelzel's exhibition."

Even though the machine intelligence exhibited by the Turk was an illusion, von Kempelen's automaton was much later viewed as an analog to efforts in computer chess and artificial intelligence.

(This entry was last revised on 12-27-2014.)


1850 – 1875

Samuel Butler Novel "Erewhon" Describes Artificial Consciousness 1872

In 1872 Erewhon: or, Over the Range, a satirical utopian novel by the English writer Samuel Butler, was published anonymously in London. A notable aspect of this satire on Victorian society, expanded from letters that Butler originally published in the New Zealand newspaper The Press, was that the Erewhonians believed machines were potentially dangerous, and that Erewhonian society had undergone a revolution that destroyed most mechanical inventions. In the section of the satire called "The Book of the Machines" Butler appears to have imagined the possibility of machine consciousness, or artificial consciousness, and of machines replicating themselves.


1910 – 1920

Torres y Quevedo Invents the First Decision-Making Automaton 1912 – 1915

In 1912 the Spanish civil engineer and mathematician Leonardo Torres y Quevedo, Director of the Laboratory of Applied Mechanics at the Ateneo Científico, Literario y Artístico de Madrid, built the first decision-making automaton — a chess-playing machine that pitted the machine's rook and king against the king of a human opponent.  Torres's machine, which he called El Ajedrecista (The Chessplayer), used electromagnets under the board to "play" the endgame of rook and king against the lone king.

"Well, not precisely play. But the machine could, in a totally unassisted and automated fashion, deliver mate with King and Rook against King. This was possible regardless of the initial position of the pieces on the board. For the sake of simplicity, the algorithm used to calculate the positions didn't always deliver mate in the minimum amount of moves possible, but it did mate the opponent flawlessly every time. The machine, dubbed El Ajedrecista (Spanish for “the chessplayer”), was built in 1912 and made its public debut during the Paris World Fair of 1914, creating great excitement at the time. It used a mechanical arm to make its moves and electrical sensors to detect its opponent's replies." (http://www.chessbase.com/newsprint.asp?newsid=1799, accessed 10-31-2012).

The implications of Torres's machines were not lost on contemporary observers. On November 6, 1915 Scientific American, in its Supplement 2079, pp. 296-298, published an illustrated article entitled "Torres and His Remarkable Automatic Devices. He Would Substitute Machinery for the Human Mind."


1920 – 1930

Von Neumann Invents the Theory of Games 1928

In 1928 the Hungarian-American mathematician, physicist, economist and polymath John von Neumann, then working at the Humboldt-Universität zu Berlin, published "Zur Theorie der Gesellschaftsspiele" in Mathematische Annalen 100, 295-320. This paper, "On the Theory of Parlor Games," propounded the minimax theorem, inventing the theory of games.


1930 – 1940

Alan Turing Publishes "On Computable Numbers," Describing What Came to be Called the "Turing Machine" November 30, 1936

In the issues dated November 30 and December 23, 1936 of the Proceedings of the London Mathematical Society the English mathematician Alan Turing published "On Computable Numbers," a mathematical description of what he called a universal machine—an abstraction that could, in principle, solve any mathematical problem that could be presented to it in symbolic form. Turing modeled the universal machine's processes on the functional processes of a human carrying out mathematical computation. In a following issue of the same journal Turing published a two-page correction to his paper.

Undoubtedly the most famous theoretical paper in the history of computing, "On Computable Numbers" is a mathematical description of an imaginary computing device designed to replicate the mathematical "states of mind" and symbol-manipulating abilities of a human computer. Turing conceived of the universal machine as a means of answering the last of the three questions about mathematics posed by David Hilbert in 1928: (1) is mathematics complete; (2) is mathematics consistent; and (3) is mathematics decidable?

Hilbert's final question, known as the Entscheidungsproblem, concerns whether there exists a definite method—or, in the suggestive words of Turing's teacher Max Newman, a "mechanical process"—that can be applied to any mathematical assertion, and which is guaranteed to produce a correct decision as to whether that assertion is true. The Austrian logician Kurt Gödel had already shown that arithmetic (and by extension mathematics) was incomplete and could not prove its own consistency. Turing showed, by means of his universal machine, that mathematics was also undecidable.

To demonstrate this, Turing came up with the concept of "computable numbers," which are numbers defined by some definite rule, and thus calculable on the universal machine. These computable numbers, "would include every number that could be arrived at through arithmetical operations, finding roots of equations, and using mathematical functions like sines and logarithms—every number that could possibly arise in computational mathematics" (Hodges, Alan Turing: The Enigma [1983] 100). Turing then showed that these computable numbers could give rise to uncomputable ones—ones that could not be calculated using a definite rule—and that therefore there could be no "mechanical process" for solving all mathematical questions, since an uncomputable number was an example of an unsolvable problem.
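The mechanics Turing described can be suggested with a toy simulator: a finite table of (state, symbol) rules reads and writes symbols on an unbounded tape. The simulator below is a modern simplification, and the two-state table is an assumed example in the spirit of Turing's first machine (which computes the binary expansion of 1/3), not a transcription of the tables in the 1936 paper.

```python
# A toy Turing-machine simulator: the tape is a sparse dict of cells,
# and each step looks up (state, symbol) -> (write, move, next_state).

def run_turing_machine(table, state, steps):
    """Run the rule table for a fixed number of steps; return tape contents."""
    tape = {}          # sparse tape; unwritten cells read as blank
    head = 0
    for _ in range(steps):
        symbol = tape.get(head, " ")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A two-state machine printing the "computable number" 0 1 0 1 ...
# (0.010101... in binary is 1/3).
table = {
    ("a", " "): ("0", "R", "b"),
    ("b", " "): ("1", "R", "a"),
}
print(run_turing_machine(table, "a", 6))   # -> 010101
```

A number is "computable" in Turing's sense exactly when some such finite table writes out its digits; his universal machine is one fixed table that can simulate any other table supplied to it as data.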

From 1936 to 1938 Turing spent more than a year at Princeton University studying mathematical logic with Alonzo Church, who was pursuing research in recursion theory. It was Church who gave Turing's idea of a "universal machine" the name "Turing machine," coining the term in his relatively brief 1937 review of "On Computable Numbers." With regard to Turing's proof of the unsolvability of Hilbert's Entscheidungsproblem, Church acknowledged that "computability by a Turing machine. . . has the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately—i.e. without the necessity of proving elementary theorems." Church, working independently of Turing, had arrived at his own answer to the Entscheidungsproblem a few months earlier. Norman, From Gutenberg to the Internet, Reading 7.2.  

Independently of Alan Turing, the mathematician and logician Emil Post of the City College of New York developed, and published in October 1936, a mathematical model of computation that was essentially equivalent to the Turing machine. Intending this as the first of a series of models of equivalent power but increasing complexity, he titled his paper "Finite Combinatory Processes—Formulation 1." This model is sometimes called "Post's machine" or a Post-Turing machine.

In 1937 Turing and John von Neumann had their first discussions about computing and what would later be called "artificial intelligence" (AI). Always interested in practical applications of computing as well as theory, and believing that war with Germany was inevitable, Turing also built, while at Princeton in 1937, an experimental electromechanical cryptanalysis machine capable of binary multiplication in a university machine shop. After returning to England, on September 4, 1939, the day after Britain and France declared war on Germany, Turing reported to the Government Code and Cypher School at Bletchley Park, in the town of Bletchley, England.

♦ In June 2013 it was my pleasure to purchase the famous copy of the offprint of "On Computable Numbers" along with the offprint of "On Computable Numbers . . . A Correction" that Turing presented to the English philosopher R. B. Braithwaite. One of very few copies in existence of the offprint, and possibly the only copy in private hands, the offprint sold for £205,000.  It was a price record for any offprint on a scientific or medical subject, for any publication in the history of computing, and probably the highest price paid for any scientific publication issued in the twentieth century.

Norman, From Gutenberg to the Internet, Reading 7.1. Hook & Norman, Origins of Cyberspace (2002) No. 394. 

(This entry was last revised on 12-31-2014.)


1940 – 1950

McCulloch & Pitts Publish the First Mathematical Model of a Neural Network 1943

In 1943 the American neurophysiologist and cybernetician Warren McCulloch, of the University of Illinois at Chicago, and the self-taught logician and cognitive psychologist Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity," describing the McCulloch-Pitts neuron, the first mathematical model of a neural network.

Building on ideas in Alan Turing’s “On Computable Numbers”, McCulloch and Pitts's paper provided a way to describe brain functions in abstract terms, and showed that simple elements connected in a neural network can have immense computational power. The paper received little attention until its ideas were applied by John von Neumann, Norbert Wiener, and others.
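A common textbook formalization of the McCulloch-Pitts unit (a simplification of the 1943 calculus, not its original notation) is an all-or-none threshold element in which any active inhibitory input vetoes firing. With suitable thresholds such units compute Boolean logic, which is the sense in which "simple elements connected in a neural network can have immense computational power":

```python
# A minimal McCulloch-Pitts unit: fires (1) when the excitatory inputs
# reach the threshold and no inhibitory input is active.

def mcculloch_pitts(excitatory, inhibitory, threshold):
    if any(inhibitory):               # a single inhibitory input vetoes firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logic gates fall out of threshold settings (inputs are 0/1):
AND = lambda x, y: mcculloch_pitts([x, y], [], threshold=2)
OR  = lambda x, y: mcculloch_pitts([x, y], [], threshold=1)
NOT = lambda x:    mcculloch_pitts([1], [x], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```

Since networks of these gates can realize any Boolean function, layered McCulloch-Pitts nets can in principle compute anything a logic circuit can.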


Von Neumann & Morgenstern Issue "Theory of Games and Economic Behavior" 1944

In 1944 the mathematician, physicist, and economist John von Neumann and the economist Oskar Morgenstern published Theory of Games and Economic Behavior with Princeton University Press.

Quantitative mathematical models for games such as poker or bridge at one time appeared impossible, since games like these involve free choices by the players at each move, and each move reacts to the moves of other players. However, in the 1920s John von Neumann single-handedly invented game theory, introducing the general mathematical concept of "strategy" in a paper on games of chance (Mathematische Annalen 100 [1928] 295-320). This contained the proof of his "minimax" theorem, which says that "a strategy exists that guarantees, for each player, a maximum payoff assuming that the adversary acts so as to minimize that payoff." The "minimax" principle, a key component of the game-playing computer programs developed in the 1950s and 1960s by Arthur Samuel, Allen Newell, Herbert Simon, and others, was more fully articulated and explored in Theory of Games and Economic Behavior, co-authored by von Neumann and Morgenstern.
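The minimax idea can be illustrated on a small zero-sum matrix game; the payoff entries below are invented for illustration. Restricted to pure strategies the two values need not coincide (they differ here); von Neumann's theorem guarantees equality once mixed, i.e. randomized, strategies are allowed.

```python
# A zero-sum matrix game: rows are player 1's strategies, columns are
# player 2's, and each entry is the payoff to player 1.
payoff = [
    [3, -1],
    [0,  2],
]

# Player 1 maximizes the worst case over player 2's replies (maximin)...
maximin = max(min(row) for row in payoff)

# ...while player 2 minimizes the best player 1 can achieve (minimax).
minimax = min(max(row[j] for row in payoff) for j in range(len(payoff[0])))

print(maximin, minimax)   # -> 0 2  (maximin <= minimax always holds)
```

The gap between 0 and 2 is what mixed strategies close: by randomizing, each player can guarantee a single common value for the game.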

Game theory, which draws upon mathematical logic, set theory and functional analysis, attempts to describe in mathematical terms the decision-making strategies used in games and other competitive situations. The Von Neumann-Morgenstern theory assumes (1) that people's preferences will remain fixed throughout; (2) that they will have wide knowledge of all available options; (3) that they will be able to calculate their own best interests intelligently; and (4) that they will always act to maximize these interests. Attempts to apply the theory in real-world situations have been problematical, and the theory has been criticized by many, including AI pioneer Herbert Simon, as failing to model the actual decision-making process, which typically takes place in circumstances of relative ignorance where only a limited number of options can be explored.

Von Neumann revolutionized mathematical economics. Had he not suffered an early death from cancer in 1957, most probably he would have received the first Nobel Prize in economics. (The first Nobel prize in economics was awarded in 1969; it cannot be awarded posthumously.) Several mathematical economists influenced by von Neumann's ideas later received the Nobel Prize in economics. 

Hook & Norman, Origins of Cyberspace (2002) no. 953.


One of the First Studies of Pattern Recognition 1947

In 1947 the American logician Walter Pitts and the psychiatrist and neuroscientist Warren S. McCulloch published "How We Know Universals: The Perception of Auditory and Visual Forms," Bulletin of Mathematical Biophysics 9 (1947) 127-147. In this expansion of their "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943), Pitts and McCulloch implemented their notions by showing how the anatomy of the cerebral cortex might accommodate the identification of form independent of its angular size in the image, and other such operations in perception.


Alan Turing's Contributions to Artificial Intelligence July 1948 – 1950

In July and August 1948 Alan Turing wrote a report for the National Physical Laboratory entitled Intelligent Machinery. In the report he stated that a thinking machine should be given the blank mind of an infant instead of an adult mind filled with opinions and ideas. The report contained an early discussion of neural networks. Turing estimated that it would take a battery of programmers fifty years to bring this learning machine from childhood to adult mental maturity. The report was not published until 1968.

In September 1948 Turing joined the computer project at Manchester University as Deputy Director and chief programmer.

In 1950 Turing published "Computing Machinery and Intelligence" in the philosophical journal Mind, in which he described the "Turing Test" for determining whether a machine is "intelligent."

"Turing predicted that machines would eventually be able to pass the test; in fact, he estimated that by the year 2000, machines with around 100 MB of storage would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase "thinking machine" contradictory. (In practice, from 2009-2012, the Loebner Prize chatterbot contestants only managed to fool a judge once, and that was only due to the human contestant pretending to be a chatbot.) He further predicted that machine learning would be an important part of building powerful machines, a claim considered plausible by contemporary researchers in artificial intelligence.

"In a 2008 paper submitted to 19th Midwest Artificial Intelligence and Cognitive Science Conference, Dr. Shane T. Mueller predicted a modified Turing Test called a "Cognitive Decathlon" could be accomplished within 5 years.

"By extrapolating an exponential growth of technology over several decades, futurist Ray Kurzweil predicted that Turing test-capable computers would be manufactured in the near future. In 1990, he set the year around 2020. By 2005, he had revised his estimate to 2029.

"The Long Bet Project Bet Nr. 1 is a wager of $20,000 between Mitch Kapor (pessimist) and Ray Kurzweil (optimist) about whether a computer will pass a lengthy Turing Test by the year 2029. During the Long Now Turing Test, each of three Turing Test Judges will conduct online interviews of each of the four Turing Test Candidates (i.e., the Computer and the three Turing Test Human Foils) for two hours each for a total of eight hours of interviews. The bet specifies the conditions in some detail" (Wikipedia article on Turing Test, accessed 06-15-2014).


Donald Hebb Formulates the "Hebb Synapse" in Neuropsychological Theory 1949

In 1949 Canadian psychologist Donald O. Hebb, then professor at McGill University, issued The Organization of Behavior. A Neuropsychological Theory. This work contained the first explicit statement of the physiological learning rule for synaptic modification that became known as the "Hebb synapse." His theory became known as Hebbian theory, Hebb's rule, Hebb's postulate, and cell assembly theory. Models which follow this theory are said to exhibit "Hebbian learning." As Hebb wrote in the book: "When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

"This is often paraphrased as 'Neurons that fire together wire together.' It is commonly referred to as Hebb's Law.

"The combination of neurons which could be grouped together as one processing unit, Hebb referred to as 'cell-assemblies'. And their combination of connections made up the ever-changing algorithm which dictated the brain's response to stimuli.

"Not only did Hebb's model for the working of the mind influence how psychologists understood the processing of stimuli within the mind but also it opened up the way for the creation of computational machines that mimicked the biological processes of a living nervous system. And while the dominant form of synaptic transmission in the nervous system was later found to be chemical, modern artificial neural networks are still based on the transmission of signals via electrical impulses that Hebbian theory was first designed around" (Wikipedia article on Hebbian theory, accessed 01-01-2014).


1950 – 1960

Shannon Issues the First Technical Paper on Computer Chess March 1950

In March 1950 Claude Shannon of Bell Labs, Murray Hill, New Jersey, published "Programming a Computer for Playing Chess," Philosophical Magazine, Ser. 7, 41, no. 314. This was the first technical paper on computer chess; however, the paper was entirely theoretical, containing no reference to Shannon programming an actual computer to play a game.
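The core of what Shannon called a "Type A" strategy was depth-limited minimax search over the tree of moves, with a static evaluation function scoring the leaf positions. A toy sketch, where the tree shape and its leaf scores are invented for illustration:

```python
# Depth-limited minimax over a game tree: interior nodes are lists of
# children; leaves are static evaluation scores from the maximizer's view.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A depth-2 toy tree: our move (max), then the opponent's reply (min).
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))   # -> 3
```

Each of our three candidate moves is scored by the opponent's best (for them, minimizing) reply, and we pick the move with the best such worst case; Shannon's paper supplies the chess-specific evaluation function that this sketch leaves abstract.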


Possibly the First Artificial Self-Learning Machine January 1952

In January 1952 Marvin Minsky, a graduate student working at the Harvard University Psychological Laboratories, implemented the SNARC (Stochastic Neural Analog Reinforcement Calculator). This randomly connected network of Hebb synapses was the first connectionist neural-network learning machine: when "rewarded," it strengthened recently used pathways. The SNARC, implemented using vacuum tubes, was possibly the first artificial self-learning machine.

Minsky, "A Neural-Analogue Calculator Based upon a Probability Model of Reinforcement," Harvard University Psychological Laboratories, Cambridge, Massachusetts, January 8, 1952.  This reference comes from Minsky's bibliography of his selected publications on his website in December 2013. He did not include it in his bibliography on AI in Computers and Thought (1963), leading me to believe that some or all of the information may have been included in his unpublished Princeton Ph.D. dissertation, Neural Nets and the Brain Model Problem (1954).


To What Extent Can Human Mental Processes be Duplicated by Switching Circuits? February 1953

In 1953 Bell Laboratories engineer John Meszar published "Switching Systems as Mechanical Brains," Bell Laboratories Record XXXI (1953) 63-69.

This paper, written in the earliest days of automatic switching systems, when few electronic computers existed, and, for the most part, human telephone operators served as "highly intelligent and versatile switching systems," raised the question of whether certain aspects of human thought are computable and others are not. Meszar argued for "the necessity of divorcing certain mental operations from the concept of thinking," in order to "pave the way for ready acceptance of the viewpoint that automatic systems can accomplish many of the functions of the human brain." 

"We are faced with a basic dilemma; we are forced either to admit the possibility of mechanized thinking, or to restrict increasingly our concept of thinking. However, as is apparent from this article, many of us do not find it hard to make the choice. The choice is to reject the possibility of mechanized thinking but to admit readily the necessity for an orderly declassification of many areas of mental effort from the high level of thinking. Machines will take over such areas, whether we like it or not.

"This declassification of wide areas of mental effort should not dismay any one of us. It is not an important gain for those who are sure that even as machines have displaced muscles, they will also take over the functions of the 'brain.' Neither is it a real loss for those who feel that there is something hallowed about all functions of the human mind. What we are giving up to the machines— some of us gladly, others reluctantly— are the uninteresting flat lands of routine mental chores, tasks that have to be performed according to rigorous rules. The areas we are holding unchallenged are the dominating heights of creative mental effort, which comprise the ability to speculate, to invent, to imagine, to philosophize, the dream better ways for tomorrow than exist today. These are the mental activities for which rigorous rules cannot be formulated— they constitute real thinking, whose mechanization most of us cannot conceive" (p. 69).


The First Artificial Intelligence Program 1955 – July 1956

During 1955 and 1956 computer scientist and cognitive psychologist Allen Newell, political scientist, economist and sociologist Herbert A. Simon, and systems programmer John Clifford Shaw, all working at the Rand Corporation in Santa Monica, California, developed the Logic Theorist, the first program deliberately engineered to mimic the problem solving skills of a human being. They decided to write a program that could prove theorems in the propositional calculus like those in Principia Mathematica by Alfred North Whitehead and Bertrand Russell. As Simon later wrote,

"LT was based on the system of Principia mathematica, largely because a copy of that work happened to sit in my bookshelf. There was no intention of making a contribution to symbolic logic, and the system of Principia was sufficiently outmoded by that time as to be inappropriate for that purpose. For us, the important consideration was not the precise task, but its suitability for demonstrating that a computer could discover problem solutions in a complex nonnumerical domain by heuristic search that used humanoid heuristics" (Simon,"Allen Newell: 1927-1992," Annals of the History of Computing 20 [1998] 68).

The collaborators wrote the first version of the program by hand on 3 x 5 inch cards. As Simon recalled:

"In January 1956, we assembled my wife and three children together with some graduate students. To each member of the group, we gave one of the cards, so that each one became, in effect, a component of the computer program ... Here was nature imitating art imitating nature" (quoted in the Wikipedia article Logic Theorist, accessed 01-02-2013). 

The team showed that the program could prove theorems as well as a talented mathematician. Eventually Shaw was able to run the program on the computer at RAND's Santa Monica facility. It proved 38 of the first 52 theorems in Principia Mathematica. For Theorem 2.85 the Logic Theorist surpassed its inventors’ expectations by finding a new and better proof. This was “the first foray by artificial intelligence research into high-order intellectual processes” (Feigenbaum and Feldman, Computers and Thought [1963]).
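The flavor of mechanical theorem proving can be suggested with a toy sketch. The axioms, rules, and forward-chaining strategy below are invented for illustration; the Logic Theorist itself searched backward from the goal, using substitution, replacement, and detachment guided by heuristics.

```python
from collections import deque

# Invented toy flavor of proof search: forward chaining with modus
# ponens over a fixed rule set.
def prove(axioms, implications, goal):
    known = set(axioms)
    frontier = deque(known)
    while frontier:
        p = frontier.popleft()
        for premise, conclusion in implications:
            if premise == p and conclusion not in known:
                known.add(conclusion)       # new theorem derived
                frontier.append(conclusion)
    return goal in known

# A -> B and B -> C let us derive C from the axiom A
assert prove({"A"}, {("A", "B"), ("B", "C")}, "C")
```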

Newell and Simon first described the Logic Theorist in Rand Corporation report P-868 issued on June 15, 1956, entitled The Logic Theory Machine. A Complex Information Processing System. (For some reason the only online version of this report available in January 2014 began on p. 25; however, the text available included the complete program.) The report was first officially published in September, 1956 under the same title in IRE Transactions on Information Theory IT-2, 61-79.

Newell and Simon demonstrated the program at the Dartmouth Summer Session on Artificial Intelligence held during the summer of 1956. 

Hook & Norman, Origins of Cyberspace (2002) no. 815.

Pioneer Program in Pattern Recognition 1955

In "Self Pattern recognition and modern computers," Proceedings of the Western Joint Computer Conference (1955) 91–93, English artificial intelligence researcher Oliver Selfridge described one of the first attempts to devise an optical character-reading program by “teaching” the computer to extract the significant features of a given letter-pattern from a background of irrelevant detail.

“This involved getting the machine to accept slightly different versions of the same typed symbol as exactly that—different versions of the same symbol. In attacking this problem Selfridge launched a project that continues to absorb energy, the project of making machines recognize certain slightly different configurations of elements as constituting the same pattern (or, looking at it in another way, getting the machine to recognize the same identities as the human being). Visual pattern recognition was Selfridge’s particular concern, but, in its general form, pattern recognition is a fundamental topic in almost all AI projects” (Pratt, Thinking Machines. The Evolution of Artificial Intelligence [1987] 204).

Selfridge, a native of England, matriculated at MIT at the age of fourteen. He published a paper on neural nets in 1948 (Archives of the Institute of Cardiology of Mexico [1948]: 177–87) and in 1955 organized with Marvin Minsky the first summer conference on AI.

In an interview, "Oliver Selfridge—in from the start," IEEE Expert 11, no. 5 (1996) 15-17, Selfridge discussed his early involvement with artificial intelligence:

"Q: How did you become interested in AI?

Oliver Selfridge: It was at MIT, a long time before the Dartmouth Conference, and I was studying mathematics under Norbert Wiener. By luck, of which I’ve had a great deal in my life, I was introduced to Walter Pitts, who was working with Warren McCulloch on a topic they called theoretical neurophysiology. I had studied logic, and through Walter, Warren, and Norbert got introduced to neural nets at that time. I went to the Pacific at the end of World War II with the United States Navy and came back to graduate school, again at MIT. Norbert was then writing Cybernetics, and Walter and I were helping him with various aspects of it. As I studied mathematics (my original field) and interacted with Norbert, Warren, and Walter, I began to be interested in the specific processing that neural nets could do and even more interested in the general properties of learning.

At this point McCulloch and Pitts had written the first two AI papers (although it wasn’t called that). The first showed that a neural net could work out certain kinds of problems, such as pattern recognition in the general cognitive sense, and the second discussed acquisition of patterns (how we know “universals”). These two works followed all the glorious mathematics that Turing and Gödel had done in the twenties and thirties about computability and Turing machines. This mathematics was, of course, the beginning of a formal description of what computability meant. Johnny Von Neumann visited us at MIT occasionally, so again by pure luck, before the age of twenty, I had been introduced to McCulloch, Pitts, Wiener, and Von Neumann." 

Hook & Norman, Origins of Cyberspace (2002) No. 877.

(This entry was last revised on 04-19-2014.)

The First Book on Machine Translation 1955

In 1955 William N. Locke of the Department of Modern Languages at MIT, and English electrical engineer, computer scientist and machine translation pioneer Andrew Donald Booth issued Machine Translation of Languages, the first book on machine translation. This was an anthology of essays by fourteen of the earliest pioneers in the field. The foreword to the book was by Warren Weaver, who largely set research on machine translation in motion with his July 1949 memorandum Translation, republished as the first chapter in the volume. The editors began the book with an historical introduction that they wrote jointly, and ended it with an annotated bibliography of 46 references that represented virtually the entire literature on the subject at the time. The history, as the authors saw it, began with discussions by Booth and Weaver in 1946 in which Weaver thought that cryptanalysis techniques developed in WWII could be adapted for translation, while Booth thought that, given the extremely limited memory capacity of the earliest machines, some kind of electronic dictionaries could be created.
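Booth's electronic-dictionary idea amounts to word-for-word lookup. A minimal sketch follows; the vocabulary entries are invented, and real systems also had to split word stems from grammatical endings to fit dictionaries into tiny memories.

```python
# Word-for-word lookup, the "electronic dictionary" style of translation
# Booth envisioned for memory-starved machines. Entries are invented.
dictionary = {"das": "the", "haus": "house", "ist": "is", "klein": "small"}

def translate(sentence):
    # unknown words are passed through in brackets
    return " ".join(dictionary.get(w, f"[{w}]")
                    for w in sentence.lower().split())

assert translate("Das Haus ist klein") == "the house is small"
```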

A review of the book by Martin Joos published in Language in 1956 summarized the primitive state of the art at the time, pointing out that in 1956 human translation remained cheaper and faster— not to say more accurate— than machine translation. I quote its first paragraph:

"M [achine] T[ranslation] is today both a dream and a reality. The dream is that some day electronic computing machines will do our translation for us. The reality is that MT is being done currently, experimentally and with low-grade results and that dozens of earnest workers are also trying, of course, to expand and sharpen their methods. Nowadays it is not usually a  computer that performs the MT work it is a person (or crew) duplicating with paper and pencil the very procedures that the computer would use. The procedures are rigidly controlled, and it is known that a computer could be 'programmed' to follow them. But in the experimental and development stage of MT it is not only cheaper to do the work by hand; it is also faster."

"The Design of Machines to Simulate the Behavior of the Human Brain" March 1955 – December 1956

At the 1955 Institute of Radio Engineers (IRE) Convention held in New York in March the Professional Group on Electronic Computers (PGEC) sponsored a symposium on "The Design of Machines to Simulate the Behavior of the Human Brain." The four panel members were Warren McCulloch of MIT, Anthony G. Oettinger of Harvard, Otto H. Schmitt of the University of Minnesota, and Nathaniel Rochester of IBM. The moderator was Howard E. Tompkins, then of Burroughs Corporation.

After the panel members read prepared statements, and a brief discussion, a group of invited questioners cross-examined the panel members. The invited questioners were Marvin Minsky, then of Harvard, Morris Rubinoff of the University of Pennsylvania, Elliot L. Gruenberg of the W. L. Maxson Corporation, John Mauchly, of what was then Remington Rand, M. E. Maron of IBM, and Walter Pitts of MIT. The transcript of the symposium was edited by the speakers with the help of Howard Tompkins, and published in the IRE Transactions on Electronic Computers, December 1956, 240-255.

From the transcript of the symposium, which was available online when I wrote this entry in April 2014, we see that many of the issues of current interest in 2014 were being discussed in 1955-56. McCulloch began the symposium with the following very quotable statement:

"Since nature has given us the working model, we need not ask, theoretically, whether machines can be built to do what brains can do with information. But it will be a long time before we can match this three-pint, three-pound, twenty-five-watt computer, with its memory storing 10¹³ or 10 [to the 15th power] bits with a mean half-life of half a day and successful regeneration of 5 per cent of its traces for sixty years, operating continuously wih its 10 [to the 10th power] dynamically stable and unreplaceable relays to preserve itself by governing its own activity and stabilizing the state of the whole body and its relation to its world by reflexive and appetitive negative feedback."

As I read through this discussion, I concluded that it was perhaps the best summary of ideas on the computer and the human brain in 1955-1956. As quoting it in its entirety would have been totally impractical, I instead listed the section headings and refer those interested to the original text:

McCulloch: "Brain," A Computer With Negative Feedback

Oettinger: Contrasts and Similarities

Rochester: Simulation of Brain Action on Computers

Schmitt: The Brain as a Different Computer


Chemical Action, Too

Cell Assemblies

Why Build a Machine "Brain"?

Is Systematicness Undesirable?

Growth as a Type of Learning

What Does Simulation Prove?

The Semantics of Reproduction

Where is the Memory?

"Distributed Memories"

"Memory Half-Life"

Analog vs. Digital

Speed vs. Equipment

The Neurophysiologists' Contribution

Pattern Recognition

Creative Thinking by Machines?

What Model Do We Want?

Coining the Term, Artificial Intelligence August 31, 1955

On August 31, 1955 John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon invited participants to a summer session at Dartmouth College to conduct research on what they called Artificial Intelligence (AI), thereby coining the term. (See Reading 11.5.)

Intelligence Amplification by Machines 1956

In 1956 English psychiatrist and cybernetician W[illiam] Ross Ashby wrote of intelligence amplification by machines in his book, An Introduction to Cybernetics.

Semantic Networks for Machine Translation 1956

In 1956 Richard H. Richens of the Cambridge Language Research Unit invented semantic networks for computing, devising them for the machine translation of natural languages.

Richens, "General program for mechanical translation between any two languages via an algebraic interlingua [Abstract]" In: Report on Research: Cambridge Language Research Unit. Mechanical Translation 3 (2), November 1956; p. 37.

Richens, "Preprogramming for mechanical translation," Mechanical Translation 3 (1), July 1956, 20–25

Chomsky's Hierarchy of Syntactic Forms September 1956

In September 1956 American linguist, philosopher, cognitive scientist, and activist Noam Chomsky published "Three Models for the Description of Language" in IRE Transactions on Information Theory IT-2, 113-24. In the paper Chomsky introduced two key concepts, the first being “Chomsky’s hierarchy” of syntactic forms, which was widely applied in the construction of artificial computer languages.

“The Chomsky hierarchy places regular (or linear) languages as a subset of the context-free languages, which in turn are embedded within the set of context-sensitive languages also finally residing in the set of unrestricted or recursively enumerable languages. By defining syntax as the set of rules that define the spatial relationships between the symbols of a language, various levels of language can be also described as one-dimensional (regular or linear), two-dimensional (context-free), three-dimensional (context sensitive) and multi-dimensional (unrestricted) relationships. From these beginnings, Chomsky might well be described as the ‘father of formal languages’ ” (Lee, Computer Pioneers [1995] 164). 
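As a modern illustration of the hierarchy (not drawn from Chomsky's paper), the language of strings aⁿbⁿ is context-free but not regular: a finite-state machine cannot count the a's to match the b's, while a two-rule phrase-structure grammar generates it directly.

```python
import random

# Toy context-free grammar for { a^n b^n : n >= 1 }, the textbook
# example of a context-free language no regular (finite-state)
# grammar can describe. Grammar: S -> a S b | a b
def generate(rng, max_depth=5):
    if max_depth == 0 or rng.random() < 0.5:
        return "ab"                                    # S -> a b
    return "a" + generate(rng, max_depth - 1) + "b"    # S -> a S b

def in_language(s):
    n = len(s) // 2
    return s == "a" * n + "b" * n and n >= 1

sample = generate(random.Random(0))
assert in_language(sample)
```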

The second concept Chomsky presented here was his theory of transformational-generative grammar, which attempted to define rules that can generate the infinite number of grammatical (well-formed) sentences possible in a language, and sought to identify rules (transformations) that govern relations between parts of a sentence, on the assumption that beneath such aspects as word order a fundamental deep structure exists. As Chomsky expressed it in his abstract of the present paper,

"We investigate several conceptions of linguistic structure to determine whether or not they can provide simple and “revealing” grammars that generate all of the sentences of English and only these. We find that no finite-state Markov process [a random process whose future probabilities are determined by its most recent values] that produces symbols with transition from state to state can serve as an English grammar. We formalize the notion of “phrase structure” and show that this gives us a method for describing language which is essentially more powerful. We study the properties of a set of grammatical transformations, showing that the grammar of English is materially simplified if phrase-structure is limited to a kernel of simple sentences from which all other sentences are constructed by repeated transformation, and that this view of linguistic structure gives a certain insight into the use and understanding of language" (p. 113).

Minsky, "A Selected Descriptor-Indexed Bibliography to the Literature on Artificial Intelligence" in Feigenbaum & Feldman eds., Computers and Thought (1963) 453-523, no. 484. Hook & Norman, Origins of Cyberspace (2002) no. 531.

The First Paper on Machine Learning 1957

In 1957 American mathematician and artificial intelligence researcher Ray Solomonoff published "An Inductive Inference Machine," IRE Convention Record, Section on Information Theory, Part 2 (1957) 56-62, the first paper written on machine learning. It emphasized the importance of training sequences, and the use of parts of previous solutions to problems in constructing trial solutions to new problems. Solomonoff presented an early version of this paper at the 1956 Dartmouth Summer Research Conference on Artificial Intelligence. In March 2012 a copy of that version was available at this link.

Chomsky's Syntactic Structures 1957

In 1957 Noam Chomsky's Syntactic Structures was published in 's-Gravenhage (The Hague), Netherlands, by Mouton & Co. That it did not initially find an American publisher may have reflected the advanced nature of its contents. Through its numerous printings Syntactic Structures, a small book of 116 pages, was the vehicle through which Chomsky's innovative ideas first became more widely known.

Chomsky’s text was an expansion of the ideas first expressed in his “Three Models for the Description of Language," in particular the concept of transformational grammar. The cognitive scientist David Marr, who developed a general account of information-processing systems, described Chomsky’s theory of transformation grammar as a top-level computational theory, in the sense that it deals with the goal of a computation, why it is appropriate, and the logic of the strategy used to carry it out (Anderson and Rosenfeld, Neurocomputing: Foundations of Research [1988] 470–72). Chomsky’s work had profound influence in the fields of linguistics, philosophy, psychology, and artificial intelligence. 

Hook & Norman, Origins of Cyberspace (2002) no. 532.

Von Neumann's "The Computer and the Brain" 1958

Because of failing health, John von Neumann did not finish his last book, The Computer and the Brain. The book, issued posthumously in 1958, was a published version of the Silliman Lectures which von Neumann was invited to deliver at Yale in 1956. Although von Neumann prepared the lectures by March 1956, he was already too sick to travel to New Haven and could not deliver them as scheduled. He continued to work on the manuscript until his death on February 8, 1957. The manuscript remained unfinished, as his widow Klara von Neumann explained in her preface to the posthumous edition. 

Von Neumann's 82-page essay was divided into two parts. The first part discussed the computer: its procedures, control mechanisms, and other characteristics. The second part focused on the brain, systematically comparing the operations of the brain with what was then state-of-the-art in computer science. In what seems to have been the groundwork for a third part—but it was not organized as a separate part—von Neumann drew some conclusions from the comparison with respect to the role of code and language. Von Neumann wrote that "A deeper mathematical study of the nervous system may alter our understanding of mathematics and logic."

A Model for Learning and Adaptation to a Complex Environment 1958

In 1958 English-American artificial intelligence pioneer Oliver Selfridge of MIT published "Pandemonium: A Paradigm for Learning," Mechanisation of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory on 24th, 25th, 26th and 27th November 1958 (1959) 511–26. In it he proposed a collection of small components dubbed “demons” that together would allow machines to recognize patterns, and might trigger subsequent events according to patterns they recognized. This model of learning and adaptation to a complex environment based on multiple independent processing systems was influential in psychology as well as neurocomputing and artificial intelligence.
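A minimal sketch of the pandemonium idea: each "demon" scores the input for its own pattern and shouts a confidence, and a decision demon picks the loudest. The feature demons below are hypothetical stroke-counting toys, not Selfridge's actual components.

```python
# Pandemonium in miniature. Each cognitive demon scores the input for
# its own letter; the decision demon selects the loudest shout.
# These stroke-based feature demons are invented for illustration.
demons = {
    "A": lambda s: s.count("/") + s.count("\\") + s.count("-"),
    "L": lambda s: s.count("|") + s.count("_"),
}

def recognize(strokes):
    return max(demons, key=lambda name: demons[name](strokes))

assert recognize("|_") == "L"
assert recognize("/-\\") == "A"
```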

Hook & Norman, Origins of Cyberspace (2002) no. 878.

Game Tree Pruning October 1958

In October 1958 Allen Newell, Clifford Shaw, and Herbert Simon invented game tree pruning, an artificial intelligence technique.

The Perceptron November 1958 – 1960

In November 1958 Frank Rosenblatt invented the Perceptron, or Mark I, at Cornell University. Completed in 1960, this was the first computer that could learn new skills by trial and error, using a type of neural network that simulated human thought processes.
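Rosenblatt's error-correction procedure can be restated as a modern sketch; his machine implemented this kind of rule in analog hardware, not code, and the notation below is not his.

```python
# The perceptron learning rule, restated in modern form.
def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out                      # the trial-and-error signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```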

The First International Symposium on Artificial Intelligence November 24 – November 27, 1958

From November 24 to 27, 1958 the National Physical Laboratory at Teddington, England, held the first international symposium on artificial intelligence, calling it Mechanisation of Thought Processes. 

The proceedings were published in 1959 by Her Majesty's Stationery Office in London as a two-volume set nearly 1000 pages long, also called Mechanisation of Thought Processes. In December 2013 volume one was available from aitopics.org at this link. Volume two was available from the same site at this link.

At this conference John McCarthy delivered his paper Programs with Common Sense. (See Reading 11.6.)

The First Digital Poetry 1959

In 1959 German computer scientist Theo Lutz of Hochschule Esslingen created the first digital poetry using a text-generating program called "Stochastische Texte" written for the Zuse Z22 computer. The program consisted of only 50 commands but could theoretically generate over 4,000,000 sentences.

Working with his teacher, Max Bense, one of the earliest theorists of computer poetry, Lutz used a random number generator to create texts in which key words were randomly inserted within a set of logical constants in order to create a syntax. The program thus demonstrated how logical structures like mathematical systems could work with language.
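The method can be sketched in a few lines: randomly chosen key words are slotted into a fixed template of logical constants. The word lists below are hypothetical stand-ins, not Lutz's vocabulary.

```python
import random

# In the spirit of Lutz's "Stochastische Texte": random key words
# inserted into a template of logical constants. Word lists invented.
subjects = ["THE CASTLE", "THE STRANGER", "THE VILLAGE", "THE COUNT"]
adjectives = ["OPEN", "SILENT", "DISTANT", "ANGRY"]
constants = ["AND", "SO", "THEREFORE", "NOT EVERY"]

def stochastic_line(rng):
    return (f"{rng.choice(constants)} {rng.choice(subjects)} "
            f"IS {rng.choice(adjectives)}.")

print(stochastic_line(random.Random(22)))  # seeded for reproducibility
```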

Funkhouser, Prehistoric Digital Poetry: An Archaeology of Forms 1959-1995 (2007).

One of the First Computer Models of How People Learn 1959 – 1961

For his 1960 Ph.D. thesis at Carnegie Institute of Technology (Carnegie Mellon University), carried out under the supervision of Herbert A. Simon, computer scientist Edward Feigenbaum developed EPAM (Elementary Perceiver and Memorizer), a computer program designed to model elementary human symbolic learning. Feigenbaum's thesis first appeared as An Information Processing Theory of Verbal Learning, RAND Corporation Mathematics Division Report P-1817, October 9, 1959. In December 2013 a digital facsimile of Feigenbaum's personal corrected copy of the thesis was available from Stanford University's online archive of Feigenbaum papers at this link.

Feigenbaum's first publication on EPAM may have been "The Simulation of Verbal Learning Behavior," Proceedings of the Western Joint Computer Conference.... May 9-11, 1961 (1961) 121-32. In December 2013 a digital facsimile of this was also available at the same link.

Hook & Norman, Origins of Cyberspace (2002) no. 598.

The Inspiration for Artificial Neural Networks, Building Blocks of Deep Learning 1959

In 1959 Harvard neurophysiologists David H. Hubel and Torsten Wiesel inserted a microelectrode into the primary visual cortex of an anesthetized cat. They then projected patterns of light and dark on a screen in front of the cat, and found that some neurons fired rapidly when presented with lines at one angle, while others responded best to another angle. They called these neurons "simple cells." Still other neurons, which they termed "complex cells," responded best to lines of a certain angle moving in one direction. These studies showed how the visual system builds an image from simple stimuli into more complex representations. Many artificial neural networks, fundamental components of deep learning, may be viewed as cascading models of cell types inspired by Hubel and Wiesel's observations.
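The "simple cell" abstraction can be sketched as an oriented linear filter, the building block that convolutional networks later stack into cascades. This is illustrative only, not Hubel and Wiesel's own model.

```python
# A "simple cell" modeled as an oriented linear filter: it fires
# strongly for an edge at its preferred orientation and weakly otherwise.
def respond(patch, kernel):
    return sum(p * k
               for patch_row, kernel_row in zip(patch, kernel)
               for p, k in zip(patch_row, kernel_row))

# A vertical-edge detector kernel
vertical_kernel = [[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]]

vertical_edge   = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]   # dark-to-light, vertical
horizontal_edge = [[1, 1, 1], [0, 0, 0], [0, 0, 0]]   # dark-to-light, horizontal

# The cell prefers its own orientation
assert respond(vertical_edge, vertical_kernel) > respond(horizontal_edge, vertical_kernel)
```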

For two later contributions Hubel and Wiesel shared the 1981 Nobel Prize in Physiology or Medicine with Roger W. Sperry.

". . . firstly, their work on development of the visual system, which involved a description of ocular dominance columns in the 1960s and 1970s; and secondly, their work establishing a foundation for visual neurophysiology, describing how signals from the eye are processed by the brain to generate edge detectors, motion detectors, stereoscopic depth detectors and color detectors, building blocks of the visual scene. By depriving kittens from using one eye, they showed that columns in the primary visual cortex receiving inputs from the other eye took over the areas that would normally receive input from the deprived eye. This has important implications for the understanding of deprivation amblyopia, a type of visual loss due to unilateral visual deprivation during the so-called critical period. These kittens also did not develop areas receiving input from both eyes, a feature needed for binocular vision. Hubel and Wiesel's experiments showed that the ocular dominance develops irreversibly early in childhood development. These studies opened the door for the understanding and treatment of childhood  cataracts  and strabismus. They were also important in the study of cortical plasticity.

"Furthermore, the understanding of sensory processing in animals served as inspiration for the SIFT descriptor (Lowe, 1999), which is a local feature used in computer vision for tasks such as object recognition and wide-baseline matching, etc. The SIFT descriptor is arguably the most widely used feature type for these tasks" (Wikipedia article on David H. Hubel, accessed 11-10-2014). 

Machines Can Learn from Past Errors July 1959

In July 1959 Arthur Lee Samuel published "Some Studies in Machine Learning Using the Game of Checkers," IBM Journal of Research and Development 3 (1959) no. 3, 210-29. In this work Samuel demonstrated that machines can learn from past errors — one of the earliest examples of non-numerical computation.

Hook & Norman, Origins of Cyberspace (2002) no. 874.

The Beginning of Expert Systems for Medical Diagnosis July 3, 1959

"Reasoning Foundations of Medical Diagnosis," by Robert S. Ledley and Lee B. Lusted published in Science, 130, No. 3366, 9-21, on July 3, 1959 represented the beginning of the development of clinical decision support systems (CDSS) — interactive computer programs, or expert systems, designed to assist physicians and other health professionals with decision making tasks.

"Areas covered included: symbolic logicBayes’ theorem (probability), and value theory. In the article, physicians were instructed how to create diagnostic databases using edge-notched cards to prepare for a time when they would have the opportunity to enter their data into electronic computers for analysis. Ledley and Lusted expressed hope that by harnessing computers, much of physicians’ work would become automated and that many human errors could therefore be avoided.

"Within medicine, Ledley and Lusted’s article has remained influential for decades, especially within the field of medical decision making. Among its most enthusiastic readers was cardiologist Homer R. Warner, who emulated Ledley and Lusted’s methods at his research clinic at LDS Hospital in Utah. Warner’s work, in turn, shaped many of the practices and priorities of the heavily computerized Intermountain Healthcare, Inc., which was in 2009 portrayed by the Obama administration as an exemplary model of a healthcare system that provided high-quality and low-cost care.

"The article also brought national media attention to Ledley and Lusted’s work. Articles about the work of the two men ran in several major US newspapers. A small demonstration device Ledley built to show how electronic diagnosis would work was described in the New York World Telegram as a “A Metal Brain for Diagnosis,” while the New York Post ran a headline: “Dr. Univac Wanted in Surgery.” On several occasions, Ledley and Lusted explained to journalists that they believed that computers would aid physicians rather than replace them, and that the process of introducing computers to medicine would be very challenging due to the non-quantitative nature of much medical information. They also envisioned, years before the development of ARPANET, a national network of medical computers that would allow healthcare providers to create a nationally-accessible medical record for each American and would allow rapid mass data analysis as information was gathered by individual clinics and sent to regional and national computer centers" (Wikipedia article on Robert Ledley, accessed 05-03-2014.)

(This entry was last revised on 05-03-2014.)

1960 – 1970

John McCarthy Introduces LISP, The Language of Choice for Artificial Intelligence 1960

In 1960 artificial intelligence pioneer John McCarthy, then at MIT and later of Stanford University, introduced LISP (LISt Processor), the language of choice for artificial intelligence (AI) programming.
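LISP's central idea, that programs and data are both symbolic lists, can be suggested with a micro-evaluator for (op arg arg ...) expressions. This is a sketch in Python, not McCarthy's implementation.

```python
# A micro-evaluator for LISP-style s-expressions written as Python lists.
def seval(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [seval(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        product = 1
        for v in vals:
            product *= v
        return product
    raise ValueError(f"unknown operator: {op}")

# (+ 1 (* 2 3)) evaluates to 7
assert seval(["+", 1, ["*", 2, 3]]) == 7
```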

(This entry was last revised on 03-21-2014.)

The Johns Hopkins Beast Circa 1960

Built during the 1960s at the Applied Physics Laboratory at Johns Hopkins University, the Johns Hopkins Beast was a mobile automaton. The machine had a rudimentary intelligence and the ability to survive on its own. 

"Controlled by dozens of transistors, the Johns Hopkins University Applied Physics Lab's "Beast" wandered white hallways, centering by sonar, until its batteries ran low. Then it would seek black wall outlets with special photocell optics, and plug itself in by feel with its special recharging arm. After feeding, it would resume patrolling. Much more complex than Elsie, the Beast's deliberate coordinated actions can be compared to the bacteria hunting behaviors of large nucleated cells like paramecia or amoebae."

"The robot was cybernetic. It did not use a computer. Its control circuitry consisted of dozens of transistors controlling analog voltages. It used photocell optics and sonar to navigate. The 2N404 transistors were used to create NOR logic gates that implemented the Boolean logic to tell it what to do when a specific sensor was activated. The 2N404 transistors were also used to create timing gates to tell it how long to do something. 2N1040 Power transistors were used to control the power to the motion treads, the boom, and the charging mechanism" Wikipedia article on Johns Hopkins Beast, accessed 11-13-2013).

Arthur C. Clarke Publishes "Dial F for Frankenstein," an Inspiration for Tim Berners-Lee 1961

In 1961 British science fiction writer, inventor and futurist Arthur C. Clarke, then living in Sri Lanka, published a short story entitled "Dial F for Frankenstein."

". . . it foretold an ever-more-interconnected telephone network that spontaneously acts like a newborn baby and leads to global chaos as it takes over financial, transportation and military systems" (John Markoff, "The Coming Superbrain," New York Times, May 24, 2009).

"The father of the internet, Sir Tim Berners-Lee, credits Clarke's short story, Dial F for Frankenstein, as an inspiration" (http://www.independent.co.uk/news/science/arthur-c-clarke-science-fiction-turns-to-fact-799519.html, accessed 05-24-2009).

The IBM 7094 is The First Computer to Sing 1961

A recording made at Bell Labs in Murray Hill, New Jersey on an IBM 7094 mainframe computer in 1961 is the earliest known recording of a computer-synthesized voice singing a song— Daisy Bell, also known as "Bicycle Built for Two." The recording was programmed by physicist John L. Kelly Jr. and Carol Lockbaum, and featured musical accompaniment written by computer music pioneer Max Mathews.

The science fiction novelist Arthur C. Clarke witnessed a demonstration of the piece while visiting his friend, the electrical engineer and science fiction writer John R. Pierce, who was a Bell Labs employee at the time. Clarke was so impressed that he incorporated the 7094's musical performance into the 1968 novel and the script for the 1968 film 2001: A Space Odyssey. One of the first things that Clarke’s fictional HAL 9000 computer had learned when it was originally programmed was the song "Daisy Bell". Near the end of the story, when the computer was being deactivated, or put to sleep, by astronaut Dave Bowman, it lost its mind and degenerated to singing "Daisy Bell."

(This entry was last revised on 03-21-2015.)

"The potential contributions of computers depend upon their use by very human human beings." November 1962

In November 1962 electrical engineer David L. Johnson and clinical-social psychologist Arthur L. Kobler, both at the University of Washington, Seattle, published "The Man-Computer Relationship. The potential contributions of computers crucially depend upon their use by very human human beings," Science 138 (1962) 873-79. The introductory and concluding sections of the paper are quoted below:

"Recently Norbert Wiener, 13 years after publication of his Cybernetics, took stock of the man-computer relationship [Science 131, 1355 (1960).] He concluded, with genuine concern, that computers may be getting out of hand. In emphasizing the significance of the position of the computer in our world, Wiener comments on the crucial use of computers by the military: 'it is more than likely that the machine may produce a policy which would win a nominal victory on points at the cost of every interest we have at heart, even that of national survival.' 

"Computers are used by man; man must be considered a part of any system in which they are used. Increasingly in our business, scientific, and international life the results of data processing and computer application are, necessarily and properly, touching the individuals of our society significantly. Increasing application of computers is inevitable and requisite for the growth and progress of our society. The purpose of this article is to point out certain cautions which must be observed and certain paths which must be emphasized if the man-computer relationship is to develop to its full positive potential and if Wiener's prediction is to be proved false. In this article on the problem of decision making we set forth several concepts. We have chosen decision making as a suitable area of investigation because we see both man and machine, in all their behavior actions, constantly making decisions. We see the process of decision making as being always the same: within the limits of the field, possibilities exist from which choices are made. Moreover, there are many decisions of great significance being made in which machines are already playing an active part. For example, a military leader recently remarked, "At the heart of every defense system you will find a computer." In a recent speech the president of the National Machine Accountants Association stated that 80 to 90 percent of the executive decisions in U.S. industry would soon be made by machines. Such statements indicate a growing trend-a trend which need not be disadvantageous to human beings if they maintain proper perspective. In the interest of making the man-machine relationship optimally productive and satisfactory to the human being, it is necessary to examine the unique capabilities of both man and machine, giving careful attention to the resultant interaction within the
mixed system."


"The levels of human knowledge of the environment and the universe are increasing, and it is obviously necessary that man's ability to cope with this knowledge should increase—necessary for his usefulness and for his very survival. The processes of automation have provided a functional agent for this purpose. Successful mechanized solution of routine problems has directed attention toward the capacity of the computer to arrive at apparent or real solutions of routine-learning and special problems. Increasing use of the computer in such problems is clearly necessary if our body of knowledge and information is to serve its ultimate function. Along with such use of the computer, however, will come restrictions and cautions which have not hitherto been necessary. We find that the computer is being given responsibilities with which it is less- able- to cope than man is. It is being called on to act for man in areas where man cannot define his own ability to perform and where he feels uneasy about his own performance- where he would like a neat, well-structured solution and feels that in adopting the machine's partial solution he is closer to the "right" than he is in using his own. An aura of respectability surrounds a computer output, and this, together with the time-balance factor, makes unqualified acceptance tempting. The need for caution, then, already exists and will be much greater in the future. It has little to do with the limited ability of the computer per se, much to do with the ability of man to realistically determine when and how he must use the tremendous ability which he has developed in automation. Let us continue to work with learning machines, with definitions of meaning and 'artificial intelligence.' Let us examine these processes as 'games' with expanding values, aiming toward developing improved computer techniques as well as increasing our knowledge of human functions. 
Until machines can satisfy the requirements discussed, until we can more perfectly determine the functions we require of the machines, let us not call upon mechanized decision systems to act upon human systems without intervening realistic human processing. As we proceed with the inevitable development of computers and means of using them, let us be sure that careful analysis is made of all automation (either routine-direct, routine-learning, or special) that is used in systems of whichman is a part-sure that man reflects upon his own reaction to, and use of mechanization. Let us be certain that, in response to Samuel Butler's question, "May not man himself become a sort of parasite upon the machines; an affectionate machine tickling aphid?' we will always be able to answer 'No.' "


Feigenbaum & Feldman Issue "Computers and Thought," the First Anthology on Artificial Intelligence 1963

In 1963 Edward A. Feigenbaum and Julian Feldman, computer scientists and artificial intelligence researchers at the University of California at Berkeley, issued Computers and Thought, the first anthology on artificial intelligence. At the time there were almost no published books on AI and no textbook; the anthology became a de facto textbook. It was translated into Russian, Japanese, Polish and Spanish.

An unusual feature of the anthology was its reprinting of "A Selected Descriptor-Indexed Bibliography to the Literature on Artificial Intelligence" (1961) prepared by Marvin Minsky as a companion to his survey of the literature of the field entitled "Steps toward Artificial Intelligence," which was also republished in the anthology. In the bibliography of Minsky's selected publications that was available on his website in December 2013 Minsky indicated that this "may have been the first keyword-descriptor indexed bibliography."

Authors represented in the anthology included Paul Armer, Carol Chomsky, Geoffrey P. E. Clarkson, Edward A. Feigenbaum, Julian Feldman, H. Gelernter, Bert F. Green, Jr., John T. Gullahorn, Jeanne E. Gullahorn, J. R. Hansen, Carl I. Hovland, Earl B. Hunt, Kenneth Laughery, Robert K. Lindsay, D. W. Loveland, Marvin Minsky, Ulric Neisser, Allen Newell, A. L. Samuel, Oliver G. Selfridge, J. C. Shaw, Herbert A. Simon, James R. Slagle, Fred M. Tonge, A. M. Turing, Leonard Uhr, Charles Vossler, and Alice K. Wolf. 

Hook & Norman, Origins of Cyberspace (2002) no. 599.


Woodrow Bledsoe Originates Automated Facial Recognition 1964 – 1966

From 1964 to 1966 Woodrow W. Bledsoe, along with Helen Chan and Charles Bisson of Panoramic Research, Palo Alto, California, researched programming computers to recognize human faces (Bledsoe 1966a, 1966b; Bledsoe and Chan 1965). Because the funding was provided by an unnamed intelligence agency, little of the work was published. Given a large database of images—in effect, a book of mug shots—and a photograph, the problem was to select from the database a small set of records such that one of the image records matched the photograph. The success of the program could be measured in terms of the ratio of the answer list to the number of records in the database. Bledsoe (1966a) described the following difficulties:

" 'This recognition problem is made difficult by the great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc. Some other attempts at facial recognition by machine have allowed for little or no variability in these quantities. Yet the method of correlation (or pattern matching) of unprocessed optical data, which is often used by some researchers, is certain to fail in cases where the variability is great. In particular, the correlation is very low between two pictures of the same person with two different head rotations.'

"This project was labeled man-machine because the human extracted the coordinates of a set of features from the photographs, which were then used by the computer for recognition. Using a GRAFACON, or RAND TABLET, the operator would extract the coordinates of features such as the center of pupils, the inside corner of eyes, the outside corner of eyes, point of widows peak, and so on. From these coordinates, a list of 20 distances, such as width of mouth and width of eyes, pupil to pupil, were computed. These operators could process about 40 pictures an hour. When building the database, the name of the person in the photograph was associated with the list of computed distances and stored in the computer. In the recognition phase, the set of distances was compared with the corresponding distance for each photograph, yielding a distance between the photograph and the database record. The closest records are returned.

"This brief description is an oversimplification that fails in general because it is unlikely that any two pictures would match in head rotation, lean, tilt, and scale (distance from the camera). Thus, each set of distances is normalized to represent the face in a frontal orientation. To accomplish this normalization, the program first tries to determine the tilt, the lean, and the rotation. Then, using these angles, the computer undoes the effect of these transformations on the computed distances. To compute these angles, the computer must know the three-dimensional geometry of the head. Because the actual heads were unavailable, Bledsoe (1964) used a standard head derived from measurements on seven heads.

"After Bledsoe left PRI [Panoramic Research, Inc.] in 1966, this work was continued at the Stanford Research Institute, primarily by Peter Hart. In experiments performed on a database of over 2000 photographs, the computer consistently outperformed humans when presented with the same recognition tasks (Bledsoe 1968). Peter Hart (1996) enthusiastically recalled the project with the exclamation, 'It really worked!' " (Faculty Council, University of Texas at Austin, In Memoriam Woodrow W. Bledsoe, accessed 05-15-2009).

Bledsoe, W. W. 1964. The Model Method in Facial Recognition, Technical Report PRI 15, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W., and Chan, H. 1965. A Man-Machine Facial Recognition System-Some Preliminary Results, Technical Report PRI 19A, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W. 1966a. Man-Machine Facial Recognition: Report on a Large-Scale Experiment, Technical Report PRI 22, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W. 1966b. Some Results on Multicategory Pattern Recognition. Journal of the Association for Computing Machinery 13(2):304-316.

Bledsoe, W. W. 1968. Semiautomatic Facial Recognition, Technical Report SRI Project 6693, Stanford Research Institute, Menlo Park, California.


Joseph Weizenbaum Writes ELIZA: A Pioneering Experiment in Artificial Intelligence Programming 1964 – 1966

Between 1964 and 1966 German and American computer scientist Joseph Weizenbaum at MIT wrote the computer program ELIZA. This program, named after the ingenue in George Bernard Shaw's play Pygmalion, was an early example of primitive natural language processing. The program operated by processing users' responses to scripts, the most famous of which was DOCTOR, which was capable of engaging humans in a conversation which bore a striking resemblance to one with an empathic psychologist. Weizenbaum modeled its conversational style after Carl Rogers, who introduced the use of open-ended questions to encourage patients to communicate more effectively with therapists. The program applied pattern matching rules to statements to figure out its replies. Using almost no information about human thought or emotion, DOCTOR sometimes provided a startlingly human-like interaction.

"When the "patient" exceeded the very small knowledge base, DOCTOR might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?" A possible response to "My mother hates me" would be "Who else in your family hates you?" ELIZA was implemented using simple pattern matching techniques, but was taken seriously by several of its users, even after Weizenbaum explained to them how it worked. It was one of the first chatterbots in existence" (Wikipedia article on ELIZA, accessed 06-15-2014).

"Weizenbaum was shocked that his program was taken seriously by many users, who would open their hearts to it. He started to think philosophically about the implications of artificial intelligence and later became one of its leading critics.

"His influential 1976 book Computer Power and Human Reason displays his ambivalence towards computer technology and lays out his case: while Artificial Intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom. Weizenbaum makes the crucial distinction between deciding and choosing. Deciding is a computational activity, something that can ultimately be programmed. Choice, however, is the product of judgment, not calculation. It is the capacity to choose that ultimately makes us human. Comprehensive human judgment is able to include non-mathematical factors, such as emotions. Judgment can compare apples and oranges, and can do so without quantifying each fruit type and then reductively quantifying each to factors necessary for comparison" (Wikipedia article on Joseph Weizenbaum, accessed 06-15-2014).


Solomonoff Begins Algorithmic Information Theory March – June 1964

In March and June, 1964 American mathematician and researcher in artificial intelligence Ray Solomonoff published "A Formal Theory of Inductive Inference, Part I," Information and Control, 7, No. 1, 1-22, and "A Formal Theory of Inductive Inference, Part II," Information and Control, 7, No. 2, 224-254. This two-part paper is considered the beginning of algorithmic information theory.

Solomonoff first described his results at a conference at Caltech in 1960, and in a report of February 1960, "A Preliminary Report on a General Theory of Inductive Inference."


Irving John Good Originates the Concept of the Technological Singularity 1965

In 1965 British mathematician Irving John Good, originally named Isidore Jacob Gudak, published "Speculations Concerning the First Ultraintelligent Machine," Advances in Computers, vol. 6 (1965) 31ff. This paper, published while Good held research positions at Trinity College, Oxford and at Atlas Computer Laboratory, originated the concept later known as "technological singularity," which anticipates the eventual existence of superhuman intelligence:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." 

Stanley Kubrick consulted Good regarding aspects of computing and artificial intelligence when filming 2001: A Space Odyssey (1968), one of whose principal characters was the paranoid HAL 9000 supercomputer.


Feigenbaum, Djerassi & Lederberg Develop DENDRAL the First Expert System 1965

In 1965 artificial intelligence researcher Edward Feigenbaum, chemist Carl Djerassi, and molecular biologist Joshua Lederberg began their collaboration at Stanford University on Dendral, a long-term pioneering project in artificial intelligence that is considered the first expert system.

"In the early 1960s, Joshua Lederberg started working with computers and quickly became tremendously interested in creating interactive computers to help him in his exobiology research. Specifically, he was interested in designing computing systems that to help him study alien organic compounds. As he was not an expert in either chemistry or computer programming, he collaborated with Stanford chemist Carl Djerassi to help him with chemistry, and Edward Feigenbaum with programming, to automate the process of determining chemical structures from raw mass spectrometry data. Feigenbaum was an expert in programming languages and heuristics, and helped Lederberg design a system that replicated the way Carl Djerassi solved structure elucidation problems. They devised a system called Dendritic Algorithm (Dendral) that was able to generate possible chemical structures corresponding to the mass spectrometry data as an output" (Wikipedia article on Dendral, accessed 12-22-2013).

Lindsay, Buchanan, Feigenbaum, Lederberg, Applications of Artificial Intelligence for Organic Chemistry. The DENDRAL Project (1980).


John Alan Robinson Introduces the Resolution Principle January 1965

In January 1965 philosopher, mathematician and computer scientist John Alan Robinson, while at Rice University, published "A Machine-Oriented Logic Based on the Resolution Principle," Journal of the ACM 12 (1965) 23–41. This paper introduced the resolution principle, a single rule of inference that became the standard basis for logical deduction in AI applications and automated theorem proving.
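The rule itself is simple to state: from two clauses containing a complementary pair of literals, infer the clause made of everything else in both. A minimal propositional sketch follows; Robinson's paper treats the full first-order case with unification, which this deliberately omits.

```python
# Clauses are frozensets of literals; a literal is a string, with
# negation written as a leading "~". Resolving two clauses on a
# complementary pair yields their union minus that pair (the resolvent).
def resolve(c1, c2):
    """Yield every resolvent of the two clauses."""
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            yield frozenset((c1 - {lit}) | (c2 - {complement}))

# {P, Q} and {~P, R} resolve on P to give the resolvent {Q, R}.
print(list(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"}))))

# Refutation: {P} and {~P} resolve to the empty clause, the signal of
# unsatisfiability in resolution theorem proving.
print(list(resolve(frozenset({"P"}), frozenset({"~P"}))))  # → [frozenset()]
```

Deriving the empty clause from a set of clauses shows the set is unsatisfiable, which is how resolution provers establish that a conjectured formula follows from its premises.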


Stanford Research Institute Develops Shakey, the First Intelligent Mobile Robot 1966 – 1972

Developed from approximately 1966 through 1972, Shakey the robot was the first general-purpose mobile robot that could "reason" about its own actions. While other robots at the time had to be instructed step by step in order to complete a larger task, Shakey could analyze commands and break them down into basic steps by itself. "Shakey could perceive its surroundings, create plans, recover from errors that occurred while executing a plan, and communicate with people using ordinary English." 

Shakey was developed by the Artificial Intelligence Center at Stanford Research Institute (now SRI International) in a project funded by DARPA intended to create "intelligent automata" for "reconnaissance" applications. Because the project combined research in robotics, computer vision, and natural language processing, it was the first successful project that combined logical reasoning with physical action. 

"Shakey's overall software design has influenced the design of everything from driverless cars to undersea exploration robots. 

"Shakey's planning methodology has been used in applications ranging from planning beer production at breweries to planning the actions of characters in video games. 

"Variants of Shakey's route-finding software compute your driving directions here on earth, as well as driving directions for the Mars Curiosity rover. (Note that Curiosity is quite a “reconnaissance application”!) 

"Image analysis techniques that enabled Shakey to perceive its world are similarly used to alert today's drivers of cars that may be drifting out of lane" (C[computer]H[istory]M[useum] News, June 1, 2015). 

(This entry was written on the Oceania Riviera off the coast of Sicily in June 2015.) 


Ellis Batten Page Begins Automated Essay Scoring 1967

In 1964 Ellis Batten Page, an American educational psychologist at the University of Connecticut (Storrs), inspired by developments in computational linguistics and artificial intelligence, began research on automated essay scoring. Page published his initial research in 1967 as "Statistical and linguistic strategies in the computer grading of essays," Coling 1967: Conférence Internationale sur le Traitement Automatique des Langues, Grenoble, France, August 1967. The same year he also published "The imminence of grading essays by computer," Phi Delta Kappan, 47 (1967) 238-243. The following year he published, with Dieter H. Paulus, The analysis of essays by computer (Final report, Project No. 6-1318), Washington, D.C.: Department of Health, Education, and Welfare; Office of Education; Bureau of Research. That year he also described his successful work with a program he called Project Essay Grade (PEG) in "The Use of the Computer in Analyzing Student Essays," International Review of Education, 14(3), 253-263. Page's work is considered the beginning of automated essay scoring, which could not become cost-effective until computing became far cheaper and more pervasive in the 1990s. 

Later at Duke University, Page renewed his development and research in automated scoring and, in 1993, formed Tru-Judge, Inc., anticipating the potential for commercial applications of the software. In 2002, in declining health, Page sold the intellectual property assets of Tru-Judge to Measurement Incorporated, an educational company that provides achievement tests and scoring services for state governments, other testing companies, and various organizations and institutions.


Stanley Kubrick & Arthur C. Clarke Create "2001: A Space Odyssey" 1968

In 1968 the film 2001: A Space Odyssey, written by American film director Stanley Kubrick in collaboration with science fiction writer and futurist Arthur C. Clarke, captured imaginations with the idea of a computer that could see, speak, hear, and “think.” 

Perhaps the star of the film was the HAL 9000 computer. "HAL (Heuristically programmed ALgorithmic Computer) is an artificial intelligence, the sentient on-board computer of the spaceship Discovery. HAL is usually represented only as his television camera "eyes" that can be seen throughout the Discovery spaceship.... HAL is depicted as being capable not only of speech recognition, facial recognition, and natural language processing, but also lip reading, art appreciation, interpreting emotions, expressing emotions, reasoning, and chess, in addition to maintaining all systems on an interplanetary voyage.

"HAL is never visualized as a single entity. He is, however, portrayed with a soft voice and a conversational manner. This is in contrast to the human astronauts, who speak in terse monotone, as do all other actors in the film" (Wikipedia article on HAL 9000, accessed 05-24-2009).

"Kubrick and Clarke had met in New York City in 1964 to discuss the possibility of a collaborative film project. As the idea developed, it was decided that the story for the film was to be loosely based on Clarke's short story "The Sentinel", written in 1948 as an entry in a BBC short story competition. Originally, Clarke was going to write the screenplay for the film, but Kubrick suggested during one of their brainstorming meetings that before beginning on the actual script, they should let their imaginations soar free by writing a novel first, which the film would be based on upon its completion. 'This is more or less the way it worked out, though toward the end, novel and screenplay were being written simultaneously, with feedback in both directions. Thus I rewrote some sections after seeing the movie rushes -- a rather expensive method of literary creation, which few other authors can have enjoyed.' The novel ended up being published a few months after the release of the movie" (Wikipedia article on Arthur C. Clarke, accessed 05-24-2009).


Licklider & Taylor Describe Features of the Future ARPANET; Description of a Computerized Personal Assistant April 1968

In 1968 American psychologist and computer scientist J.C.R. Licklider of MIT and Robert W. Taylor, then director of ARPA's Information Processing Techniques Office, published "The Computer as a Communication Device," Science and Technology, April 1968. In this paper, extensively illustrated with whimsical cartoons, they described features of the future ARPANET and other aspects of anticipated human-computer interaction.

Honoring the artificial intelligence pioneer Oliver Selfridge, on pp. 38-39 of the paper they proposed a device they referred to as OLIVER (On-Line Interactive Vicarious Expediter and Responder). OLIVER was one of the clearest early descriptions of a computerized personal assistant:

"A very important part of each man's interaction with his on-line community will be mediated by his OLIVER. The acronym OLIVER honors Oliver Selfridge, originator of the concept. An OLIVER is, or will be when there is one, an 'on-line interactive vicarious expediter and responder,' a complex of computer programs and data that resides within the network and acts on behalf of its principal, taking care of many minor matters that do not require his personal attention and buffering him from the demanding world. 'You are describing a secretary,' you will say. But no! secretaries will have OLIVERS.

"At your command, your OLIVER will take notes (or refrain from taking notes) on what you do, what you read, what you buy and where you buy it. It will know who your friends are, your mere acquiantances. It will know your value structure, who is prestigious in your eyes, for whom you will do with what priority, and who can have access to which of your personal files. It will know your organizations's rules pertaining to proprietary information and the government's rules relating to security classification.

"Some parts of your OLIVER program will be common with parts of ther people's OLIVERS; other parts will be custom-made for you, or by you, or will have developed idiosyncracies through 'learning based on its experience at your service."


1970 – 1980

Negroponte's "The Architecture Machine" is Published 1970

In his book, The Architecture Machine, published in 1970 architect and computer scientist Nicholas Negroponte of MIT described early research on computer-aided design, and in so doing covered early work on human-computer interaction, artificial intelligence, and computer graphics. The book contained a large number of illustrations.

"Most of the machines that I will be discussing do not exist at this time. The chapters are primarily extrapolations into the future derived from experiences with various computer-aided design systems. . . .

"There are three possible ways in which machines can assist the design process: (1) current procedures can be automated, thus speeding up and reducing the cost of existing practices; (2) existing methods can be altered to fit within the specifications and constitution of a machine, where only those issues are considered that are supposedly machine-compatible; (3) the design process, considered as evolutionary, can be presented to a machine, also considered as evolutionary, and a mutal training, resilience, and growth can be developed" (From Negroponte's "Preface to a Preface," p. [6]).

Negroponte's book has been called the first book on the personal computer. On that I do not agree. The book contains only vague discussions of the possibility of eventual personal computers. Most specifically it says, as caption to its second illustration, a cartoon relating to a home computer, "The computer at home is not a fanciful concept. As the cost of computation lowers, the computer utility will become a consumer item, and every child should have one." Instead, The Architecture Machine may be the first book on human-computer interaction, and on the possibilities of computer-aided design.

(This entry was last revised on 04-20-2014.)


PARRY: An Artificial Intelligence Program with "Attitude" 1972

PARRY, a computer program written in LISP in 1972 by American psychiatrist Kenneth Colby, then at Stanford University, attempted to simulate a paranoid schizophrenic. The program implemented a crude model of the behavior of a paranoid schizophrenic based on concepts, conceptualizations, and beliefs (judgments about conceptualizations: accept, reject, neutral). As it embodied a conversational strategy, it was more serious and advanced than Joseph Weizenbaum's ELIZA (1964-66). PARRY was described as "ELIZA with attitude".

"PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the 'patients; were human and which were computer programs. The psychiatrists were able to make the correct identification only 48 percent of the time — a figure consistent with random guessing.

"PARRY and ELIZA (also known as "the Doctor") 'met' several times.The most famous of these exchanges occurred at the ICCC 1972, where PARRY and ELIZA were hooked up over ARPANET and 'talked' to each other" (Wikipedia article on PARRY, accessed 06-15-2014).


Foundation of the American Association for Artificial Intelligence 1979

In 1979 the American Association for Artificial Intelligence was founded in Menlo Park, California. In 2007 the organization changed its name to the Association for the Advancement of Artificial Intelligence. By 2009 it had over 6,000 members worldwide.


The Neocognitron, Perhaps the Earliest Multilayered Artificial Neural Network 1979

The Neocognitron, a hierarchical multilayered artificial neural network which acquires the ability to recognize visual patterns through learning, may be one of the earliest examples of what was later called "deep learning." It was invented in 1979 by Kunihiko Fukushima while at NHK Science & Technical Research Laboratories (STRL, NHK放送技術研究所, NHK Hōsō Gijutsu Kenkyūjo), headquartered in Setagaya, Tokyo.  The Neocognitron was used for handwritten character recognition and other pattern recognition tasks.

"The extension of the neocognitron is still continuing. By the introduction of top-down connections and new learning methods, various kinds of neural networks have been developed. When two or more patterns are presented simultaneously, the "Selective Attention Model " can segment and recognize individual patterns in tern by switching its attention. Even if a pattern is partially occluded by other objects, we human beings can often recognize the occluded pattern. An extended neocognitron can now have such human-like ability and can, not only recognize occluded patterns, but also restore them by completing occluded contours" (http://personalpage.flsi.or.jp/fukushima/index-e.html.  accessed 11-10-2014).

K. Fukushima,"Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, 36 (1980) 93-202.


1980 – 1990

Defining a General Framework for Studying Complex Biological Systems 1982

In 1982 Vision: A Computational Investigation into the Human Representation and Processing of Visual Information by the British neuroscientist David Marr, a professor at MIT, was published posthumously in New York. This work defined a general framework for studying complex biological systems.

"According to Marr, a complex biological system can be understood at three distinct levels. The first level ("computational level") describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might our brain's identification of the objects present in the image we had observed. The second level ("algorithmic level") describes the procedure by which an input is converted to an output, i.e. how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level ("implementation level") describes how our own biological hardware of cells implements the procedure described by the algorithmic level" (Yarden Katz, "Noam Chomsky on Where Artificial Intelligence Went Wrong," Atlantic Monthly, 11-1-2012).


The First Book Written by a Computer Program 1984


In 1984 American writer and programmer William Chamberlain of New York published The Policeman’s Beard is Half Constructed, a volume of prose and poetry that, except for Chamberlain's introduction, was entirely written by a computer program called RACTER that had been developed by Chamberlain with Thomas Etter. The program was given credit for authorship on the title page which read: The Policeman's Beard is Half Constructed. Computer Prose and Poetry by Racter. Illustrations by Joan Hall. Introduction by William Chamberlain. The bright red cover of the paperback stated that this was "The First Book Ever Written by a Computer." It also called it "A Bizarre and Fantastic Journey into the Mind of a Machine." The blurb stated that the book contained:

"• Poetry and limericks

"• Imaginative Dialogues

"• Aphorisms

"• Interviews

"• The published short story, "Soft Ions," and more.

"You are about to enter a strange, deranged, and awesome world of images and fantasies– the 'thoughts' of the most advanced prose-creating computer program today."

The program, the name of which was an abbreviation for raconteur, could generate grammatically consistent sentences with the help of a pre-coded grammar template. Although certainly readable in the sense that each sentence displayed a competent grammar, any anxiety that the program could replace human authors would have been put to rest after a single glance at the computer-generated narrative:

"At all events my own essays and dissertations about love and its endless pain and perpetual pleasure will be known and understood by all of you who read this and talk or sing or chant about it to your worried friends or nervous enemies. Love is the question and the subject of this essay. We will commence with a question: does steak love lettuce? This question is implacably hard and inevitably difficult to answer. Here is a question: does an electron love a proton, or does it love a neutron? Here is a question: does a man love a woman or, to be specific and to be precise, does Bill love Diane? The interesting and critical response to this question is: no! He is obsessed and infatuated with her. He is loony and crazy about her. That is not the love of steak and lettuce, of electron and proton and neutron. This dissertation will show that the love of a man and a woman is not the love of steak and lettuce. Love is interesting to me and fascinating to you but it is painful to Bill and Diane. That is love!" 

According to Chamberlain's introduction to the book, RACTER ran on a CP/M machine. It was written in "compiled BASIC on a Z80 micro with 64K of RAM." 
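The template-driven generation described above can be sketched in a few lines. The templates and word lists below are invented for illustration only; RACTER's actual grammar engine, written in compiled BASIC, was considerably more elaborate.

```python
import random

# Invented mini-lexicon and templates, loosely echoing RACTER's themes.
LEXICON = {
    "NOUN": ["love", "steak", "lettuce", "electron", "proton"],
    "ADJ": ["implacable", "perpetual", "nervous", "fascinating"],
    "VERB": ["loves", "ponders", "sings about", "chants about"],
}

TEMPLATES = [
    "Does a {NOUN} {VERB} a {NOUN}?",
    "{ADJ} {NOUN} is the subject of this essay.",
    "My {ADJ} dissertation will show that {NOUN} is {ADJ}.",
]

def generate(rng: random.Random) -> str:
    """Fill one randomly chosen template with random lexicon entries."""
    out = rng.choice(TEMPLATES)
    for tag, words in LEXICON.items():
        # Replace each slot one at a time so repeated slots get fresh words.
        while "{" + tag + "}" in out:
            out = out.replace("{" + tag + "}", rng.choice(words), 1)
    return out

if __name__ == "__main__":
    rng = random.Random(42)
    for _ in range(3):
        print(generate(rng))
```

Because every template is itself grammatical, the output is always syntactically well formed, while the random slot-filling produces exactly the kind of semantically unhinged prose quoted above.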

The book was imaginatively published by Warner Books, extensively illustrated with black and white collages combining 19th century imagery with computer graphics by New York artist Joan Hall.

Describing the "author," the book stated on its first preliminary page:

"The Author: Racter (the name is short for raconteur) is the most highly developed artificial writer in the field of prose synthesis today. Fundamentally different from artificial intelligence programming, which tries to replicate human thinking, Racter can write original work without promptings from a human operator. And according to its programmer, 'Once it's running, Racter needs no input from the outside world. It's just cooking by itself.' Racter's work has appeared in OMNI magazine and in 1983 was the subject of a special exhibit at the Whitney Museum in New York. Now at work on a first novel, Racter operates on an IMS computer in New York's Greenwich Village, where it shares an apartment with a human computer programmer."

The First Book on Neuromorphic Computing 1984

In 1984 Carver Mead, professor of electrical engineering and computer science at Caltech, published Analog VLSI and Neural Systems. This was the first book on neuromorphic engineering or neuromorphic computing—a concept developed by Mead that involves

"... the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times the term neuromorphic has been used to describe analog, digital, and mixed-mode analog/digital VLSI and software systems that implement models of neural systems (for perception, motor control, or multisensory integration).

"A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change" (Wikipedia article on Neuromorphic engineering, accessed 01-01-2014).

George A. Miller Begins WordNet, a Lexical Database 1985

In 1985 psychologist and cognitive scientist George A. Miller and his team at Princeton began development of WordNet, a lexical database for the English language. WordNet

"groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. The purpose is twofold: to produce a combination of dictionary and thesaurus that is more intuitively usable, and to support automatic text analysis and artificial intelligence applications" (Wikipedia article on WordNet).

You can browse WordNet at http://wordnet.princeton.edu/.

WordNet has been used for a number of different purposes in information systems, including word sense disambiguation, information retrieval, automatic text classification, automatic text summarization, and even automatic crossword puzzle generation.
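The synset structure can be illustrated with a tiny hand-made sample; the entries below are invented for illustration and are not drawn from the real database, which contains well over 100,000 synsets.

```python
# A toy WordNet-style structure: synsets (sets of synonymous lemmas) linked
# by semantic relations. Only the hypernym ("is-a") relation is modeled here.
SYNSETS = {
    "dog.n.01": {"lemmas": ["dog", "domestic dog"],
                 "gloss": "a domesticated canid",
                 "hypernym": "canine.n.01"},
    "canine.n.01": {"lemmas": ["canine", "canid"],
                    "gloss": "a digitigrade carnivore",
                    "hypernym": "carnivore.n.01"},
    "carnivore.n.01": {"lemmas": ["carnivore"],
                       "gloss": "a flesh-eating mammal",
                       "hypernym": None},
}

def hypernym_chain(synset_id):
    """Walk the hypernym links from a synset up to the root."""
    chain = []
    while synset_id is not None:
        chain.append(synset_id)
        synset_id = SYNSETS[synset_id]["hypernym"]
    return chain

print(hypernym_chain("dog.n.01"))
# ['dog.n.01', 'canine.n.01', 'carnivore.n.01']
```

Chains like this one are what make WordNet useful for the applications listed above: two words can be judged semantically close if their hypernym chains meet near the bottom of the hierarchy.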

Kasparov Defeats 32 Different Chess Computers 1985

"In 1985, in Hamburg, I played against thirty-two different chess computers at the same time in what is known as a simultaneous exhibition. I walked from one machine to the next, making my moves over a period of more than five hours. The four leading chess computer manufacturers had sent their top models, including eight named after me from the electronics firm Saitek.  

"It illustrates the state of computer chess at the time that it didn't come as much of a surprise when I achieved a perfect 32–0 score, winning every game, although there was an uncomfortable moment. At one point I realized that I was drifting into trouble in a game against one of the "Kasparov" brand models. If this machine scored a win or even a draw, people would be quick to say that I had thrown the game to get PR for the company, so I had to intensify my efforts. Eventually I found a way to trick the machine with a sacrifice it should have refused. From the human perspective, or at least from my perspective, those were the good old days of man vs. machine chess" (Garry Kasparov, "The Chess Master and the Computer," The New York Review of Books 57, February 11, 2010).

The First Analog Silicon Retina 1988

With his student Misha Mahowald, computer scientist Carver Mead at Caltech described the first analog silicon retina in "A Silicon Model of Early Visual Processing," Neural Networks 1 (1988) 91−97. The silicon retina used analog electrical circuits to mimic the biological functions of rod cells, cone cells, and other non-photoreceptive cells in the retina of the eye. It was the first example of using continuously-operating floating gate (FG) programming/erasing techniques—in this case UV light—as the backbone of an adaptive circuit technology. The invention was not only potentially useful as a device for restoring sight to the blind, but it was also one of the most eclectic feats of electrical and biological engineering of the time.

"The approach to silicon models of certain neural computations expressed in this chip, and its successors, foreshadowed a totally new class of physically based computations inspired by the neural paradigm. More recent results demonstrated that a wide range of visual and auditory computations of enormous complexity can be carried out in minimal area and with minute energy dissipation compared with digital implementations" (http://www.cns.caltech.edu/people/faculty/mead/carver-contributions.pdf, accessed 12-23-2013).

In 1992 Mahowald received her Ph.D. under Mead at Caltech with her thesis, VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function. 

1990 – 2000

Development of Neural Networks 1993

In 1993 psychologist, neuroscientist and cognitive scientist James A. Anderson of Brown University, Providence, RI, published "The BSB Model: A simple non-linear autoassociative network," in M. Hassoun (Ed.), Associative Neural Memories: Theory and Implementation (1993). Anderson's neural networks were applied to models of human concept formation, decision making, speech perception, and models of vision.

Anderson, J. A., Spoehr, K. T. and Bennett, D. J., "A study in numerical perversity: Teaching arithmetic to a neural network," in D. S. Levine and M. Aparicio (Eds.), Neural Networks for Knowledge Representation and Inference (1994).

The Spread of Data-Driven Research From 1993 to 2013 1993 – 2013

On p. 16 of the printed edition of California Magazine 124, Winter 2013, there was an unsigned sidebar headlined "Data U." It contained a chart showing the spread of computing, or data-driven research, during the twenty years from 1993 to 2013, from a limited number of academic disciplines in 1993 to nearly every facet of university research.

According to the sidebar, in 1993 data-driven research was part of the following fields:

Artificial Intelligence: machine learning, natural language processing, vision, mathematical models of cognition and learning

Chemistry: chemical or biomolecular engineering

Computational Science: computational fluid mechanics, computational materials sciences

Earth and Planetary Science: climate modeling, seismology, geographic information systems

Marketing: online advertising, consumer behavior

Physical Sciences: astronomy, particle physics, geophysics, space sciences

Signal Processing: compressed sensing, inverse imaging


By the end of 2013 data-driven research was pervasive not only in the fields listed above, but also in the following fields:

Biology: genomics, proteomics, ecoinformatics, computational cell biology

Economics: macroeconomic policy, taxation, labor economics, microeconomics, finance, real estate

Engineering: sensor networks (traffic control, energy-efficient buildings, brain-machine interface)

Environmental Sciences: deforestation, climate change, impacts of pollution

Humanities: digital humanities, archaeology, land use, cultural geography, cultural heritage

Law: privacy, security, forensics, drug/human/CBRNe trafficking, criminal justice, incarceration, judicial decision making, corporate law

Linguistics: historical linguistics, corpus linguistics, psycholinguistics, language and cognition

Media: social media, mobile apps, human behavior

Medicine and Public Health: imaging, medical records, epidemiology, environmental conditions, health

Neuroscience: fMRI, multi-electrode recordings, theoretical neuroscience

Political Science & Public Policy: voter turn-out, elections, political behavior, social welfare, poverty, youth policy, educational outcomes

Psychology: social psychology

Sociology & Demography: social change, stratification, social networks, population health, aging, immigration, family

Urban Planning: transportation studies, urban environments

The Singularity January 1993

Mathematician, computer scientist and science fiction writer Vernor Vinge called the creation of the first ultraintelligent machine the Singularity in the January 1993 issue of Omni magazine. Vinge's follow-up paper, "The Coming Technological Singularity: How to Survive in the Post-Human Era," presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center (now NASA John H. Glenn Research Center at Lewis Field) and the Ohio Aerospace Institute, March 30-31, 1993, and slightly changed in the Winter 1993 issue of Whole Earth Review, contained the oft-quoted statement,

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."

"Vinge refines his estimate of the time scales involved, adding, 'I'll be surprised if this event occurs before 2005 or after 2030.'

"Vinge continues by predicting that superhuman intelligences, however created, will be able to enhance their own minds faster than the humans that created them. 'When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid.' This feedback loop of self-improving intelligence, he predicts, will cause large amounts of technological progress within a short period of time" (Wikipedia article on Technological singularity, accessed 05-24-2009).

The First Defeat of a Human Champion by a Computer in a Game Competition 1994

At the Second Man-Machine World Championship in 1994, Chinook, a computer checkers program developed around 1989 at the University of Alberta by a team led by Jonathan Schaeffer, won the title when world champion Marion Tinsley was forced to withdraw for health reasons. This was the first time that a computer program defeated a human champion in a game competition.

 "In 1996 the Guinness Book of World Records recognized Chinook as the first program to win a human world championship" (http://webdocs.cs.ualberta.ca/~chinook/project/, accessed 01-24-2010).

Kasparov Loses to Deep Blue: The First Time a Human Chess Player Loses to a Computer Under Tournament Conditions May 11, 1997

On May 11, 1997 Garry Kasparov, sometimes regarded as the greatest chess player of all time, resigned 19 moves into Game 6 against Deep Blue, an IBM RS/6000 SP supercomputer capable of calculating 200 million chess positions per second. This was the first time that a human world chess champion lost to a computer under tournament conditions.

The event, which took place at the Equitable Center in New York, was broadcast live from IBM's website via a Java viewer, and became the world's record "Net event" at the time.

"Since the emergence of artificial intelligence and the first computers in the late 1940s, computer scientists compared the performance of these 'giant brains' with human minds, and gravitated to chess as a way of testing the calculating abilities of computers. The game is a collection of challenging problems for minds and machines, but has simple rules, and so is perfect for such experiments.

"Over the years, many computers took on many chess masters, and the computers lost.

"IBM computer scientists had been interested in chess computing since the early 1950s. In 1985, a graduate student at Carnegie Mellon University, Feng-hsiung Hsu, began working on his dissertation project: a chess playing machine he called ChipTest. A classmate of his, Murray Campbell, worked on the project, too, and in 1989, both were hired to work at IBM Research. There, they continued their work with the help of other computer scientists, including Joe Hoane, Jerry Brody and C. J. Tan. The team named the project Deep Blue. The human chess champion won in 1996 against an earlier version of Deep Blue; the 1997 match was billed as a 'rematch.'

"The champion and computer met at the Equitable Center in New York, with cameras running, press in attendance and millions watching the outcome. The odds of Deep Blue winning were not certain, but the science was solid. The IBMers knew their machine could explore up to 200 million possible chess positions per second. The chess grandmaster won the first game, Deep Blue took the next one, and the two players drew the three following games. Game 6 ended the match with a crushing defeat of the champion by Deep Blue." 

"The AI crowd, too, was pleased with the result and the attention, but dismayed by the fact that Deep Blue was hardly what their predecessors had imagined decades earlier when they dreamed of creating a machine to defeat the world chess champion. Instead of a computer that thought and played chess like a human, with human creativity and intuition, they got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force. As Igor Aleksander, a British AI and neural networks pioneer, explained in his 2000 book, How to Build a Mind:  

" 'By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s. In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives.'

"It was an impressive achievement, of course, and a human achievement by the members of the IBM team, but Deep Blue was only intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better" (Garry Kasparov, "The Chess Master and the Computer," The New York Review of Books, 57, February 11, 2010).

Using Neural Networks for Word Sense Disambiguation 1998

In 1998 cognitive scientist / entrepreneur Jeffrey Stibel, physicist, psychologist, neural scientist James A. Anderson, and others from the Department of Cognitive and Linguistic Sciences at Brown University created a word sense disambiguator using George A. Miller's WordNet lexical database.

Stibel and others applied this technology in Simpli, "an early search engine that offered disambiguation to search terms. A user could enter in a search term that was ambiguous (e.g., Java) and the search engine would return a list of alternatives (coffee, programming language, island in the South Seas)."

"The technology was rooted in brain science and built by academics to model the way in which the mind stored and utilized language."

"Simpli was sold in 2000 to NetZero. Another company that leveraged the Simpli WordNet technology was purchased by Google and they continue to use the technology for search and advertising under the brand Google AdSense.

"In 2001, there was a buyout of the company and it was merged with another company called Search123. Most of the original members joined the new company. The company was later sold in 2004 to ValueClick, which continues to use the technology and search engine to this day" (Wikipedia article on Simpli, accessed 05-10-2009).
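A gloss-overlap disambiguator in the spirit of the simplified Lesk algorithm shows the flavor of the task; the senses and glosses below are invented for illustration, and Simpli's actual WordNet-based method was more sophisticated.

```python
# Pick the sense whose gloss shares the most words with the query context.
# Toy sense inventory (invented glosses, not real WordNet data).
SENSES = {
    "java": {
        "coffee": "a beverage brewed from roasted ground beans",
        "programming language": "an object oriented language for software",
        "island": "an island of indonesia in the south seas",
    }
}

def disambiguate(term: str, context: str) -> str:
    """Return the sense of `term` whose gloss best overlaps the context."""
    ctx = set(context.lower().split())
    scores = {sense: len(ctx & set(gloss.split()))
              for sense, gloss in SENSES[term].items()}
    return max(scores, key=scores.get)

print(disambiguate("java", "I drank a cup of brewed coffee beans"))  # coffee
```

Real systems weight the overlapping words and draw glosses (and related synsets) from WordNet itself, but the core idea is the same: context words vote for the sense whose definition they resemble.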

The First "Advanced" or "Freestyle" or "Centaur" Chess Event June 1998

The first Advanced Chess event, in which each human player used a computer chess program to help him explore the possible results of candidate moves, was held in June 1998 in León, Spain. The match was played between Garry Kasparov, using the German chess program Fritz 5, and Veselin Topalov, using ChessBase 7.0. The analytical engines used, such as Fritz, HIARCS and Junior, were integrated into these two programs, and could be called up with a click of the mouse. It was a 6-game match, and it was arranged in advance that the players would consult the built-in million-game databases only for the 3rd and 4th games, and would only use analytical engines without consulting the databases for the remaining games. The time available to each player during the games was 60 minutes. The match ended in a 3-3 tie.

Since the first event, Advanced Chess matches have often been called Freestyle chess, in which players can play without computer assistance, can simply follow the directions of a computer program, or can play as a "centaur," listening to the moves advocated by the AI but occasionally overriding them. In 2014 the best Freestyle chess player was Intagrand, a team of humans and several different chess programs.

2000 – 2005

A Model of Cortical Processing as an Electronic Circuit of 16 "Neurons" that Could Select and Amplify Input Signals Much Like the Cortex of the Mammalian Brain 2000

In 2000 a research team from the Institute of Neuroinformatics ETHZ/UNI Zurich; Bell Laboratories, Murray Hill, NJ; and the Department of Brain and Cognitive Sciences & Department of Electrical Engineering and Computer Science at MIT created an electrical circuit of 16 "neurons" that could select and amplify input signals much like the cortex of the mammalian brain.

"Digital circuits such as the flip-flop use feedback to achieve multi-stability and nonlinearity to restore signals to logical levels, for example 0 and 1. Analogue feedback circuits are generally designed to operate linearly, so that signals are over a range, and the response is unique. By contrast, the response of cortical circuits to sensory stimulation can be both multistable and graded. We propose that the neocortex combines digital selection of an active set of neurons with analogue response by dynamically varying the positive feedback inherent in its recurrent connections. Strong positive feedback causes differential instabilities that drive the selection of a set of active neurons under the constraints embedded in the synaptic weights. Once selected, the active neurons generate weaker, stable feedback that provides analogue amplification of the input. Here we present our model of cortical processing as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex" (Abstract).

R. Hahnloser, R. Sarpeshkar, M. Mahowald, R.J. Douglas and S. Seung: "Digital selection and analog amplification co-exist in an electronic circuit inspired by neocortex", Nature 405 (2000) 947-951. 

Conceiving and Building a Machine-Readable Database to Present Information "Filtered, Selected and Presented According to the Needs of the Individual User" 2000 – 2007

In 2000 American inventor, scientist, engineer, entrepreneur, and author William Daniel "Danny" Hillis wrote a paper entitled Aristotle (The Knowledge Web). In 2007, at the time of founding Metaweb Technologies to develop aspects of ideas expressed in his Aristotle paper, Hillis wrote:

"In retrospect the key idea in the "Aristotle" essay was this: if humans could contribute their knowledge to a database that could be read by computers, then the computers could present that knowledge to humans in the time, place and format that would be most useful to them.  The missing link to make the idea work was a universal database containing all human knowledge, represented in a form that could be accessed, filtered and interpreted by computers.

"One might reasonably ask: Why isn't that database the Wikipedia or even the World Wide Web? The answer is that these depositories of knowledge are designed to be read directly by humans, not interpreted by computers. They confound the presentation of information with the information itself. The crucial difference of the knowledge web is that the information is represented in the database, while the presentation is generated dynamically. Like Neal Stephenson's storybook, the information is filtered, selected and presented according to the specific needs of the viewer. ["In his book The Diamond Age, the science fiction writer Neal Stephenson describes an automatic tutor called The Primer that grows up with a child. Stephenson's Primer does everything described above and more. It becomes a friend and playmate to the heroine of the novel, and guides not only her intellectual but also her emotional development" (from Hillis's Aristotle, 2000).]

"John, Robert and I started a project,  then a company, to build that computer-readable database. How successful we will be is yet to be determined, but we are really trying to build it:  a universal database for representing any knowledge that anyone is willing to share. We call the company Metaweb, and the free database, Freebase.com. Of course it has none of the artificial intelligence described in the essay, but it is a database in which each topic is connected to other topics by links that describe their relationship. It is built so that computers can navigate and present it to humans. Still very primitive, a far cry from Neal Stephenson's magical storybook, it is a step, I hope, in the right direction" (http://edge.org/conversation/addendum-to-aristotle-the-knowledge-web, accessed 02-02-2014).

The Film: "A. I. Artificial Intelligence" 2001

In 2001 American director, screen writer and film producer Steven Spielberg directed, co-authored and produced, through DreamWorks and Amblin Entertainment, the science fiction film A.I. Artificial Intelligence, telling the story of David, an android robot child programmed with the ability to love and to dream. The film explored the hopes and fears involved with efforts to simulate human thought processes, and the social consequences of creating robots that may be better than people at specialized tasks.

The film was a 1970s project of Stanley Kubrick, who eventually turned it over to Spielberg. The project languished in development hell for nearly three decades before technology advanced sufficiently for a successful production. The film required enormously complex puppetry, computer graphics, and make-up prosthetics, which are well-described and explained in the supplementary material in the two-disc special edition of the film issued on DVD in 2002.

2005 – 2010

A More Efficient Way to Teach Individual Layers of Neurons for Deep Learning 2006

In the mid-1980s, British-born computer scientist and psychologist Geoffrey Hinton and others helped revive research interest in neural networks with so-called "deep" models that made better use of many layers of software neurons. But the technique still required major human intervention: programmers had to label data before feeding it to the network, and complex speech or image recognition required more computer power than was available.

During the first decade of the 21st century Hinton and colleagues at the University of Toronto made some fundamental conceptual breakthroughs that have led to advances in unsupervised learning procedures for neural networks with rich sensory input.

"In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects" (Robert D. Hof, "Deep Learning," MIT Technology Review, April 23, 2013, accessed 11-10-2014).

Hinton, G. E.; Osindero, S.; Teh, Y., "A fast learning algorithm for deep belief nets", Neural Computation 18 #7 (2006) 1527–1554.
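The layer-by-layer procedure can be sketched with a toy stack of autoencoders. This is a simplified stand-in for Hinton's deep belief nets, which pretrain stacked restricted Boltzmann machines rather than the plain autoencoders used here; the data and layer sizes are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_layer(X, n_hidden, epochs=500, lr=0.5, seed=0):
    """Train one autoencoder layer on X; return its hidden codes."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))  # encoder weights
    W2 = rng.normal(0.0, 0.5, (n_hidden, X.shape[1]))  # decoder weights
    for _ in range(epochs):
        h = sigmoid(X @ W1)             # encode
        y = sigmoid(h @ W2)             # reconstruct the input
        g_y = (y - X) * y * (1 - y)     # output-layer error signal
        g_h = (g_y @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ g_y            # gradient-descent updates
        W1 -= lr * X.T @ g_h
    return sigmoid(X @ W1)

# Greedy stacking: codes from one trained layer become the training
# data for the next, so each layer learns features of features.
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)
codes = X
for size in (3, 2):
    codes = train_layer(codes, size)
print(codes.shape)  # (4, 2): two-unit codes for the four input patterns
```

The key design point, as in the quoted description, is that each layer is trained on its own local objective (reconstructing its input) before the next layer ever sees data, avoiding the need to backpropagate through the whole deep stack at once.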

IBM Begins Development of Watson, the First Cognitive Computer 2007

In 2007 David Ferrucci, leader of the Semantic Analysis and Integration Department at IBM’s Watson Research Center, Yorktown Heights, New York, and his team began development of Watson, a special-purpose computer system designed to push the envelope on deep question answering, deep analytics, and the computer's understanding of natural language. Watson became the first cognitive computer, combining machine learning and artificial intelligence.

Checkers is "Solved" April 29, 2007

Jonathan Schaeffer and his team at the University of Alberta announced on April 29, 2007 that the game of checkers was "solved". Perfect play led to a draw.

"The crucial part of Schaeffer's computer proof involved playing out every possible endgame involving fewer than 10 pieces. The result is an endgame database of 39 trillion positions. By contrast, there are only 19 different opening moves in draughts. Schaeffer's proof shows that each of these leads to a draw in the endgame database, providing neither player makes a mistake.  

"Schaeffer was able to get his result by searching only a subset of board positions rather than all of them, since some of them can be considered equivalent. He carried out a mere 10^14 calculations to complete the proof in under two decades. 'This pushes the envelope as far as artificial intelligence is concerned,' he says.  

"At its peak, Schaeffer had 200 desktop computers working on the problem full time, although in later years he reduced this to 50 or so. 'The problem is such that if I made a mistake 10 years ago, all the work from then on would be wrong,' says Schaeffer. 'So I've been fanatical about checking for errors.' " (http://www.newscientist.com/article/dn12296-checkers-solved-after-years-of-number-crunching.html, accessed 01-24-2010).

Based on this proof, Schaeffer's checkers-playing program Chinook could no longer be beaten. The best an opponent could hope for was a draw.
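What "solving" a game means can be shown on a toy scale: exhaustive memoized search assigns every position its value under perfect play, just as Schaeffer's endgame databases did for checkers positions. Here the game is a simple Nim variant (take 1-3 stones per turn; whoever takes the last stone wins), chosen because it is small enough to solve completely in a few lines.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win with perfect play."""
    # A position is winning if some legal move leaves the opponent
    # in a losing position; the empty position (0 stones) is a loss.
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

# Under perfect play, the player to move loses exactly when the
# stone count is a multiple of 4.
print([n for n in range(1, 13) if not wins(n)])  # [4, 8, 12]
```

Checkers required the same logic applied to roughly 5 x 10^20 positions, which is why the proof needed a 39-trillion-position endgame database plus forward search rather than a single memoized recursion.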

The SyNAPSE Neuromorphic Machine Technology Project Begins 2008

Traditional stored-program von Neumann computers are constrained by physical limits, and require humans to program how computers interact with their environments. In contrast the human brain processes information autonomously, and learns from its environment. Neuromorphic electronic machines— computers that function more like a brain— may enable autonomous computational solutions for real-world problems with many complex variables. In 2008 DARPA awarded the first funding to HRL Laboratories, Hewlett-Packard and IBM Research for SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics)—an attempt to build a new kind of cognitive computer with form, function and architecture similar to the mammalian brain. The program sought to create electronic systems inspired by the human brain that could understand, adapt and respond to information in ways fundamentally different from traditional computers.

"The initial phase of the SyNAPSE program developed nanometer scale electronic synaptic components capable of adapting the connection strength between two neurons in a manner analogous to that seen in biological systems (Hebbian learning), and simulated the utility of these synaptic components in core microcircuits that support the overall system architecture" (Wikipedia article on SyNAPSE, accessed 10-20-2013).

Using Automation to Find "Fundamental Laws of Nature" April 3, 2009

Michael Schmidt and Hod Lipson of Cornell University published "Distilling Free-Form Natural Laws from Experimental Data," Science 324, no. 5923 (3 April 2009): 81-85. doi:10.1126/science.1165893. The paper described a computer program that sifted raw and imperfect data to uncover fundamental laws of nature.

"For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the "alphabet" used to describe those systems" (Abstract from Science)
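The flavor of the search can be sketched with a deliberately tiny version: grid-search the parameters of an assumed model family against synthetic data. The paper's actual algorithm evolved free-form symbolic expressions and scored them by how well they predicted relationships among derivatives of the measured variables; the model family and data below are invented for illustration.

```python
import math

# Synthetic "measurements" generated by a hidden law y = 3*cos(2*x).
xs = [0.1 * i for i in range(50)]
ys = [3.0 * math.cos(2.0 * x) for x in xs]

def fit_error(a, b):
    """Sum of squared residuals of the candidate law a*cos(b*x)."""
    return sum((a * math.cos(b * x) - y) ** 2 for x, y in zip(xs, ys))

# Candidate "laws": every (a, b) pair from a small grid.
candidates = [(a, b) for a in (1.0, 2.0, 3.0) for b in (1.0, 2.0, 3.0)]
best = min(candidates, key=lambda p: fit_error(*p))
print(best)  # (3.0, 2.0): the generating parameters are recovered
```

Schmidt and Lipson's key insight was the scoring criterion, not the search itself: requiring candidate expressions to predict derivative relationships weeds out trivially overfitted formulas that a plain residual score, like the one above, would accept.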

Robot Scientist becomes the First Machine to Discover New Scientific Knowledge April 3, 2009

Ross D. King, Jem Rowland and 11 co-authors from the Department of Computer Science at Aberystwyth University, Aberystwyth, Wales, and the University of Cambridge, published "The Automation of Science," Science 324, no. 5923 (3 April 2009): 85-89, DOI: 10.1126/science.1165620. In this paper they described a Robot Scientist which the researchers believed was the first machine to have independently discovered new scientific knowledge. The robot, called Adam, was a computer system that fully automated the scientific process.

"Prof Ross King, who led the research at Aberystwyth University, said: 'Ultimately we hope to have teams of human and robot scientists working together in laboratories'. The scientists at Aberystwyth University and the University of Cambridge designed Adam to carry out each stage of the scientific process automatically without the need for further human intervention. The robot has discovered simple but new scientific knowledge about the genomics of the baker's yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems. The researchers have used separate manual experiments to confirm that Adam's hypotheses were both novel and correct" (http://www.eurekalert.org/pub_releases/2009-04/babs-rsb032709.php).

"The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist "Adam," which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam's conclusions through manual experiments. To describe Adam's research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge" (Abstract in Science).

The TV Show "Jeopardy" Provides a Good Model of the Semantic Analysis and Integration Problem April 22, 2009

On April 22, 2009 David Ferrucci, leader of the Semantic Analysis and Integration Department at IBM's T. J. Watson Research Center, Eric Nyberg, and several co-authors published the IBM Research Report: Towards the Open Advancement of Question Answering Systems.

Section 4.2.3. of the report included an analysis of why the television game show Jeopardy! provided a good model of the semantic analysis and integration problem.

IBM's Watson Question Answering System Challenges Humans at "Jeopardy" April 27, 2009

On April 27, 2009 IBM announced that its Watson Question Answering (QA) system would challenge humans on the television quiz show Jeopardy!

"IBM is working to build a computing system that can understand and answer complex questions with enough precision and speed to compete against some of the best Jeopardy! contestants out there.

"This challenge is much more than a game. Jeopardy! demands knowledge of a broad range of topics including history, literature, politics, film, pop culture and science. What's more, Jeopardy! clues involve irony, riddles, analyzing subtle meaning and other complexities at which humans excel and computers traditionally do not. This, along with the speed at which contestants have to answer, makes Jeopardy! an enormous challenge for computing systems. Code-named "Watson" after IBM founder Thomas J. Watson, the IBM computing system is designed to rival the human mind's ability to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately, demonstrate confidence to deliver precise final answers.

"Known as a Question Answering (QA) system among computer scientists, Watson has been under development for more than three years. According to Dr. David Ferrucci, leader of the project team, 'The confidence processing ability is key to winning at Jeopardy! and is critical to implementing useful business applications of Question Answering.

"Watson will also incorporate massively parallel analytical capabilities and, just like human competitors, Watson will not be connected to the Internet, or have any other outside assistance.  

"If we can teach a computer to play Jeopardy!, what could it mean for science, finance, healthcare and business? By drastically advancing the field of automatic question answering, the Watson project's ultimate success will be measured not by daily doubles, but by what it means for society" (http://www.research.ibm.com/deepqa/index.shtml, accessed 06-16-2010).

On June 16, 2010 The New York Times Magazine published a long article by Clive Thompson on IBM's Watson's challenge of humans in Jeopardy! entitled, in the question-response language of Jeopardy!, "What Is I.B.M.'s Watson?"

♦ In December 2013 answers to frequently asked questions concerning Watson and Jeopardy! were available from IBM's website at this link.

Wolfram|Alpha is Launched May 16, 2009

On May 16, 2009 Stephen Wolfram and Wolfram Research, Champaign, Illinois, launched Wolfram|Alpha, a computational data engine with a new approach to knowledge extraction, based on natural language processing, a large library of algorithms, and an NKS (New Kind of Science) approach to answering queries.

The Wolfram|Alpha engine differed from traditional search engines in that it did not simply return a list of results based on a query, but instead computed an answer.

An Algorithm to Decipher Ancient Texts September 2, 2009

"Researchers in Israel say they have developed a computer program that can decipher previously unreadable ancient texts and possibly lead the way to a Google-like search engine for historical documents.

"The program uses a pattern recognition algorithm similar to those law enforcement agencies have adopted to identify and compare fingerprints.

"But in this case, the program identifies letters, words and even handwriting styles, saving historians and liturgists hours of sitting and studying each manuscript.

"By recognizing such patterns, the computer can recreate with high accuracy portions of texts that faded over time or even those written over by later scribes, said Itay Bar-Yosef, one of the researchers from Ben-Gurion University of the Negev.

" 'The more texts the program analyses, the smarter and more accurate it gets,' Bar-Yosef said.

"The computer works with digital copies of the texts, assigning number values to each pixel of writing depending on how dark it is. It separates the writing from the background and then identifies individual lines, letters and words.

"It also analyses the handwriting and writing style, so it can 'fill in the blanks' of smeared or faded characters that are otherwise indiscernible, Bar-Yosef said.

"The team has focused their work on ancient Hebrew texts, but they say it can be used with other languages, as well. The team published its work, which is being further developed, most recently in the academic journal Pattern Recognition due out in December but already available online. A program for all academics could be ready in two years, Bar-Yosef said. And as libraries across the world move to digitize their collections, they say the program can drive an engine to search instantaneously any digital database of handwritten documents. Uri Ehrlich, an expert in ancient prayer texts who works with Bar-Yosef's team of computer scientists, said that with the help of the program, years of research could be done within a matter of minutes. 'When enough texts have been digitized, it will manage to combine fragments of books that have been scattered all over the world,' Ehrlich said" (http://www.reuters.com/article/newsOne/idUSTRE58141O20090902, accessed 09-02-2009).

Google Introduces Google Goggles December 8, 2009

On December 8, 2009 Google introduced Google Goggles, an image recognition and search technology for the Android mobile operating system. If you photographed certain types of individual objects with your mobile phone, the program would recognize them and automatically display links to relevant information on the Internet. If you pointed your phone at a building, the program would identify it by GPS; clicking on the building's name would then bring up relevant Internet links.

♦ On May 7, 2010 you could watch a video describing the features of Google Goggles at this link.


2010 – 2012

"The Never-Ending Language Learning System" January 2010

Supported by DARPA and Google, in January 2010 Tom M. Mitchell and his team at Carnegie Mellon University initiated the Never-Ending Language Learning System, or NELL, in an effort to develop a method for machines to teach themselves semantics, or the meaning of language.

"Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day" (http://www.nytimes.com/2010/10/05/science/05compute.html?_r=1&hpw). 

"NELL has been in continuous operation since January 2010. For the first 6 months it was allowed to run without human supervision, learning to extract instances of a few hundred categories and relations, resulting in a knowledge base containing approximately a third of a million extracted instances of these categories and relations. At that point, it had improved substantially its ability to read three quarters of these categories and relations (with precision in the range 90% to 99%), but it had become inaccurate in extracting instances of the remaining fourth of the ontology (many had precisions in the range 25% to 60%).  

"The estimated precision of the beliefs it had added to its knowledge base at that point was 71%. We are still trying to understand what causes it to become increasingly competent at reading some types of information, but less accurate over time for others. Beginning in June, 2010, we began periodic review sessions every few weeks in which we would spend about 5 minutes scanning each category and relation. During this 5 minutes, we determined whether NELL was learning to read it fairly correctly, and in case not, we labeled the most blatant errors in the knowledge base. NELL now uses this human feedback in its ongoing training process, along with its own self-labeled examples. In July, a spot test showed the average precision of the knowledge base was approximately 87% over all categories and relations. We continue to add new categories and relations to the ontology over time, as NELL continues learning to populate its growing knowledge base" (http://rtw.ml.cmu.edu/rtw/overview, accessed 10-06-2010).

"The World's First Full-Size Robotic Girlfriend" January 9, 2010

On January 9, 2010 Artificial intelligence engineer Douglas Hines of TrueCompanion.com introduced Roxxxy at the AVN Adult Entertainment Expo in Las Vegas, Nevada.

" 'She doesn't vacuum or cook, but she does almost everything else,' said her inventor, Douglas Hines, who unveiled Roxxxy last month at the Adult Entertainment Expo in Las Vegas, Nevada.

"Lifelike dolls, artificial sex organs and sex-chat phone lines have been keeping the lonely company for decades. But Roxxxy takes virtual companionship to a new level. Powered by a computer under her soft silicone skin, she employs voice-recognition and speech-synthesis software to answer questions and carry on conversations. She even comes loaded with five distinct 'personalities,' from Frigid Farrah to Wild Wendy, that can be programmed to suit customers' preferences.

" 'There's a tremendous need for this kind of product,' said Hines, a computer scientist and former Bell Labs engineer. Roxxxy won't be available for delivery for several months, but Hines is taking pre-orders through his Web site, TrueCompanion.com, where thousands of men have signed up. 'They're like, 'I can't wait to meet her,' ' Hines said. 'It's almost like the anticipation of a first date.' Women have inquired about ordering a sex robot, too. Hines says a female sex therapist even contacted him about buying one for her patients.

"Roxxxy has been like catnip to talk-show hosts since her debut at AEE, the largest porn-industry convention in the country. In a recent monologue, Jay Leno expressed amazement that a sex robot could carry on lifelike conversations and express realistic emotions. 'Luckily, guys,' he joked, 'there's a button that turns that off.' Curious conventioneers packed Hines' AEE booth last month in Las Vegas, asking questions and stroking Roxxxy's skin as she sat on a couch in a black negligee.

" 'Roxxxy generated a lot of buzz at AEE,' said Grace Lee, spokeswoman for the porn-industry convention. 'The prevailing sentiment of everyone I talked to about Roxxxy is 'version 1.0,' but people were fascinated by the concept, and it caused them to rethink the possibilities of 'sex toys.' '

"Hines, a self-professed happily married man from Lincoln Park, New Jersey, says he spent more than three years developing the robot after trying to find a marketable application for his artificial-intelligence technology. Roxxxy's body is made from hypoallergenic silicone -- the kind of stuff in prosthetic limbs -- molded over a rigid skeleton. She cannot move on her own but can be contorted into almost any natural position. To create her shape, a female model spent a week posing for a series of molds. The robot runs on a self-contained battery that lasts about three hours on one charge, Hines says. Customers can recharge Roxxxy with an electrical cord that plugs into her back.

"A motor in her chest pumps heated air through a tube that winds through the robot's body, which Hines says keeps her warm to the touch. Roxxxy also has sensors in her hands and genital areas -- yes, she is anatomically correct -- that will trigger vocal responses from her when touched. She even shudders to simulate orgasm. When someone speaks to Roxxxy, her computer converts the words to text and then uses pattern-recognition software to match them against a database containing hundreds of appropriate responses. The robot then answers aloud -- her prerecorded 'voice' is supplied by an unnamed radio host -- through a loudspeaker hidden under her wig.

" 'Everything you say to her is processed. It's very near real time, almost without delay,' Hines said of the dynamics of human-Roxxxy conversation. 'To make it as realistic as possible, she has different dialogue at different times. She talks in her sleep. She even snores.' (The snoring feature can be turned off, he says.) Roxxxy understands and speaks only English for now, but Hines' True Companion company is developing Japanese and Spanish versions. For an extra fee, he'll also record customizable dialogue and phrases for each client, which means Roxxxy could talk to you about NASCAR, say, or the intricacies of politics in the Middle East" (http://www.cnn.com/2010/TECH/02/01/sex.robot/, accessed 02-06-2010).
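The mechanism described, converting speech to text and matching it against a database of canned responses, is classic keyword pattern matching rather than open-ended language understanding. A hypothetical minimal version (the pattern table is invented; the real product's database is proprietary):

```python
# Minimal sketch of a pattern-matching responder: match the recognized text
# against keyword patterns and return the first canned reply that fits.
import re

# Hypothetical pattern -> response table.
response_table = [
    (r"\bhello\b|\bhi\b", "Hello! It's nice to hear your voice."),
    (r"\bhow are you\b",  "I'm doing wonderfully, thanks for asking."),
    (r"\bweather\b",      "I hope it's sunny where you are."),
]

def respond(utterance, default="Tell me more."):
    text = utterance.lower()
    for pattern, reply in response_table:
        if re.search(pattern, text):
            return reply
    return default  # fallback when nothing in the table matches

print(respond("Hi there!"))  # Hello! It's nice to hear your voice.
```

This is essentially the ELIZA approach from the 1960s, which is why such systems seem conversational in short exchanges but break down as soon as an utterance falls outside the response table.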

In December 2013 I revisited the Truecompanion.com website, which then advertised Roxxxy as "World's First Sex Robot: Always Turned on and Ready to Talk or Play." By then the company had diversified into three models of female sex robots, and was planning to introduce Rocky, a male sex robot: "Rocky is described as everyone's dream date! – just imagine putting together a great body along with a sparkling personality where your man is focused on making you happy!"

The First Fragment of Contemporary Classical Music Composed by a Computer in its Own Style October 15, 2010

On October 15, 2010 the Iamus computer cluster developed by Francisco Vico and associates at the Universidad de Málaga, using the Melomics system, composed Opus One. This composition was arguably the first fragment of professional contemporary classical music ever composed by a computer in its own style, rather than emulating the style of existing composers.

"Melomics (derived from the genomics of melodies) is a propietary computational system for the automatic composition of music (with no human intervention), based on bioinspired methods and commercialized by Melomics Media" (Wikipedia article on Melomics, accessed 11-13-2013).

Kinect for Xbox is Introduced November 4, 2010

On November 4, 2010 Microsoft introduced Kinect, a natural user interface for the Xbox 360 video game platform providing full-body 3D motion capture, facial recognition, and voice recognition. The device featured an "RGB camera, depth sensor and multi-array microphone running proprietary software." It enabled users to control and interact with the Xbox 360 without the need to touch a game controller.

"The system tracks 48 parts of your body in three-dimensional space. It doesn’t just know where your hand is, like the Wii. No, the Kinect tracks the motion of your head, hands, torso, waist, knees, feet and so on" (http://www.nytimes.com/2010/11/04/technology/personaltech/04pogue.html?scp=1&sq=kinect&st=cse, accessed 11-04-2010).

Can an Artificial Intelligence Get into the University of Tokyo? 2011

In 2011 the National Institute of Informatics in Japan initiated the Todai Robot Project, with the goals of achieving a high score on the National Center Test for University Admissions by 2016 and passing the University of Tokyo entrance exam in 2021.

"INTERVIEW WITH Yusuke Miyao, June 2013

Associate Professor, Digital Content and Media Sciences Research Division, NII; Associate Professor, Department of Informatics; "Todai Robot Project" Sub-Project Director 

Can a Robot Get Into the University of Tokyo? 
The Challenges Faced by the Todai Robot Project

Tainaka Could you tell us the objectives of the project?
Miyao We are researching the process of thinking by developing a computer program that will be able to pass the University of Tokyo entrance exam. The program will need to integrate multiple artificial intelligence technologies, such as language understanding, in order to develop all of the processes, from reading the question to determining the correct answer. While the process of thinking is second nature to people, many of the processes involved in mental computation are still mysteries, so the project will be taking on challenges that previous artificial intelligence research has yet to touch.
Tainaka You're not going to make a physical robot?
Miyao No. What we'll be making is a robot brain. It won't be an actual robot that walks through the gate, goes to the testing site, picks up a pencil, and answers the questions.
Tainaka Why was passing the university entrance exam selected as the project's goal?
Miyao The key point is that what's difficult for people is different than what's difficult for computers. Computers excel at calculation, and can beat professional chess and shogi players at their games. IBM's "Watson" question-answering system*1 became a quiz show world champion. For a person, beating a professional shogi player is far harder than passing the University of Tokyo entrance exam, but for a computer, shogi is easier. What makes the University of Tokyo entrance exam harder is that the rules are less clearly defined than they are for shogi or a quiz show. From the perspective of using knowledge and data to answer questions, the university entrance exam requires a more human-like approach to information processing. However, it does not rely as much on common sense as an elementary school exam or everyday life, so it's a reasonable target for the next step in artificial intelligence research.
Tainaka Elementary school exam questions are more difficult?
Miyao For example, consider the sentence "Assuming there is a factory that can build 3 cars per day, how many days would it take to build 12 cars?" A computer would not be able to create a formula that expresses this in the same way a person could, near-instantaneously. It wouldn't understand the concepts of "car" or "factory", so it wouldn't be able to understand the relationship between them. Compared to that, calculating integrals is far easier.
Tainaka The National Center Test for University Admissions is multiple choice, and the second-stage exam is a short answer exam, right?
Miyao Of course, the center test is easier, and it has clear right and wrong answers, making it easier to grade. For the second-stage exam, examinees must give written answers, so during the latter half of the project, we will be shifting our focus to creating answers which are clear and comprehensible to human readers.
Tainaka Does the difficulty vary by test subject?
Miyao What varies more than the difficulty itself are the issues that have to be tackled by artificial intelligence research. The social studies questions, which test knowledge, rely on memory, so one might assume they would be easy for computers, but it's actually difficult for a computer to determine if the text of a problem corresponds to knowledge the computer possesses. What makes that identification possible is "Textual Entailment Recognition"*2, an area in which we are making progress, but still face many challenges. Ethics questions, on the other hand, frequently cover common sense, and require the reader to understand the Japanese language, so they are especially difficult for computers, which lack this common sense. Personally, I had a hard time with questions requiring memorization, so I picked ethics. (laughs)
Tainaka So ethics and language questions are difficult because they involve common sense.
Miyao Similar challenges are encountered with English, other than the common sense issue. For example, English questions include fill-in-the-blank questions, but it's difficult to pick natural conversational answers without actual life experience. Reading comprehension questions test logical and rational thought, but it's not really clear what this "logical and rational thought" consists of. The question, then, is how to teach "logical and rational thought" to computers. Also, for any subject, questions sometimes include photos, graphs, and comic strips. Humans understand them unconsciously, but it's extremely difficult to have computers understand them.
Tainaka Aren't mathematical formula questions easy to answer?
Miyao If they were presented as pure formulas, computers would excel at them, but the reality is not so simple. The questions themselves are written in natural language, making it difficult to map to the non-linguistic world of formulas. The same difficulty can be found with numerical fields, like physics or chemistry, or in fields which are difficult to convert into computer-interpretable symbols, such as the emotional and situational experience of reading a novel. That's what makes elementary school exams difficult.
Tainaka There are a mountain of problems.
Miyao There are many problems that nobody has yet taken on. That's what makes it challenging, and it's very exciting working with people from different fields. Looking at the practical results of this project, our discoveries and developments will be adapted for use in general purpose systems, such as meaning-based searching and conversation systems, real-world robot interfaces, and the like. The Todai Robot Project covers a diverse range of research fields, and NII plans to build an infrastructure, organizing data and creating platforms, and bring in researchers from both inside and outside Japan to achieve our objectives. In the future we will build an even more open platform, creating opportunities for members of the general public to participate as well, and I hope anyone motivated will take part" (http://21robot.org/introduce/NII-Interview/, accessed 12-30-2013).

IBM's Watson Question Answering System Defeats Humans at Jeopardy! February 14 – February 16, 2011

On February 14, 2011 IBM's Watson question answering system supercomputer, developed at IBM's T. J. Watson Research Center, Yorktown Heights, New York, and running DeepQA software, defeated the two best human Jeopardy! players, Ken Jennings and Brad Rutter. Watson's hardware consisted of 90 IBM Power 750 Express servers, each utilizing a 3.5 GHz POWER7 eight-core processor with four threads per core. The system operated with 16 terabytes of RAM.

The success of the machine underlined very significant advances in deep analytics and in the ability of a machine to process unstructured data, and especially to interpret and speak natural language.

"Watson is an effort by I.B.M. researchers to advance a set of techniques used to process human language. It provides striking evidence that computing systems will no longer be limited to responding to simple commands. Machines will increasingly be able to pick apart jargon, nuance and even riddles. In attacking the problem of the ambiguity of human language, computer science is now closing in on what researchers refer to as the “Paris Hilton problem” — the ability, for example, to determine whether a query is being made by someone who is trying to reserve a hotel in France, or simply to pass time surfing the Internet.  

"If, as many predict, Watson defeats its human opponents on Wednesday, much will be made of the philosophical consequences of the machine’s achievement. Moreover, the I.B.M. demonstration also foretells profound sociological and economic changes.  

"Traditionally, economists have argued that while new forms of automation may displace jobs in the short run, over longer periods of time economic growth and job creation have continued to outpace any job-killing technologies. For example, over the past century and a half the shift from being a largely agrarian society to one in which less than 1 percent of the United States labor force is in agriculture is frequently cited as evidence of the economy’s ability to reinvent itself.  

"That, however, was before machines began to 'understand' human language. Rapid progress in natural language processing is beginning to lead to a new wave of automation that promises to transform areas of the economy that have until now been untouched by technological change.  

" 'As designers of tools and products and technologies we should think more about these issues,' said Pattie Maes, a computer scientist at the M.I.T. Media Lab. Not only do designers face ethical issues, she argues, but increasingly as skills that were once exclusively human are simulated by machines, their designers are faced with the challenge of rethinking what it means to be human.  

"I.B.M.’s executives have said they intend to commercialize Watson to provide a new class of question-answering systems in business, education and medicine. The repercussions of such technology are unknown, but it is possible, for example, to envision systems that replace not only human experts, but hundreds of thousands of well-paying jobs throughout the economy and around the globe. Virtually any job that now involves answering questions and conducting commercial transactions by telephone will soon be at risk. It is only necessary to consider how quickly A.T.M.’s displaced human bank tellers to have an idea of what could happen" (John Markoff,"A Fight to Win the Future: Computers vs. Humans," http://www.nytimes.com/2011/02/15/science/15essay.html?hp, accessed 02-17-2011).

♦ As a result of this technological triumph, IBM took the unusual step of building a colorful website concerning all aspects of Watson, including numerous embedded videos.

♦ A few of many articles on the match published during or immediately after it included:

John Markoff, "Computer Wins on 'Jeopardy!': Trivial, It's Not," http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?hpw

Samara Lynn, "Dissecting IBM Watson's Jeopardy! Game," PC Magazine, http://www.pcmag.com/article2/0,2817,2380351,00.asp

John C. Dvorak, "Watson is Creaming the Humans. I Cry Foul," PC Magazine, http://www.pcmag.com/article2/0,2817,2380451,00.asp

Henry Lieberman published a three-part article in MIT Technology Review, "A Worthwhile Contest for Artificial Intelligence" http://www.technologyreview.com/blog/guest/26391/?nlid=4132

♦ An article which discussed the weaknesses of Watson versus a human in Jeopardy! was Greg Lindsay, "How I Beat IBM's Watson at Jeopardy! (3 Times)" http://www.fastcompany.com/1726969/how-i-beat-ibms-watson-at-jeopardy-3-times

♦ An opinion column emphasizing the limitations of Watson compared to the human brain was Stanley Fish, "What Did Watson the Computer Do?" http://opinionator.blogs.nytimes.com/2011/02/21/what-did-watson-the-computer-do/

♦ A critical response to Stanley Fish's column by Sean Dorrance Kelly and Hubert Dreyfus, author of What Computers Can't Do, was published in The New York Times at: http://opinionator.blogs.nytimes.com/2011/02/28/watson-still-cant-think/?nl=opinion&emc=tya1

The Impact of Automation on Legal Research March 4, 2011

"Armies of Expensive Lawyers Replaced by Cheaper Software," an article by John Markoff published in The New York Times, discussed the use of "e-discovery" (ediscovery) software which uses artificial intelligence to analyze millions of electronic documents from the linguistic, conceptual and sociological standpoint in a fraction of the time and at a fraction of the cost of the hundreds of lawyers previously required to do the task.

"These new forms of automation have renewed the debate over the economic consequences of technological progress.  

"David H. Autor, an economics professor at the Massachusetts Institute of Technology, says the United States economy is being 'hollowed out.' New jobs, he says, are coming at the bottom of the economic pyramid, jobs in the middle are being lost to automation and outsourcing, and now job growth at the top is slowing because of automation.  

" 'There is no reason to think that technology creates unemployment,' Professor Autor said. 'Over the long run we find things for people to do. The harder question is, does changing technology always lead to better jobs? The answer is no.'

"Automation of higher-level jobs is accelerating because of progress in computer science and linguistics. Only recently have researchers been able to test and refine algorithms on vast data samples, including a huge trove of e-mail from the Enron Corporation. 

“ 'The economic impact will be huge,' said Tom Mitchell, chairman of the machine learning department at Carnegie Mellon University in Pittsburgh. 'We’re at the beginning of a 10-year period where we’re going to transition from computers that can’t understand language to a point where computers can understand quite a bit about language.' ”

The Impact of Artificial Intelligence and Automation on Jobs March 6, 2011

In an op-ed column entitled "Degrees and Dollars," published in The New York Times, the Nobel Prize-winning economist Paul Krugman of Princeton wrote concerning the impact of artificial intelligence and automation on jobs:

"The fact is that since 1990 or so the U.S. job market has been characterized not by a general rise in the demand for skill, but by “hollowing out”: both high-wage and low-wage employment have grown rapidly, but medium-wage jobs — the kinds of jobs we count on to support a strong middle class — have lagged behind. And the hole in the middle has been getting wider: many of the high-wage occupations that grew rapidly in the 1990s have seen much slower growth recently, even as growth in low-wage employment has accelerated."

"Some years ago, however, the economists David Autor, Frank Levy and Richard Murnane argued that this was the wrong way to think about it. Computers, they pointed out, excel at routine tasks, “cognitive and manual tasks that can be accomplished by following explicit rules.” Therefore, any routine task — a category that includes many white-collar, nonmanual jobs — is in the firing line. Conversely, jobs that can’t be carried out by following explicit rules — a category that includes many kinds of manual labor, from truck drivers to janitors — will tend to grow even in the face of technological progress.  

"And here’s the thing: Most of the manual labor still being done in our economy seems to be of the kind that’s hard to automate. Notably, with production workers in manufacturing down to about 6 percent of U.S. employment, there aren’t many assembly-line jobs left to lose. Meanwhile, quite a lot of white-collar work currently carried out by well-educated, relatively well-paid workers may soon be computerized. Roombas are cute, but robot janitors are a long way off; computerized legal research and computer-aided medical diagnosis are already here.

"And then there’s globalization. Once, only manufacturing workers needed to worry about competition from overseas, but the combination of computers and telecommunications has made it possible to provide many services at long range. And research by my Princeton colleagues Alan Blinder and Alan Krueger suggests that high-wage jobs performed by highly educated workers are, if anything, more “offshorable” than jobs done by low-paid, less-educated workers. If they’re right, growing international trade in services will further hollow out the U.S. job market."

The First Neurosynaptic Chips August 2011

In August 2011, as part of the SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project, IBM researchers led by Dharmendra S. Modha, manager and lead researcher of the Cognitive Computing Group at IBM Almaden Research Center, demonstrated two neurosynaptic cores that moved beyond von Neumann architecture and programming toward ultra-low-power, super-dense, brain-inspired cognitive computing. These new silicon neurosynaptic chips would be the building blocks for computing systems that emulate the brain's computing efficiency, size, and power usage.

Free Online Artificial Intelligence Course Attracts 58,000 Students August 15, 2011

Sebastian Thrun, Research Professor of Computer Science at Stanford and a leading roboticist, and Peter Norvig, Director of Research at Google, Inc., in partnership with the Stanford University School of Engineering, offered a free online course entitled An Introduction to Artificial Intelligence.

According to an article by John Markoff in The New York Times, by August 15, 2011 more than 58,000 students from around the world had registered for this free course— nearly four times Stanford's entire student body.

"The online students will not get Stanford grades or credit, but they will be ranked in comparison to the work of other online students and will receive a 'statement of accomplishment.'

"For the artificial intelligence course, students may need some higher math, like linear algebra and probability theory, but there are no restrictions to online participation. So far, the age range is from high school to retirees, and the course has attracted interest from more than 175 countries" (http://www.nytimes.com/2011/08/16/science/16stanford.html?hpw, accessed 08-16-2011).

One fairly obvious reason why so many students signed up is that Norvig is famous in the field as the co-author, with Stuart Russell, of the standard textbook on AI, Artificial Intelligence: A Modern Approach (first edition: 1995), which has been translated into many languages and has sold over 200,000 copies.

Toward Cognitive Computing Systems August 18, 2011

On August 18, 2011 "IBM researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition. The technology could yield many orders of magnitude less power consumption and space than used in today’s computers. 

"In a sharp departure from traditional concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. Its first two prototype chips have already been fabricated and are currently undergoing testing.  

"Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brain's structural and synaptic plasticity.  

"To do this, IBM is combining principles from nanoscience, neuroscience and supercomputing as part of a multi-year cognitive computing initiative. The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

"The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment – all while rivaling the brain’s compact size and low power usage. The IBM team has already successfully completed Phases 0 and 1.  

" 'This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century,' said Dharmendra Modha, project leader for IBM Research. 'Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture. These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government.' " (http://www-03.ibm.com/press/us/en/pressrelease/35251.wss, accessed 08-21-2011).

The First Complete Album Composed Solely by Computer and Recorded by Human Musicians September 2011 – July 2, 2012

In September 2011 the Iamus computer cluster developed by Francisco Vico and associates at the Universidad de Málaga produced a composition entitled Hello World! This classical clarinet-violin-piano trio was called the first full-scale work entirely composed by a computer without any human intervention, and automatically written in a fully fledged score using conventional musical notation.

Several months later, on July 2, 2012 four compositions by the Iamus computer premiered, and were broadcast live from the School of Computer Science at Universidad de Málaga, as one of the events included in the Alan Turing year. The compositions performed at this event were later recorded by the London Symphony Orchestra, and issued in 2012 as the album entitled Iamus. This compact disc was characterized by the New Scientist as the "first complete album to be composed solely by a computer and recorded by human musicians."

Commenting on the authenticity of the music, Stephen Smoliar, critic of classical music at The San Francisco Examiner, wrote in a piece entitled "Thoughts about Iamus and the composition of music by computer," Examiner.com, January 4, 2013:

"However, where listening is concerned, the method leading to the notation is secondary. What is primary is the act of making the music itself engaged by the performers and how the listener responds to what those performers do. Put another way, the music is in the performance, rather than in the composition without which that performance would not take place. The issue is not, as Smith seems to imply at the end of her BBC report, whether 'a computer could become a more prodigious composer than Mozart, Haydn, Brahms and Beethoven combined.' The computer is only prodigious at creating more documents, and what is most interesting about the documents generated by Iamus is their capacity to challenge the creative talents of performing musicians."

The First Commercial Application of the IBM Watson Question Answering System: Medical Diagnostics September 12, 2011

Health care insurance provider WellPoint, Inc. and IBM announced an agreement to create the first commercial applications of the IBM Watson question answering system. Under the agreement, WellPoint would develop and launch Watson-based solutions to help improve patient care through the delivery of up-to-date, evidence-based health care for millions of Americans, while IBM would develop the Watson healthcare technology on which WellPoint's solutions would run.

A Silicon Chip that Mimics How the Brain's Synapses Change in Response to New Information November 2011

In November 2011, a group of MIT researchers created the first computer chip that mimicked how the brain's neurons adapt in response to new information. This biological phenomenon, known as plasticity, depends on analog, ion-based communication across the synapse between two neurons. With about 400 transistors, the silicon chip could simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other. 

"There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. A synapse is the gap between two neurons (known as the presynaptic and postsynaptic neurons). The presynaptic neuron releases neurotransmitters, such as glutamate and GABA, which bind to receptors on the postsynaptic cell membrane, activating ion channels. Opening and closing those channels changes the cell’s electrical potential. If the potential changes dramatically enough, the cell fires an electrical impulse called an action potential.

"All of this synaptic activity depends on the ion channels, which control the flow of charged atoms such as sodium, potassium and calcium. Those channels are also key to two processes known as long-term potentiation (LTP) and long-term depression (LTD), which strengthen and weaken synapses, respectively."

"The MIT researchers designed their computer chip so that the transistors could mimic the activity of different ion channels. While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell. 

“ 'We can tweak the parameters of the circuit to match specific ion channels,” Poon says. 'We now have a way to capture each and every ionic process that’s going on in a neuron.'

"Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says" (http://www.mit.edu/newsoffice/2011/brain-chip-1115.html, accessed 01-01-2014).

Rachmuth, G., Shouval, H. Z., Bear, M. F., Poon, C.-S., "A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity," Proceedings of the National Academy of Sciences 108, no. 49 (December 6, 2011) E1266-E1274, doi: 10.1073/pnas.1106161108.
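The LTP/LTD behavior described above is commonly modeled with a spike-timing-dependent plasticity (STDP) rule. The following is a minimal sketch of that textbook rule, not the MIT chip's actual circuit equations; the parameter values are illustrative assumptions:

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """One spike-timing-dependent plasticity (STDP) update.
    dt = t_post - t_pre in milliseconds; w is a synaptic weight in [0, 1].
    a_plus, a_minus, and tau are illustrative constants."""
    if dt > 0:
        # Pre fires before post: long-term potentiation (LTP).
        w += a_plus * math.exp(-dt / tau)
    else:
        # Post fires before (or with) pre: long-term depression (LTD).
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, w))

w = 0.5
print(stdp_update(w, dt=+5.0))   # pre -> post: the synapse strengthens
print(stdp_update(w, dt=-5.0))   # post -> pre: the synapse weakens
```

The exponential window means that closely timed spike pairs change the weight far more than widely separated ones, the qualitative behavior the analog transistors on the chip were built to reproduce.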

IBM's Watson Question Answering System to Team with Cedars-Sinai Oschin Comprehensive Cancer Institute December 16, 2011

Health Insurance provider WellPoint announced that the Cedars-Sinai Samuel Oschin Comprehensive Cancer Institute in Los Angeles would provide clinical expertise to help shape WellPoint's new health care solutions utilizing IBM's Watson question answering system.

"It is estimated that new clinical research and medical information doubles every five years, and nowhere is this knowledge advancing more quickly than in the complex area of cancer care.  

"WellPoint believes oncology is one of the medical fields that could greatly benefit from this technology, given IBM Watson's ability to respond to inquiries posed in natural language and to learn from the responses it generates. The WellPoint health care solutions will draw from vast libraries of information including medical evidence-based scientific and health care data, and clinical insights from institutions like Cedars-Sinai. The goal is to assist physicians in evaluating evidence-based treatment options that can be delivered to the physician in a matter of seconds for assessment. WellPoint and Cedars-Sinai envision that this valuable enhancement to the decision-making process could empower physician-patient discussions about the best and most effective courses of treatment and improve the overall quality of patient care.  

"Cedars-Sinai was selected as WellPoint's partner based on its reputation as one of the nation's premier cancer institutions and its proven results in the diagnosis and treatment of complex cancers. Cedars-Sinai has experience and demonstrated success in working with technology innovators and shares WellPoint's commitment to improving the quality, efficiency and effectiveness of health care through innovation and technology.  

"Cedars-Sinai's oncology experts will help develop recommendations on appropriate clinical content for the WellPoint health care solutions. They will also assist in the evaluation and testing of the specific tools that WellPoint plans to develop for the oncology field utilizing IBM's Watson technology. The Cedars-Sinai cancer experts will enter hypothetical patient scenarios, evaluate the proposed treatment options generated by IBM Watson, and provide guidance on how to improve the content and utility of the treatment options provided to the physicians.  

"Leading Cedars-Sinai's efforts is M. William Audeh, M.D., medical director of its Samuel Oschin Comprehensive Cancer Institute. Dr. Audeh will work closely with WellPoint's clinical experts to provide advice on how the solutions may be best utilized in clinical practice to support increased understanding of the evolving body of knowledge in cancer, including emerging therapies not widely known by community physicians. As the solutions are developed, Dr. Audeh will also provide guidance on how to make the WellPoint offering useful and practical for physicians and patients.

" 'As we design the WellPoint systems that leverage IBM Watson's capabilities, it is essential that we incorporate the highly-specialized knowledge and real-life practice experiences of the nation's premier clinical experts,' said Harlan Levine, MD, executive vice president of WellPoint's Comprehensive Health Solutions. 'The contributions from Dr. Audeh, coupled with the expertise throughout Cedars-Sinai's Samuel Oschin Comprehensive Cancer Institute, will be invaluable to implementing this WellPoint offering and could ultimately benefit millions of Americans across the country.'

"WellPoint anticipates deploying their first offering next year, working with select physician groups in clinical pilots" (http://ir.wellpoint.com/phoenix.zhtml?c=130104&p=irol-newsArticle&ID=1640553&highlight=, accessed 12-17-2011).

2012 – 2016

A Large Scale Neural Network Appears to Emulate Activity in the Visual Cortex June 26, 2012

At the International Conference on Machine Learning held in Edinburgh, Scotland from June 26–July 1, 2012 researchers at Google and Stanford University reported that they had developed software, modeled on the way biological neurons interact with each other, that taught itself to distinguish objects in YouTube videos. Although it was most effective recognizing cats and human faces, the system obtained 15.8% accuracy in recognizing 22,000 object categories from ImageNet, or 3,200 items in all, a 70 percent improvement over the previous best-performing software. To do so the scientists connected 16,000 computer processors to create a neural network for machine learning with more than one billion connections. Then they turned the neural network loose on the Internet to learn on its own.

John Markoff, who had been presented with the experimental results before the meeting, published an article on June 25, 2012 entitled "How Many Computers to Identify a Cat? 16,000," from which I quote selections:

"Presented with 10 million digital images selected from YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats....

"The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

"Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech....

"The [YouTube] videos were selected randomly and that in itself is an interesting comment on what interests humans in the Internet age. However, the research is also striking. That is because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists that suggest individual neurons are trained inside the brain to detect significant objects.

"Currently much commercial machine vision technology is done by having humans 'supervise' the learning process by labeling specific features. In the Google research, the machine was given no help in identifying features.

“ 'The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,' Dr. Ng said.

“ 'We never told it during the training, ‘This is a cat,’ ' said Dr. Dean, who originally helped Google design the software that lets it easily break programs into many tasks that can be computed simultaneously. 'It basically invented the concept of a cat. We probably have other ones that are side views of cats.'

"The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.

"Neuroscientists have discussed the possibility of what they call the 'grandmother neuron,' specialized cells in the brain that fire when they are exposed repeatedly or “trained” to recognize a particular face of an individual.

“ 'You learn to identify a friend through repetition,' said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.

"While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.

“ 'A loose and frankly awful analogy is that our numerical parameters correspond to synapses,' said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.

“ 'It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,' the researchers wrote.

"Despite being dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.

“ 'The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,' said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”

"Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company’s search business and related services. Potential applications include improvements to image search, speech recognition and machine language translation.

"Despite their success, the Google researchers remained cautious about whether they had hit upon the holy grail of machines that can teach themselves.

“ 'It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,' said Dr. Ng."

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng, "Building High-level Features Using Large Scale Unsupervised Learning," arXiv:1112.6209 [cs.LG], 12 July 2012.  
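The unsupervised feature learning the article describes, "you let the data speak and have the software automatically learn from the data," can be illustrated at toy scale with Oja's rule, under which a single linear "neuron" discovers the dominant direction of variation in unlabeled data. This is only an analogy to the billion-connection deep network in the paper; all numbers below are illustrative:

```python
import random

random.seed(0)
w = [0.5, 0.5]           # the neuron's two input weights
lr = 0.01                # learning rate

for _ in range(5000):
    # Unlabeled synthetic data: ten times more spread along the first axis.
    x = [random.gauss(0, 1.0), random.gauss(0, 0.1)]
    y = w[0] * x[0] + w[1] * x[1]                    # neuron's response
    # Oja's rule: Hebbian growth with built-in weight normalization.
    w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

print(w)  # the weight vector aligns with the high-variance axis
```

With no labels and no supervision, the weight vector drifts toward the direction along which the data varies most, a one-neuron caricature of the network's self-taught cat detector.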

Using 100 Linked Computers and Artificial Intelligence to Re-Assemble Fragments from the Cairo Genizah May 2013

For years I have followed computer applications in the humanities. Some, such as From Cave Paintings to the Internet, are on a small personal scale. Others involve enormous corpora of data, as in computational linguistics, where larger seems always to be better.

The project called "Re-joining the Cairo Genizah", a joint venture of Genazim, The Friedberg Genizah Project, founded in 1999 in Toronto, Canada, and The Blavatnik School of Computer Science at Tel-Aviv University, seems potentially to be one of the most productive large-scale projects currently underway. Because about 320,000 pages and parts of pages from the Genizah — in Hebrew, Aramaic, and Judeo-Arabic (Arabic transliterated into Hebrew letters) — are scattered in 67 libraries and private collections around the world, only a fraction of them have been collated and cataloged. Though approximately 200 books had been published on the Genizah manuscripts by 2013, perhaps only 4,000 of the manuscripts had been pieced together, through a painstaking, expensive, exclusive process that relied heavily on luck.

In 2013 the Genazim project was underway to collate and piece together as many of these fragments as could be re-assembled using current computing technology:

"First there was a computerized inventory of 301,000 fragments, some as small as an inch. Next came 450,000 high-quality photographs, on blue backgrounds to highlight visual cues, and a Web site where researchers can browse, compare, and consult thousands of bibliographic citations of published material.  

"The latest experiment involves more than 100 linked computers located in a basement room at Tel Aviv University here, cooled by standup fans. They are analyzing 500 visual cues for each of 157,514 fragments, to check a total of 12,405,251,341 possible pairings. The process began May 16 and should be done around June 25, according to an estimate on the project’s Web site.  
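The figure of 12,405,251,341 pairings quoted above is exactly the number of unordered pairs that can be drawn from 157,514 fragments, C(157,514, 2), which is easy to verify:

```python
import math

fragments = 157_514
# Every unordered pair of fragments is a candidate join: n * (n - 1) / 2.
pairings = math.comb(fragments, 2)
print(f"{pairings:,}")  # 12,405,251,341, matching the project's figure
```

The sheer size of that number is why the 500 visual cues per fragment have to be compared by a cluster of linked computers rather than by human catalogers.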

"Yaacov Choueka, a retired professor of computer science who runs the Friedberg-financed Genazim project in Jerusalem, said the goals are not only to democratize access to the documents and speed up the elusive challenge of joining fragments, but to harness the computer’s ability to pose new research questions. . . .

"Another developing technology is a 'jigsaw puzzle' feature, with touch-screen technology that lets users enlarge, turn and skew fragments to see if they fit together. Professor Choueka, who was born in Cairo in 1936, imagines that someday soon such screens will be available alongside every genizah collection. And why not a genizah-jigsaw app for smartphones?

“ 'The thing it really makes possible is people from all walks of life, in academia and out, to look at unpublished material,' said Ben Outhwaite, head of the Genizah Research Unit at Cambridge University, home to 60 percent of the fragments. 'No longer are we going to see a few great scholarly names hoarding particular parts of the genizah and have to wait 20 years for their definitive publication. Now everyone can dive in.'

"What they will find goes far beyond Judaica. . . . Marina Rustow, a historian at Johns Hopkins University, said about 15,000 genizah fragments deal with everyday, nonreligious matters, most of them dated 950 to 1250. From these, she said, scholars learned that Cairenes imported sheep cheese from Sicily — it was deemed kosher — and filled containers at the bazaar with warm food in an early version of takeout" (http://www.nytimes.com/2013/05/27/world/middleeast/computers-piecing-together-jigsaw-of-jewish-lore.html?pagewanted=2&hp, accessed 05-27-2013).

The Growing Economic and Social Impact of Artificial Intelligence December 29, 2013

On December 29, 2013 The New York Times published an article by Michael Fitzpatrick on Japan's Todai Robot Project entitled "Computers Jump to the Head of the Class." This was the first article that I ever read that spelled out the potential dystopian impact of advances in artificial intelligence on traditional employment and also on education. Because the article was relatively brief I decided to quote it in full:

"TOKYO — If a computer could ace the entrance exam for a top university, what would that mean for mere mortals with average intellects? This is a question that has bothered Noriko Arai, a mathematics professor, ever since the notion entered her head three years ago.

“I wanted to get a clear image of how many of our intellectual activities will be replaced by machines. That is why I started the project: Can a Computer Enter Tokyo University? — the Todai Robot Project,” she said in a recent interview.

Tokyo University, known as Todai, is Japan’s best. Its exacting entry test requires years of cramming to pass and can defeat even the most erudite. Most current computers, trained in data crunching, fail to understand its natural language tasks altogether.

Ms. Arai has set researchers at Japan’s National Institute of Informatics, where she works, the task of developing a machine that can jump the lofty Todai bar by 2021.

If they succeed, she said, such a machine should be capable, with appropriate programming, of doing many — perhaps most — jobs now done by university graduates.

With the development of artificial intelligence, computers are starting to crack human skills like information summarization and language processing.

Given the exponential growth of computing power and advances in artificial intelligence, or A.I., programs, the Todai robot’s task, though daunting, is feasible, Ms. Arai says. So far her protégé, a desktop computer named Todai-kun, is excelling in math and history but needs more effort in reading comprehension.

There is a significant danger, Ms. Arai says, that the widespread adoption of artificial intelligence, if not well managed, could lead to a radical restructuring of economic activity and the job market, outpacing the ability of social and education systems to adjust.

Intelligent machines could be used to replace expensive human resources, potentially undermining the economic value of much vocational education, Ms. Arai said.

“Educational investment will not be attractive to those without unique skills,” she said. Graduates, she noted, need to earn a return on their investment in training: “But instead they will lose jobs, replaced by information simulation. They will stay uneducated.”

In such a scenario, high-salary jobs would remain for those equipped with problem-solving skills, she predicted. But many common tasks now done by college graduates might vanish.

“We do not know in which areas human beings outperform machines. That means we cannot prepare for the changes,” she said. “Even during the industrial revolution change was a lot slower.”

Over the next 10 to 20 years, “10 percent to 20 percent pushed out of work by A.I. will be a catastrophe,” she says. “I can’t begin to think what 50 percent would mean — way beyond a catastrophe and such numbers can’t be ruled out if A.I. performs well in the future.”

She is not alone in such an assessment. A recent study published by the Program on the Impacts of Future Technology, at Oxford University’s Oxford Martin School, predicted that nearly half of all jobs in the United States could be replaced by computers over the next two decades.

Some researchers disagree. Kazumasa Oguro, professor of economics at Hosei University in Tokyo, argues that smart machines should increase employment. “Most economists believe in the principle of comparative advantage,” he said. “Smart machines would help create 20 percent new white-collar jobs because they expand the economy. That’s comparative advantage.”

Others are less sanguine. Noriyuki Yanagawa, professor of economics at Tokyo University, says that Japan, with its large service sector, is particularly vulnerable.

“A.I. will change the labor demand drastically and quickly,” he said. “For many workers, adjusting to the drastic change will be extremely difficult.”

Smart machines will give companies “the opportunity to automate many tasks, redesign jobs, and do things never before possible even with the best human work forces,” according to a report this year by the business consulting firm McKinsey.

Advances in speech recognition, translation and pattern recognition threaten employment in the service sectors — call centers, marketing and sales — precisely the sectors that provide most jobs in developed economies. As if to confirm this shift from manpower to silicon power, corporate investment in the United States in equipment and software has never been higher, according to Andrew McAfee, the co-author of “Race Against the Machine” — a cautionary tale for the digitized economy.

Yet according to the technology market research firm Gartner, top business executives worldwide have not grasped the speed of digital change or its potential impact on the workplace. Gartner’s 2013 chief executive survey, published in April, found that 60 percent of executives surveyed dismissed as “futurist fantasy” the possibility that smart machines could displace many white-collar employees within 15 years.

“Most business and thought leaders underestimate the potential of smart machines to take over millions of middle-class jobs in the coming decades,” Kenneth Brant, research director at Gartner, told a conference in October: “Job destruction will happen at a faster pace, with machine-driven job elimination overwhelming the market’s ability to create valuable new ones.”

Optimists say this could lead to the ultimate elimination of work — an “Athens without the slaves” — and a possible boom for less vocational-style education. Mr. Brant’s hope is that such disruption might lead to a system where individuals are paid a citizen stipend and be free for education and self-realization.

“This optimistic scenario I call Homo Ludens, or ‘Man, the Player,’ because maybe we will not be the smartest thing on the planet after all,” he said. “Maybe our destiny is to create the smartest thing on the planet and use it to follow a course of self-actualization.”


A Neural Network that Reads Millions of Street Numbers January 1, 2014

To read millions of street numbers on buildings photographed for Google StreetView, Google built a neural network that developed reading accuracy comparable to humans assigned to the task. The company uses the images to read house numbers and match them to their geolocation, storing the geolocation of each building in its database. Having the street numbers matched to physical location on a map is always useful, but it is particularly useful in places where street numbers are otherwise unavailable, or in places such as Japan and South Korea, where buildings are rarely numbered sequentially along the street but in other ways, such as the order in which they were constructed — a system that makes many buildings exceedingly hard to find, even for locals.

"Recognizing arbitrary multi-character text in unconstrained natural photographs is a hard problem. In this paper, we address an equally hard sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from Street View imagery. Traditional approaches to solve this problem typically separate out the localization, segmentation, and recognition steps. In this paper we propose a unified approach that integrates these three steps via the use of a deep convolutional neural network that operates directly on the image pixels. We employ the DistBelief implementation of deep neural networks in order to train large, distributed neural networks on high quality images. We find that the performance of this approach increases with the depth of the convolutional network, with the best performance occurring in the deepest architecture we trained, with eleven hidden layers. We evaluate this approach on the publicly available SVHN dataset and achieve over 96% accuracy in recognizing complete street numbers. We show that on a per-digit recognition task, we improve upon the state-of-the-art and achieve 97.84% accuracy. We also evaluate this approach on an even more challenging dataset generated from Street View imagery containing several tens of millions of street number annotations and achieve over 90% accuracy. Our evaluations further indicate that at specific operating thresholds, the performance of the proposed system is comparable to that of human operators. To date, our system has helped us extract close to 100 million physical street numbers from Street View imagery worldwide."

Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, Vinay Shet, "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks," arXiv:1312.6082v2.
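The paper's eleven-layer network is far larger, but the elementary operation a convolutional network applies "directly on the image pixels" can be sketched in a few lines. The kernel and tiny "image" below are invented for illustration; this is a single convolution-plus-ReLU step, not Google's DistBelief model.

```python
# A minimal sketch of one convolutional layer's building block:
# slide a small kernel over the image, take weighted sums, apply ReLU.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(0.0, s))  # ReLU non-linearity
        out.append(row)
    return out

# A hand-built vertical-edge detector applied to a tiny 4x4 "image"
# whose right half is bright; the feature map responds at the edge.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
feature_map = conv2d(img, edge)
```

A deep network stacks many such layers (with learned rather than hand-built kernels), followed by separate output layers for the number's length and each digit position.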


DeepFace, Facial Verification Software Developed at Facebook, Approaches Human Ability March 17, 2014

On March 17, 2014 MIT Technology Review published an article by Tom Simonite on Facebook's facial recognition software, DeepFace, which I quote:

"Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.

"That’s a significant advance over previous face-matching software, and it demonstrates the power of a new approach to artificial intelligence known as deep learning, which Facebook and its competitors have bet heavily on in the past year (see 'Deep Learning'). This area of AI involves software that uses networks of simulated neurons to learn to recognize patterns in large amounts of data.

"'You normally don’t see that sort of improvement,' says Yaniv Taigman, a member of Facebook’s AI team, a research group created last year to explore how deep learning might help the company (see 'Facebook Launches Advanced AI Effort'). 'We closely approach human performance,' says Taigman of the new software. He notes that the error rate has been reduced by more than a quarter relative to earlier software that can take on the same task.

"Facebook’s new software, known as DeepFace, performs what researchers call facial verification (it recognizes that two images show the same face), not facial recognition (putting a name to a face). But some of the underlying techniques could be applied to that problem, says Taigman, and might therefore improve Facebook’s accuracy at suggesting whom users should tag in a newly uploaded photo.

"However, DeepFace remains purely a research project for now. Facebook released a research paper on the project last week, and the researchers will present the work at the IEEE Conference on Computer Vision and Pattern Recognition in June. 'We are publishing our results to get feedback from the research community,' says Taigman, who developed DeepFace along with Facebook colleagues Ming Yang and Marc’Aurelio Ranzato and Tel Aviv University professor Lior Wolf.

"DeepFace processes images of faces in two steps. First it corrects the angle of a face so that the person in the picture faces forward, using a 3-D model of an 'average' forward-looking face. Then the deep learning comes in as a simulated neural network works out a numerical description of the reoriented face. If DeepFace comes up with similar enough descriptions from two different images, it decides they must show the same face.

"The performance of the final software was tested against a standard data set that researchers use to benchmark face-processing software, which has also been used to measure how humans fare at matching faces.

"Neeraj Kumar, a researcher at the University of Washington who has worked on face verification and recognition, says that Facebook’s results show how finding enough data to feed into a large neural network can allow for significant improvements in machine-learning software. 'I’d bet that a lot of the gain here comes from what deep learning generally provides: being able to leverage huge amounts of outside data in a much higher-capacity learning model,' he says.

"The deep-learning part of DeepFace consists of nine layers of simple simulated neurons, with more than 120 million connections between them. To train that network, Facebook’s researchers tapped a tiny slice of data from their company’s hoard of user images—four million photos of faces belonging to almost 4,000 people. 'Since they have access to lots of data of this form, they can successfully train a high-capacity model,' says Kumar.


IBM Launches "Watson Discovery Advisor" to Hasten Breakthroughs in Scientific and Medical Research August 27, 2014

On August 27, 2014 IBM launched Watson Discovery Advisor, a computer system that could quickly identify patterns in massive amounts of data, with the expectation that this system would hasten breakthroughs in science and medical research. The computer system, which IBM made available through the cloud, understood chemical compound interaction and human language, and could visually map out connections in data. The system used a number of computational techniques to deliver its results, including natural language processing, machine learning and hypothesis generation, in which a hypothesis is created and evaluated by a number of different analysis techniques. Baylor College of Medicine used the service to analyze 23 million abstracts of medical papers in order to find more information on the p53 tumor protein, in search of more information on how to turn it on or off. From these results, Baylor researchers identified six potential proteins to target for new research. Using traditional methods it typically took researchers about a year to find a single potentially useful target protein, IBM said.

According to an article by Reuters published in The New York Times,

"Some researchers and scientists have already been using Watson Discovery Advisor to sift through the sludge of scientific papers published daily.

"Johnson & Johnson is teaching the system to read and understand trial outcomes published in journals to speed up studies of effectiveness of drugs.

"Sanofi, a French pharmaceutical company is working with Watson to identify alternate uses for existing drugs.

" 'On average, a scientist might read between one and five research papers on a good day,' said Dr. Olivier Lichtarge, investigator and professor of molecular and human genetics, biochemistry and molecular biology at Baylor College of Medicine.

"He used Watson to automatically analyze 70,000 articles on a particular protein, a process which could have taken him nearly 38 years.

“ 'Watson has demonstrated the potential to accelerate the rate and the quality of breakthrough discoveries,' he said."


Three Breakthroughs that Finally Unleashed AI on the World October 27, 2014

In "The Three Breakthroughs That Have Finally Unleased AI on the World", Wired Magazine, October 27, 2014, writer Kevin Kelly of Pacifica, California explained how breakthroughs in cheap parallel computation, big data, and better algorithms were enabling new AI-based services that were previously the domain of sci-fi and academic white papers. Within the near future AI would play greater and greater roles in aspects of everyday life, in products like Watson developed by IBM, and products from Google, Facebook and other companies. More significant than these observations were Kelly's views about the impact that these developments would have on our lives and how we may understand the difference between machine and human intelligence:

"If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers. Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists.

"In fact, this won't really be intelligence, at least not as we've come to think of it. Indeed, intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.

"What we want instead of intelligence is artificial smartness. Unlike general intelligence, smartness is focused, measurable, specific. It also can think in ways completely different from human cognition. A cute example of this nonhuman thinking is a cool stunt that was performed at the South by Southwest festival in Austin, Texas, in March of this year. IBM researchers overlaid Watson with a culinary database comprising online recipes, USDA nutritional facts, and flavor research on what makes compounds taste pleasant. From this pile of data, Watson dreamed up novel dishes based on flavor profiles and patterns from existing dishes, and willing human chefs cooked them. One crowd favorite generated from Watson's mind was a tasty version of fish and chips using ceviche and fried plantains. For lunch at the IBM labs in Yorktown Heights I slurped down that one and another tasty Watson invention: Swiss/Thai asparagus quiche. Not bad! It's unlikely that either one would ever have occurred to humans.

"Nonhuman intelligence is not a bug, it's a feature. The chief virtue of AIs will be their alien intelligence. An AI will think about food differently than any chef, allowing us to think about food differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science and art. The alienness of artificial intelligence will become more valuable to us than its speed or power. . . .


Google Develops A Neural Image Caption Generator to Translate Images into Words November 17, 2014

Having previously transformed the machine translation process by developing algorithms based on vector space mathematics, in November 2014 Oriol Vinyals and colleagues at Google in Mountain View developed a neural image caption generator to translate images into words. Google's machine translation approach is:

"essentially to count how often words appear next to, or close to, other words and then define them in an abstract vector space in relation to each other. This allows every word to be represented by a vector in this space and sentences to be represented by combinations of vectors.

"Google goes on to make an important assumption. This is that specific words have the same relationship to each other regardless of the language. For example, the vector “king - man + woman = queen” should hold true in all languages. . . .

"Now Oriol Vinyals and pals at Google are using a similar approach to translate images into words. Their technique is to use a neural network to study a dataset of 100,000 images and their captions and so learn how to classify the content of images.

"But instead of producing a set of words that describe the image, their algorithm produces a vector that represents the relationship between the words. This vector can then be plugged into Google’s existing translation algorithm to produce a caption in English, or indeed in any other language. In effect, Google’s machine learning approach has learnt to “translate” images into words.

"To test the efficacy of this approach, they used human evaluators recruited from Amazon’s Mechanical Turk to rate captions generated automatically in this way along with those generated by other automated approaches and by humans.

"The results show that the new system, which Google calls Neural Image Caption, fares well. Using a well known dataset of images called PASCAL, Neural image Capture clearly outperformed other automated approaches. “NIC yielded a BLEU score of 59, to be compared to the current state-of-the-art of 25, while human performance reaches 69,” says Vinyals and co" (http://www.technologyreview.com/view/532886/how-google-translates-pictures-into-words-using-vector-space-mathematics/, accessed 01-14-2015).

Vinyals et al, "Show and Tell: A Neural Image Caption Generator" (2014) http://arxiv.org/pdf/1411.4555v1.pdf

"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In thispaper we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used
to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify
both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU score improvements on Flickr30k, from 55 to 66, and on SBU, from 19 to 27" (Abstract).


Skype Previews Skype Translator Real Time Translation Program December 15, 2014

On December 15, 2014 Skype, a division of Microsoft, announced the first phase of the Skype Translator preview program. In the first phase the program was available only for English and Spanish. The technology, the result of 15 years of work at Microsoft Research in deep learning, transfer learning, speech recognition, machine translation, and speech synthesis, was demonstrated by Microsoft at the Code Conference, and videos of the demonstration were posted on its blog on May 27, 2014.


A Computer Masters Heads-up Limit Texas Hold 'em Poker January 8, 2015

A breakthrough in artificial intelligence published in January 2015 allowed a computer to master the simplest two-person version of the poker game known as Texas Hold 'em, working through every possible variation of play to make the perfect move every time. Played without mistakes, the game, like the childhood game tic-tac-toe, can never be lost. In this case the player is Cepheus, an algorithm designed by Canadian researchers.

“We have a strategy that can guarantee a player won’t lose,” said Michael Bowling, a computer scientist from the University of Alberta, who led a team working on the program. “It’s going to be a break-even game. It’s only when someone makes a mistake that they could end up losing.”

Michael Bowling, Neil Burch, Michael Johanson, Oskari Tammelin, "Heads-up limit hold'em poker is solved," Science 347, no. 6218 (2015) 145-149

"Poker is a family of games that exhibit imperfect information, where players do not have full knowledge of past events. Whereas many perfect-information games have been solved (e.g., Connect Four and checkers), no nontrivial imperfect-information game played competitively by humans has previously been solved. Here, we announce that heads-up limit Texas hold’em is now essentially weakly solved. Furthermore, this computation formally proves the common wisdom that the dealer in the game holds a substantial advantage. This result was enabled by a new algorithm, CFR, which is capable of solving extensive-form games orders of magnitude larger than previously possible" (Abstract).

See also: http://news.sciencemag.org/math/2015/01/texas-hold-em-poker-solved-computer, accessed 01-14-2015.


A Machine Vision Algorithm Learns to Attribute Paintings to Specific Artists May 2015

In May 2015 Babak Saleh and Ahmed Elgammal of the Department of Computer Science, Rutgers University, described an algorithm that could recognize the style, genre, and artist of a painting.

"Saleh and Elgammal begin with a database of images of more than 80,000 paintings by more than a 1,000 artists spanning 15 centuries. These paintings cover 27 different styles, each with more than 1,500 examples. The researchers also classify the works by genre, such as interior, cityscape, landscape, and so on.

"They then take a subset of the images and use them to train various kinds of state-of-the-art machine-learning algorithms to pick out certain features. These include general, low-level features such as the overall color, as well as more advanced features that describe the objects in the image, such as a horse and a cross. The end result is a vector-like description of each painting that contains 400 different dimensions.

"The researchers then test the algorithm on a set of paintings it has not yet seen. And the results are impressive. Their new approach can accurately identify the artist in over 60 percent of the paintings it sees and identify the style in 45 percent of them.

"But crucially, the machine-learning approach provides an insight into the nature of fine art that is otherwise hard even for humans to develop. This comes from analyzing the paintings that the algorithm finds difficult to classify.

"For example, Saleh and Elgammal say their new approach finds it hard to distinguish between works painted by Camille Pissarro and Claude Monet. But a little research on these artists quickly reveals both were active in France in the late 19th and early 20th centuries and that both attended the Académie Suisse in Paris. An expert might also know that Pissarro and Monet were good friends and shared many experiences that informed their art. So the fact that their work is similar is no surprise.

"As another example, the new approach confuses works by Claude Monet and the American impressionist Childe Hassam, who, it turns out, was strongly influenced by the French impressionists and Monet in particular.  These are links that might take a human some time to discover" (MIT Technology Review May 11, 2015).

Saleh, Babak, and Elgammal, Ahmed, "Large-scale Classification of Fine-Art Paintings: Learning the Right Metric on the Right Feature" (http://arxiv.org/pdf/1505.00855v1.pdf, 5 May 2015).
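Once each painting has been reduced to a feature vector (400 dimensions in the paper; 3 here for brevity), attribution becomes a distance problem in that space. The sketch below uses a simple nearest-centroid rule, which is far cruder than the metric learning the paper describes; all vectors are invented for illustration, with "Monet" and "Pissarro" placed deliberately close to mirror the confusion noted above.

```python
import math

# Hypothetical training feature vectors, invented for illustration.
training = {
    "Monet":    [[0.8, 0.6, 0.1], [0.7, 0.7, 0.2]],
    "Pissarro": [[0.75, 0.65, 0.15], [0.7, 0.6, 0.1]],
    "Vermeer":  [[0.1, 0.2, 0.9], [0.2, 0.1, 0.8]],
}

def centroid(vectors):
    """Mean feature vector of an artist's training paintings."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def attribute(painting, training):
    """Assign a painting to the artist with the nearest centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    centroids = {artist: centroid(vs) for artist, vs in training.items()}
    return min(centroids, key=lambda artist: dist(painting, centroids[artist]))
```

A painting near the Monet/Pissarro cluster is attributed with little margin, which is the algorithmic analogue of the art-historical ambiguity the researchers observed.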


Based on a Single Example, A. I. Surpasses Human Capabilities in Reading and Copying Written Characters December 11, 2015

On December 11, 2015 Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum, from MIT and New York University, reported advances in artificial intelligence that surpassed human capabilities in reading and copying written characters. The key advance was that the algorithm outperformed humans in identifying written characters based on a single example. Until this time, machine learning algorithms typically required tens or hundreds of examples to perform with similar accuracy.

Lake, Salakhutdinov, Tenenbaum, "Human-level concept learning through probabilistic program induction", Science, 11 December 2015, 350, no. 6266, 1332-1338. On December 14, 2015 the entire text of this extraordinary paper was freely available online. I quote the first 3 paragraphs:

"Despite remarkable advances in artificial intelligence and machine learning, two aspects of human conceptual knowledge have eluded machine systems. First, for most interesting kinds of natural and man-made categories, people can learn a new concept from just one or a handful of examples, whereas standard algorithms in machine learning require tens or hundreds of examples to perform similarly. For instance, people may only need to see one example of a novel two-wheeled vehicle (Fig. 1A) in order to grasp the boundaries of the new concept, and even children can make meaningful generalizations via “one-shot learning” (13). In contrast, many of the leading approaches in machine learning are also the most data-hungry, especially “deep learning” models that have achieved new levels of performance on object and speech recognition benchmarks (49). Second, people learn richer representations than machines do, even for simple concepts (Fig. 1B), using them for a wider range of functions, including (Fig. 1, ii) creating new exemplars (10), (Fig. 1, iii) parsing objects into parts and relations (11), and (Fig. 1, iv) creating new abstract categories of objects based on existing categories (1213). In contrast, the best machine classifiers do not perform these additional functions, which are rarely studied and usually require specialized algorithms. A central challenge is to explain these two aspects of human-level concept learning: How do people learn new concepts from just one or a few examples? And how do people learn such abstract, rich, and flexible representations? An even greater challenge arises when putting them together: How can learning succeed from such sparse data yet also produce such rich representations? For any theory of learning (41416), fitting a more complicated model requires more data, not less, in order to achieve some measure of good generalization, usually the difference in performance between new and old examples. 
Nonetheless, people seem to navigate this trade-off with remarkable agility, learning rich concepts that generalize well from sparse data.

"This paper introduces the Bayesian program learning (BPL) framework, capable of learning a large class of visual concepts from just a single example and generalizing in ways that are mostly indistinguishable from people. Concepts are represented as simple probabilistic programs— that is, probabilistic generative models expressed as structured procedures in an abstract description language (1718). Our framework brings together three key ideas—compositionality, causality, and learning to learn—that have been separately influential in cognitive science and machine learning over the past several decades (1922). As programs, rich concepts can be built “compositionally” from simpler primitives. Their probabilistic semantics handle noise and support creative generalizations in a procedural form that (unlike other probabilistic models) naturally captures the abstract “causal” structure of the real-world processes that produce examples of a category. Learning proceeds by constructing programs that best explain the observations under a Bayesian criterion, and the model “learns to learn” (2324) by developing hierarchical priors that allow previous experience with related concepts to ease learning of new concepts (2526). These priors represent a learned inductive bias (27) that abstracts the key regularities and dimensions of variation holding across both types of concepts and across instances (or tokens) of a concept in a given domain. In short, BPL can construct new programs by reusing the pieces of existing ones, capturing the causal and compositional properties of real-world generative processes operating on multiple scales.

I"n addition to developing the approach sketched above, we directly compared people, BPL, and other computational approaches on a set of five challenging concept learning tasks (Fig. 1B). The tasks use simple visual concepts from Omniglot, a data set we collected of multiple examples of 1623 handwritten characters from 50 writing systems (Fig. 2)(see acknowledgments). Both images and pen strokes were collected (see below) as detailed in section S1 of the online supplementary materials. Handwritten characters are well suited for comparing human and machine learning on a relatively even footing: They are both cognitively natural and often used as a benchmark for comparing learning algorithms. Whereas machine learning algorithms are typically evaluated after hundreds or thousands of training examples per class (5), we evaluated the tasks of classification, parsing (Fig. 1B, iii), and generation (Fig. 1B, ii) of new examples in their most challenging form: after just one example of a new concept. We also investigated more creative tasks that asked people and computational models to generate new concepts (Fig. 1B, iv). BPL was compared with three deep learning models, a classic pattern recognition algorithm, and various lesioned versions of the model—a breadth of comparisons that serve to isolate the role of each modeling ingredient (see section S4 for descriptions of alternative models). We compare with two varieties of deep convolutional networks (28), representative of the current leading approaches to object recognition (7), and a hierarchical deep (HD) model (29), a probabilistic model needed for our more generative tasks and specialized for one-shot learning."



2016 – Present

DeepMind's AI Algorithm Masters the Ancient Game of Go January 27, 2016

On January 27, 2016 the artificial intelligence company DeepMind, a division of Google based in London, announced that its AI program AlphaGo had mastered the ancient Chinese game of Go.

"Traditional AI methods—which construct a search tree over all possible positions—don’t have a chance in Go. So when we set out to crack Go, we took a different approach. We built a system, AlphaGo, that combines an advanced tree search with deep neural networks. These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections. One neural network, the “policy network,” selects the next move to play. The other neural network, the “value network,” predicts the winner of the game.

"We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning. Of course, all of this requires a huge amount of computing power, so we made extensive use of Google Cloud Platform.

"After all that training it was time to put AlphaGo to the test. First, we held a tournament between AlphaGo and the other top programs at the forefront of computer Go. AlphaGo won all but one of its 500 games against these programs. So the next step was to invite the reigning three-time European Go champion Fan Hui—an elite professional player who has devoted his life to Go since the age of 12—to our London office for a challenge match. In a closed-doors match last October, AlphaGo won by 5 games to 0. It was the first time a computer program has ever beaten a professional Go player. You can find out more in our paper, which was published in Nature today....

"We are thrilled to have mastered Go and thus achieved one of the grand challenges of AI. However, the most significant aspect of all this for us is that AlphaGo isn’t just an“expert” system built with hand-crafted rules; instead it uses general machine learning techniques to figure out for itself how to win at Go. While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems. Because the methods we’ve used are general-purpose, our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from climate modelling to complex disease analysis. We’re excited to see what we can use this technology to tackle next!. Posted by Demis Hassabis, Google DeepMind" (https://googleblog.blogspot.com/2016/01/alphago-machine-learning-game-go.html, accessed 02-09-2016).


What Google's DeepMind Learned in Seoul with AlphaGo March 16, 2016

Go isn’t just a game—it’s a living, breathing culture of players, analysts, fans, and legends. Over the last 10 days in Seoul, South Korea, we’ve been lucky enough to witness some of that incredible excitement firsthand. We've also had the chance to see something that's never happened before: DeepMind's AlphaGo took on and defeated legendary Go player, Lee Sedol (9-dan professional with 18 world titles), marking a major milestone for artificial intelligence.
Pedestrians checking in on the AlphaGo vs. Lee Sedol Go match on the streets of Seoul (March 13)

Go may be one of the oldest games in existence, but the attention to our five-game tournament exceeded even our wildest imaginations. Searches for Go rules and Go boards spiked in the U.S. In China, tens of millions watched live streams of the matches, and the “Man vs. Machine Go Showdown” hashtag saw 200 million pageviews on Sina Weibo. Sales of Go boards even surged in Korea.

Our public test of AlphaGo, however, was about more than winning at Go. We founded DeepMind in 2010 to create general-purpose artificial intelligence (AI) that can learn on its own—and, eventually, be used as a tool to help society solve some of its biggest and most pressing problems, from climate change to disease diagnosis.

Like many researchers before us, we've been developing and testing our algorithms through games. We first revealed AlphaGo in January—the first AI program that could beat a professional player at the most complex board game mankind has devised, using deep learning and reinforcement learning. The ultimate challenge was for AlphaGo to take on the best Go player of the past decade—Lee Sedol.

To everyone's surprise, including ours, AlphaGo won four of the five games. Commentators noted that AlphaGo played many unprecedented, creative, and even “beautiful” moves. Based on our data, AlphaGo’s bold move 37 in Game 2 had a 1 in 10,000 chance of being played by a human. Lee countered with innovative moves of his own, such as his move 78 against AlphaGo in Game 4—again, a 1 in 10,000 chance of being played—which ultimately resulted in a win.

The final score was 4-1. We're contributing the $1 million in prize money to organizations that support science, technology, engineering and math (STEM) education and Go, as well as UNICEF.

We’ve learned two important things from this experience. First, this test bodes well for AI’s potential in solving other problems. AlphaGo has the ability to look “globally” across a board—and find solutions that humans either have been trained not to play or would not consider. This has huge potential for using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas. Second, while the match has been widely billed as "man vs. machine," AlphaGo is really a human achievement. Lee Sedol and the AlphaGo team both pushed each other toward new ideas, opportunities and solutions—and in the long run that's something we all stand to benefit from.

But as they say about Go in Korean: “Don’t be arrogant when you win or you’ll lose your luck.” This is just one small, albeit significant, step along the way to making machines smart. We’ve demonstrated that our cutting-edge deep reinforcement learning techniques can be used to make strong Go and Atari players. Deep neural networks are already used at Google for specific tasks—like image recognition, speech recognition, and Search ranking. However, we’re still a long way from a machine that can learn to flexibly perform the full range of intellectual tasks a human can—the hallmark of true artificial general intelligence.
Demis and Lee Sedol hold up the signed Go board from the Google DeepMind Challenge Match

With this tournament, we wanted to test the limits of AlphaGo. The genius of Lee Sedol did that brilliantly—and we’ll spend the next few weeks studying the games he and AlphaGo played in detail. And because the machine learning methods we’ve used in AlphaGo are general purpose, we hope to apply some of these techniques to other challenges in the future. Game on!
