4406 entries. 94 themes. Last updated December 26, 2016.

Human-Computer Interaction Timeline


1850 – 1875

Samuel Butler Novel "Erewhon" Describes Artificial Consciousness 1872

In 1872 Erewhon: or, Over the Range, a satirical utopian novel by the English writer Samuel Butler, was published anonymously in London. A notable aspect of this satire on Victorian society, expanded from letters that Butler originally published in the New Zealand newspaper The Press, was that Erewhonians believed that machines were potentially dangerous, and that Erewhonian society had undergone a revolution that destroyed most mechanical inventions. In the section called "The Book of the Machines" Butler appears to have imagined the possibility of machine consciousness, or artificial consciousness, and the possibility that machines could replicate themselves.

1900 – 1910

Revealing a Hidden Image in a 1901 Painting by Picasso in a 2012 Newspaper Article 1901 – October 24, 2012

Since 1989 conservators and art historians have known that hidden beneath the surface of Picasso's “Woman Ironing”  preserved in the Solomon R. Guggenheim Museum, New York, is the upside-down ghost of another painting — a three-quarter-length portrait of a man with a mustache. The hidden image was first seen in photographs of this painting from Picasso's Blue Period (1901-1904) taken with an infrared camera in 1989.  

On October 24, 2012 The New York Times published an article by Carol Vogel on this painting and the painting hidden underneath, entitled "Under One Picasso, Another." From the standpoint of this database on the history of media, what I find most interesting is the "interactive feature" published in association with the article, entitled "Scratching the Surface, Two Picassos Revealed."

A very clever imaging program in the interactive feature invited the reader to "click and drag your mouse over the painting to see what was hidden beneath it." As I wiped the top image of the painting off with mouse strokes the painting underneath was revealed.  I could also rotate the image and reset it back to the top layer.

1920 – 1930

Introduction of the Word "Robot" 1920

In 1920 Czech novelist, playwright, journalist and translator Karel Čapek published R. U. R. (Rossum’s Universal Robots) in Prague. This play, written in Czech except for the title, introduced the word “robot” and explored the issue of whether worker-machines would replace people.

1940 – 1950

Key Developments in Jay W. Forrester's Project Whirlwind 1943

In 1943 Project Whirlwind began as an analog flight simulator project at MIT. About November 1945 the project switched from analog to digital electronics. Formal design of the machine began in 1947.

By 1950 Project Whirlwind was in limited operation at MIT as a general purpose computer. It was the first computer that operated in real time, with the first video display for output, and it was the first computer that was not just an electronic replacement of older mechanical systems. On April 20, 1951 Whirlwind officially began operation at MIT. Whirlwind I included the first primitive graphical display on its vectorscope screen.

In 1952 three-dimensional magnetic-core memory replaced electrostatic memory on the Whirlwind I, leading to increased performance and reliability. 

In 1954 programmers J. H. Laning and Neil Zierler developed an algebraic compiler for the Whirlwind I—the first high-level algebraic language for a computer.

1950 – 1960

The First OCR System: "GISMO" 1951

In 1951 American inventor David Hammond Shepard, a cryptanalyst at AFSA, the forerunner of the U.S. National Security Agency (NSA), built "Gismo" in his spare time.

Gismo was a machine to convert printed messages into machine language for processing by computer—the first optical character recognition (OCR) system.

"IBM licensed the [OCR] machine, but never put it into production. Shepard designed the Farrington B numeric font now used on most credit cards. Recognition was more reliable on a simple and open font, to avoid the effects of smearing at gasoline station pumps. Reading credit cards was the first major industry use of OCR, although today the information is read magnetically from the back of the cards.

"In 1962 Shepard founded Cognitronics Corporation. In 1964 his patented 'Conversation Machine' was the first to provide telephone Interactive voice response access to computer stored data using speech recognition. The first words recognized were 'yes' and 'no' " (Wikipedia article on David H. Shepard, accessed 02-29-2012).

The First Graphical Computer Game 1952

In 1952 A. S. Douglas wrote Noughts and Crosses, the first graphical computer game, on the cathode ray tube (CRT) screen of the EDSAC at Cambridge University.

The First Trackball 1952 – 1953

In 1952 British electrical engineer Kenyon Taylor and team, working on the Royal Canadian Navy's DATAR project (a pioneering computerized battlefield information system) invented the first trackball, a precursor of the computer mouse. It used a standard Canadian five-pin bowling ball. The DATAR system was first successfully tested on Lake Ontario in autumn 1953.

Applying Computer Methods to Library Cataloguing and Research June 24 – June 27, 1952

At a meeting of the Medical Library Association that took place from June 24-27, 1952, physician and librarian Sanford Larkey reported on progress in the Welch Medical Library Indexing Project, which had begun in 1949. This project was probably the earliest attempt to apply punched-card tabulating to library cataloguing and information retrieval.

"The goal of the project, of which I was a member until its termination in 1953, was to develop computer-derived indexes to the scientific and medical literature. This mechanization of bibliographic information involved the use of IBM tabulating equipment designed for statistical analysis. The Welch project used standard punched-card machines to prepare subject-heading lists for the Armed Forces Medical Library, the precursor to the National Library of Medicine (E. Garfield, "The preparation of subject-heading lists by automatic punched-card techniques," Journal of Documentation, 10:1-10, 1954)" (Garfield, "Tribute to Calvin N. Mooers, A Pioneer of Information Retrieval," The Scientist, Vol. 11, No. 6 [March 17, 1997] 9).

In Larkey's 1952 report there is a very interesting section which he called the "Psychology of Machines", which I quote:

"I think I should say something about 'machines' themselves at this point. Since we are using machines in all the major phases of our work, I should like to describe the machines we are using and just how we are using them. I will discuss the present status of each phase of our work primarily on the basis of the machine operations involved. Another reason for this approach is that we have found, in discussing our program with others, that our use of machines seems either to interest or worry people more than any other feature.

"This brings me to what might be called the 'psychology of machines.' The very word 'machines' seems to do things to people. We hear talk of 'electronic robots,' as though they were some sort of 'men from Mars' who could take over all intellectual activities by merely pushing buttons. This sort of talk leads to excessive hopes or to inordinate fears and precludes objective thinking about the possible uses of machines. One should consider machines as practical adjuncts, as we do typewriters, 3 x 5 cards, and visible indexes. Machines are only doing very rapidly what one could do with his own eyes and brain if he had all the time in the world to do it and wanted to do it. There is no magic about it.

"There is, however, a more valid psychological aspect to machines. Since machines operate on a strict yes-or-no principle, we must be rigidly exact in presenting a problem. Each step must be in the most precise logical form, since one rarely can stop to correct as one goes along. Each step must be gone over and over in relation to every other one. One has to think not once, but many times. Programming often takes almost as long as the machine operation itself, but the end result is still reached much more quickly than by manual operations.

"These strict limitations of machines have been very useful to us. They not only have tightened up our own thinking processes, but their application has emphasized many semantic inconsistencies in our terminology and classifications. So, perhaps there may be a good psychological side to machines." (pp. 33-34).

Probably the First Computer-Controlled Aesthetic System 1953 – 1957

Between 1953 and 1957 English cybernetician and psychologist Gordon Pask, in collaboration with Robin McKinnon-Wood, created Musicolour, a reactive system for theatre productions, or a computer-controlled aesthetic system, that "drove an array of lights that adapted to a musician's performance" (Mason, A Computer in the Art Room: The Origins of British Computer Arts 1950-1980 [2008] 6). This was one of the earliest examples of "computer art." The system's analog computer was transported from performance to performance.

Pask discussed and explained Musicolour in A comment, a case history and a plan (1968), written before the Cybernetic Serendipity exhibition (1968) in which Musicolour was demonstrated. However, the text was not published in the catalogue of that exhibition. It was first published in Reichardt ed., Cybernetics: Art and Ideas (1971) 76-99.

Pickering, The Cybernetic Brain. Sketches of Another Future (2010) 313-324.

(This entry was last revised on 08-14-2014.)

The First Light Pen 1954 – 1963

In 1954 development began for NORAD on the SAGE Air Defense System, using a computer built by IBM after a design based on the Whirlwind. The system included the first light pen.

The full SAGE (Semi-Automatic Ground Environment) automated control system for tracking and intercepting enemy bomber aircraft was completed by 1963.

Intelligence Amplification by Machines 1956

In 1956 English psychiatrist and cybernetician W[illiam] Ross Ashby wrote of intelligence amplification by machines in his book, An Introduction to Cybernetics.

SAGE: Physically the Largest Computers Ever Built 1957

In 1957 the first SAGE (Semi-Automatic Ground Environment) AN/FSQ-7 (DC-01) computer was operational on a limited basis for the SAGE Air Defense System at McGuire Air Force Base in Burlington County, New Jersey. Fifty-two AN/FSQ-7s would eventually be built. Each AN/FSQ-7 contained 55,000 vacuum tubes, occupied 0.5 acres (2,000 m²) of floor space, weighed 275 tons, and used up to three megawatts of power. Performance was about 75,000 instructions per second. From the standpoint of physical dimensions, the fifty-two AN/FSQ-7s remain the largest computers ever built.

"Although the machines used a large number of vacuum tubes, the failure rate of an individual tube was low due to efforts in quality control and a novel quality assurance system called marginal checking that discovered tubes that were growing weak, before they failed. Each SAGE site included two computers for redundancy, with one processor on "hot standby" at all times. In spite of the poor reliability of the tubes, this dual-processor design made for remarkably high overall system uptime. 99% availability was not unusual."

The system allowed online access, in graphical form, to data transmitted to and processed by its computers. Fully deployed by 1963, the IBM-built early warning system remained operational until 1984. With 23 direction centers situated on the northern, eastern, and western boundaries of the United States, SAGE pioneered the use of computer control over large, geographically distributed systems.

"Both MIT and IBM supported the project as contractors. IBM's role in SAGE (the design and manufacture of the AN/FSQ-7 computer, a vacuum tube computer with ferrite core memory based on the never-built Whirlwind II) was an important factor leading to IBM's domination of the computer industry, accounting for more than a half billion dollars in revenue, nearly 10% of IBM's income in the late 1950s" (Wikipedia article on Semi-Automatic Ground Environment, accessed 03-03-2012).

The TX-2 Computer for the Study of Human-Computer Interaction 1959

In 1959 Wesley A. Clark designed and built the TX-2 computer at MIT’s Lincoln Laboratory in Lexington, Massachusetts. It had 320 kilobytes of fast memory, about twice the capacity of the biggest commercial machines. Other features were magnetic tape storage, an online typewriter, the first Xerox printer, paper tape for program input, and a nine-inch CRT screen. Among its applications were the development of interactive graphics and research on human-computer interaction.

Human Versus Machine Intelligence and Communication 1959

"Somewhat the same problem arises in communicating with a machine entity that would arise in communicating with a person of an entirely different language background than your own. A system of logical definition and translation would have to be available. In order that meanings should not be lost, such a system of translation would also need to be precise. We are all familiar with the unhappy results of language translations which are either lacking in precision or where suitable words of equivalent meaning cannot be found. Likewise, translating into a machine language cannot be anything but an exact operation. Machines even more than people must be addressed with clarity and unambiguity, for machines cannot improvise on their own or imagine that about which they have not been specifically informed, as a human might do within reasonable limits of error. . . .

"We must now ascertain how concepts are formulated within the framework of computer language. For analogy, let us first consider the manner in which instructions are usually given to a non-mechanical entity. When we instruct, for example, a human being, we are aided by the fact that the human is usually able to fill in gaps in our instructions through acumen acquired from his own past experiences. It is seldom necessary that instructions be either detailed or literal, although we may have lost sight of this fact.

"The computer in a correlate example is a mechanical 'being' which must be instructed at each and every step. But it can be given a very long list of instructions upon which it can be expected to subsequently act with great speed and accuracy and with untiring repetition. Machine traits are: low comprehension, high retention, extreme reliability, and tremendous speed. The use of superlatives here to describe these traits is not exaggerative. Since speed becomes in practice the equivalent of number, the machine might be, and has sometimes been, equated to legions — an army, if you will — of low-grade morons whose conceptualization is entirely literal, who remember as long as is necessary or as you desire them to, whose loyalty and subservience is complete, who require no holidays, no spurious incentives, no morale programs, pensions, not even gratitude for past service, and who seemingly never tire of doing elementary repetitive tasks such as typing, accounting, bookkeeping, arithmetic, filling in forms, and the like. In about all these respects the machine may be seen to be the exact opposite of nature's loftiest creature, the intelligent human being, who becomes bored with the petty and repetitious, who is unreliable, who wanders from the task for the most trivial reasons, who gets out of humor, who forgets, who requires constant incentives and rewards, who improvises on his own even when to do so is impertinent to the objectives being undertaken, and who in summary (let's face it) is unsuitable to most forms of industry as the latter are ideally and practically conceived in our times. It becomes apparent in retrospect that the only excuse we might ever have had for employing him to do many of civilization's more literal and repetitious tasks was the absence of something more efficient with which to replace him!

"It is not the purpose of this volume to explore further the ramifications of the above statements of fact. . . ." (Nett & Hetzler, An Introduction to Electronic Data Processing [1959] 86-88).

Highlights of the Digital Equipment Corporation PDP Series of Minicomputers December 1959 – 1975

In December 1959, at the Eastern Joint Computer Conference in Boston, Digital Equipment Corporation (DEC) of Maynard, Massachusetts, demonstrated the prototype of its first computer, the PDP-1 (Programmed Data Processor-1), designed by a team headed by Ben Gurley.

"The launch of the PDP-1 (Programmed Data Processor-1) computer in 1959 marked a radical shift in the philosophy of computer design: it was the first commercial computer that focused on interaction with the user rather than the efficient use of computer cycles" (http://www.computerhistory.org/collections/decpdp-1/, accessed 06-25-2009).

Selling for $120,000, the PDP-1 was a commercialization of the TX-0 and TX-2 computers designed at MIT’s Lincoln Laboratory. On advice from the venture-capital firm that financed the company, DEC did not call it a “computer,” but instead called the machine a “programmed data processor.” The PDP-1 has been credited as the machine most important in the creation of hacker culture.

In 1963 DEC introduced the PDP-5, its first 12-bit computer. The PDP-5 was later called “the world’s first commercially produced minicomputer.” However, the PDP-8, introduced in 1965, was also given this designation.

Two years later, in 1965, DEC introduced the PDP-8, the first “production model minicomputer”: “small in physical size, selling in minimum configuration for under $20,000.”

In 1970 DEC introduced the PDP-11 minicomputer, which popularized the notion of a “bus” (the “Unibus”) onto which a variety of additional circuit boards or peripheral products could be placed. DEC sold 20,000 PDP-11s by 1975.

1960 – 1970

PLATO 1: The First Electronic Learning System 1960

In 1960 PLATO I (Programmed Logic for Automatic Teaching Operations), the first electronic learning system, developed by Donald Bitzer, operated on the ILLIAC I at the University of Illinois at Urbana-Champaign. PLATO I included a television for its display and a special keyset for navigating the system's menus, and it served a single user. In 1961 PLATO II allowed two students to operate the system at one time.

Licklider Describes "Man-Computer Symbiosis" March 1960

In March 1960 computer scientist J. C. R. Licklider of Bolt Beranek and Newman published "Man-Computer Symbiosis," IRE Transactions on Human Factors in Electronics, volume HFE-1 (March 1960) 4-11, postulating that the computer should become an intimate symbiotic partner in human activity, including communication. (See Reading 10.5.)

Licklider & Clark Publish "Online Man-Computer Communication" Circa June 1962

About June 1962 J.C.R. Licklider of Bolt Beranek and Newman and Welden E. Clark published “Online Man-Computer Communication,” calling for time-sharing of computers, graphic display of information, and an improved graphical interface. (See Reading 10.6.)

Douglas Engelbart Issues "Augmenting Human Intellect: A Conceptual Framework" October 1962

In October 1962 Douglas Engelbart of the Stanford Research Institute, Menlo Park, California, completed his report, Augmenting Human Intellect: A Conceptual Framework, for the Director of Information Sciences, Air Force Office of Scientific Research. This report led J. C. R. Licklider of DARPA to fund SRI's Augmentation Research Center.

"The potential contributions of computers depend upon their use by very human human beings." November 1962

In November 1962 electrical engineer David L. Johnson and clinical-social psychologist Arthur L. Kobler, both at the University of Washington, Seattle, published "The Man-Computer Relationship. The potential contributions of computers crucially depend upon their use by very human human beings," Science 138 (1962) 873-79. The introductory and concluding sections of the paper are quoted below:

"Recently Norbert Wiener, 13 years after publication of his Cybernetics, took stock of the man-computer relationship [Science 131, 1355 (1960).] He concluded, with genuine concern, that computers may be getting out of hand. In emphasizing the significance of the position of the computer in our world, Wiener comments on the crucial use of computers by the military: 'it is more than likely that the machine may produce a policy which would win a nominal victory on points at the cost of every interest we have at heart, even that of national survival.' 

"Computers are used by man; man must be considered a part of any system in which they are used. Increasingly in our business, scientific, and international life the results of data processing and computer application are, necessarily and properly, touching the individuals of our society significantly. Increasing application of computers is inevitable and requisite for the growth and progress of our society. The purpose of this article is to point out certain cautions which must be observed and certain paths which must be emphasized if the man-computer relationship is to develop to its full positive potential and if Wiener's prediction is to be proved false. In this article on the problem of decision making we set forth several concepts. We have chosen decision making as a suitable area of investigation because we see both man and machine, in all their behavior actions, constantly making decisions. We see the process of decision making as being always the same: within the limits of the field, possibilities exist from which choices are made. Moreover, there are many decisions of great significance being made in which machines are already playing an active part. For example, a military leader recently remarked, 'At the heart of every defense system you will find a computer.' In a recent speech the president of the National Machine Accountants Association stated that 80 to 90 percent of the executive decisions in U.S. industry would soon be made by machines. Such statements indicate a growing trend — a trend which need not be disadvantageous to human beings if they maintain proper perspective. In the interest of making the man-machine relationship optimally productive and satisfactory to the human being, it is necessary to examine the unique capabilities of both man and machine, giving careful attention to the resultant interaction within the mixed system."


"The levels of human knowledge of the environment and the universe are increasing, and it is obviously necessary that man's ability to cope with this knowledge should increase—necessary for his usefulness and for his very survival. The processes of automation have provided a functional agent for this purpose. Successful mechanized solution of routine problems has directed attention toward the capacity of the computer to arrive at apparent or real solutions of routine-learning and special problems. Increasing use of the computer in such problems is clearly necessary if our body of knowledge and information is to serve its ultimate function. Along with such use of the computer, however, will come restrictions and cautions which have not hitherto been necessary. We find that the computer is being given responsibilities with which it is less able to cope than man is. It is being called on to act for man in areas where man cannot define his own ability to perform and where he feels uneasy about his own performance — where he would like a neat, well-structured solution and feels that in adopting the machine's partial solution he is closer to the 'right' than he is in using his own. An aura of respectability surrounds a computer output, and this, together with the time-balance factor, makes unqualified acceptance tempting. The need for caution, then, already exists and will be much greater in the future. It has little to do with the limited ability of the computer per se, much to do with the ability of man to realistically determine when and how he must use the tremendous ability which he has developed in automation. Let us continue to work with learning machines, with definitions of meaning and 'artificial intelligence.' Let us examine these processes as 'games' with expanding values, aiming toward developing improved computer techniques as well as increasing our knowledge of human functions. Until machines can satisfy the requirements discussed, until we can more perfectly determine the functions we require of the machines, let us not call upon mechanized decision systems to act upon human systems without intervening realistic human processing. As we proceed with the inevitable development of computers and means of using them, let us be sure that careful analysis is made of all automation (either routine-direct, routine-learning, or special) that is used in systems of which man is a part — sure that man reflects upon his own reaction to, and use of, mechanization. Let us be certain that, in response to Samuel Butler's question, 'May not man himself become a sort of parasite upon the machines; an affectionate machine-tickling aphid?' we will always be able to answer 'No.' "

Ivan Sutherland Creates the First Graphical User Interface 1963

In 1963 Ivan Sutherland, a student at MIT's Lincoln Laboratory in Lexington, Massachusetts, working on the experimental TX-2 computer, created the first graphical user interface, or first interactive graphics program, in his Ph.D. thesis, Sketchpad: A Man-Machine Graphical Communication System.

Sketchpad was an early application of vector graphics.

Foundation of Engelbart's Augmentation Research Center 1963

As a result of Engelbart's 1962 report, J. C. R. Licklider, the first director of the US Defense Department's Advanced Research Project Agency (DARPA) Information Processing Techniques Office (IPTO), funded Douglas Engelbart's Augmentation Research Center at Stanford Research Institute in early 1963. The first experiments done there included trying to connect a display at SRI to the massive and unique AN/FSQ-32 computer at System Development Corporation in Santa Monica, California.

Licklider Describes the "Intergalactic Computer Network" April 25, 1963

From his office at The Pentagon on April 25, 1963 J.C.R. Licklider, Director of Behavioral Sciences Command & Control Research at ARPA, the U.S. Department of Defense Advanced Research Projects Agency, sent a memo to members and affiliates of what he jokingly called the "Intergalactic Computer Network," "outlining a key part of his strategy to connect all their individual computers and time-sharing systems into a single computer network spanning the continent" (Waldrop).

Machine Perception of Three Dimensional Solids May 1963 – 1965

In May 1963 computer scientist Lawrence G. Roberts published Machine Perception of Three Dimensional Solids, MIT Lincoln Laboratory Report, TR 315, May 1963. This contained "the first algorithm to eliminate hidden or obscured surfaces from a perspective picture" (Carlson, A Critical History of Computer Graphics and Animation, accessed 05-30-2009).

In 1965, Roberts implemented a homogeneous coordinate scheme for transformations and perspective, publishing Homogeneous Matrix Representation and Manipulation of N-Dimensional Constructs, MIT MS-1505. Roberts's "solutions to these problems prompted attempts over the next decade to find faster algorithms for generating hidden surfaces" (Carlson, op. cit.).
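The idea behind homogeneous coordinates, which became standard in computer graphics, is that extending a 3D point (x, y, z) to (x, y, z, 1) lets translation and perspective join rotation and scaling as plain 4×4 matrix multiplications, with a final division by the fourth coordinate w yielding the projected image point. A minimal illustrative sketch (the particular matrices and the placement of the eye at z = d are a standard textbook convention, not taken from Roberts's report):

```python
# Homogeneous coordinates: a 3D point (x, y, z) becomes (x, y, z, 1), so that
# translation and perspective are both plain 4x4 matrix multiplications.

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def translate(tx, ty, tz):
    """Translation as a matrix -- impossible with 3x3 matrices alone."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def perspective(d):
    """Projection onto the z = 0 plane, with the eye at (0, 0, d)."""
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 0],
            [0, 0, -1.0 / d, 1]]

def project(point, offset=(0.0, 0.0, 0.0), d=10.0):
    """Translate a 3D point, apply perspective, then divide by w."""
    v = mat_vec(translate(*offset), list(point) + [1.0])
    v = mat_vec(perspective(d), v)
    return (v[0] / v[3], v[1] / v[3])  # the perspective divide

# A point at z = 5, halfway to the eye at z = 10, appears twice as large:
print(project((2.0, 1.0, 5.0)))  # (4.0, 2.0)
```

Because every transformation is a matrix, a whole chain of transforms can be pre-multiplied into a single 4×4 matrix, which is what makes the representation so convenient for manipulating perspective pictures of solids.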

Touch-Tone Dialing is Introduced November 1963

In November 1963 touch-tone telephone dialing, developed at Bell Labs, was introduced, enabling calls to be switched digitally. The research leading to the design of the touch-tone keyboard was conducted by industrial psychologist John E. Karlin, head of Bell Labs’ Human Factors Engineering department, the first department of its kind at any American company.

"The rectangular design of the keypad, the shape of its buttons and the position of the numbers — with '1-2-3' on the top row instead of the bottom, as on a calculator — all sprang from empirical research conducted or overseen by Mr. Karlin.

"The legacy of that research now extends far beyond the telephone: the keypad design Mr. Karlin shepherded into being has become the international standard on objects as diverse as A.T.M.’s, gas pumps, door locks, vending machines and medical equipment" (http://www.nytimes.com/2013/02/09/business/john-e-karlin-who-led-the-way-to-all-digit-dialing-dies-at-94.html, accessed 02-10-2013).

Bitzer & Willson Invent the First Plasma Video Display (Neon Orange) 1964

In 1964 Donald Bitzer, H. Gene Slottow, and Robert Willson at the University of Illinois at Urbana-Champaign invented the first plasma video display for the PLATO Computer System.

The display was monochrome neon orange and incorporated both memory and bitmapped graphics. Built by the Owens-Illinois glass company, the flat panels were marketed under the name "Digivue."

Woodrow Bledsoe Originates Automated Facial Recognition 1964 – 1966

From 1964 to 1966 Woodrow W. "Bledsoe, along with Helen Chan and Charles Bisson of Panoramic Research, Palo Alto, California, researched programming computers to recognize human faces (Bledsoe 1966a, 1966b; Bledsoe and Chan 1965). Because the funding was provided by an unnamed intelligence agency, little of the work was published. Given a large database of images—in effect, a book of mug shots—and a photograph, the problem was to select from the database a small set of records such that one of the image records matched the photograph. The success of the program could be measured in terms of the ratio of the answer list to the number of records in the database. Bledsoe (1966a) described the following difficulties:

" 'This recognition problem is made difficult by the great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc. Some other attempts at facial recognition by machine have allowed for little or no variability in these quantities. Yet the method of correlation (or pattern matching) of unprocessed optical data, which is often used by some researchers, is certain to fail in cases where the variability is great. In particular, the correlation is very low between two pictures of the same person with two different head rotations.'

"This project was labeled man-machine because the human extracted the coordinates of a set of features from the photographs, which were then used by the computer for recognition. Using a GRAFACON, or RAND TABLET, the operator would extract the coordinates of features such as the center of pupils, the inside corner of eyes, the outside corner of eyes, point of widow's peak, and so on. From these coordinates, a list of 20 distances, such as width of mouth and width of eyes, pupil to pupil, were computed. These operators could process about 40 pictures an hour. When building the database, the name of the person in the photograph was associated with the list of computed distances and stored in the computer. In the recognition phase, the set of distances was compared with the corresponding distance for each photograph, yielding a distance between the photograph and the database record. The closest records are returned.

"This brief description is an oversimplification that fails in general because it is unlikely that any two pictures would match in head rotation, lean, tilt, and scale (distance from the camera). Thus, each set of distances is normalized to represent the face in a frontal orientation. To accomplish this normalization, the program first tries to determine the tilt, the lean, and the rotation. Then, using these angles, the computer undoes the effect of these transformations on the computed distances. To compute these angles, the computer must know the three-dimensional geometry of the head. Because the actual heads were unavailable, Bledsoe (1964) used a standard head derived from measurements on seven heads.

"After Bledsoe left PRI [Panoramic Research, Inc.] in 1966, this work was continued at the Stanford Research Institute, primarily by Peter Hart. In experiments performed on a database of over 2000 photographs, the computer consistently outperformed humans when presented with the same recognition tasks (Bledsoe 1968). Peter Hart (1996) enthusiastically recalled the project with the exclamation, 'It really worked!' " (Faculty Council, University of Texas at Austin, In Memoriam Woodrow W. Bledsoe, accessed 05-15-2009).

Bledsoe, W. W. 1964. The Model Method in Facial Recognition, Technical Report PRI 15, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W., and Chan, H. 1965. A Man-Machine Facial Recognition System-Some Preliminary Results, Technical Report PRI 19A, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W. 1966a. Man-Machine Facial Recognition: Report on a Large-Scale Experiment, Technical Report PRI 22, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W. 1966b. Some Results on Multicategory Pattern Recognition. Journal of the Association for Computing Machinery 13(2):304-316.

Bledsoe, W. W. 1968. Semiautomatic Facial Recognition, Technical Report SRI Project 6693, Stanford Research Institute, Menlo Park, California.
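The matching step described above—comparing a photograph's list of normalized feature distances against every database record and returning the closest—amounts to a nearest-neighbor search. The sketch below is a paraphrase for illustration only; the names, the Euclidean metric, and the toy data are assumptions, not Bledsoe's actual program:

```python
import math

# Hypothetical illustration (not Bledsoe's code) of the matching step:
# each record pairs a name with a list of normalized inter-feature
# distances; a query photograph is matched by Euclidean distance.

def face_distance(a, b):
    """Euclidean distance between two feature-distance vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_records(query, database, k=3):
    """Return the names of the k records nearest the query photograph."""
    ranked = sorted(database, key=lambda rec: face_distance(query, rec[1]))
    return [name for name, _ in ranked[:k]]

# Toy database: (name, list of distances) — Bledsoe used 20 distances,
# shortened to 3 here.
database = [
    ("smith", [2.0, 1.1, 3.4]),
    ("jones", [2.1, 1.0, 3.5]),
    ("brown", [5.0, 0.4, 1.2]),
]
print(closest_records([2.0, 1.1, 3.4], database, k=2))  # ['smith', 'jones']
```

Returning a short ranked answer list, rather than a single match, mirrors the success measure quoted above: the ratio of the answer list to the size of the database.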

Joseph Weizenbaum Writes ELIZA: A Pioneering Experiment in Artificial Intelligence Programming 1964 – 1966

Between 1964 and 1966 German and American computer scientist Joseph Weizenbaum at MIT wrote the computer program ELIZA. This program, named after the ingenue in George Bernard Shaw's play Pygmalion, was an early example of primitive natural language processing. The program operated by processing users' responses to scripts, the most famous of which was DOCTOR, which was capable of engaging humans in a conversation which bore a striking resemblance to one with an empathic psychologist. Weizenbaum modeled its conversational style after Carl Rogers, who introduced the use of open-ended questions to encourage patients to communicate more effectively with therapists. The program applied pattern matching rules to statements to figure out its replies. Using almost no information about human thought or emotion, DOCTOR sometimes provided a startlingly human-like interaction.

"When the "patient" exceeded the very small knowledge base, DOCTOR might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?" A possible response to "My mother hates me" would be "Who else in your family hates you?" ELIZA was implemented using simple pattern matching techniques, but was taken seriously by several of its users, even after Weizenbaum explained to them how it worked. It was one of the first chatterbots in existence" (Wikipedia article on ELIZA, accessed 06-15-2014).

"Weizenbaum was shocked that his program was taken seriously by many users, who would open their hearts to it. He started to think philosophically about the implications of artificial intelligence and later became one of its leading critics.

"His influential 1976 book Computer Power and Human Reason displays his ambivalence towards computer technology and lays out his case: while Artificial Intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom. Weizenbaum makes the crucial distinction between deciding and choosing. Deciding is a computational activity, something that can ultimately be programmed. Choice, however, is the product of judgment, not calculation. It is the capacity to choose that ultimately makes us human. Comprehensive human judgment is able to include non-mathematical factors, such as emotions. Judgment can compare apples and oranges, and can do so without quantifying each fruit type and then reductively quantifying each to factors necessary for comparison" (Wikipedia article on Joseph Weizenbaum, accessed 06-15-2014).

The Rand Tablet: One of the Earliest Tablet Computers and the First Reference to Electronic Ink August 1964

In August 1964 M. R. Davis and T. O. Ellis of The Rand Corporation, Santa Monica, California, published The RAND Tablet: A Machine Graphical Communication Device. They indicated that the device had been in use since 1963.

"The RAND table is believed to be the first such graphic device that is digital, is relatively low-cost, possesses excellent linearity, and is able to uniquely describe 10 [to the 6th power] locations in the 10" x 10" active table area. . . . the tablet has great potential no only in such applications as digitizing map information, but also as a working tool in the study of more esoteric applications of graphical languages for man-machine interaction. . . . " (p.iv)

"The RAND tablet device generates 10-bit x and 10-bit y stylus position information. It is connected to an input channel of a general-purpose computer and also to an oscilloscope display. The display control multiplexes the stylus position information with computer-generated information in such a way that the oscilloscope display contains a composite of the current pen position (represented as a dot) and the computer output. In addition, the computer may regenerate meaningful track history on the CRT, so that while the user is writing, it appears that the pen has "ink." This displayed "ink" is visualized from the oscilloscope display while hand-directing the stylus position on the tablet. users normally adjust within a few minutes to the conceptual superposition of the displayed ink and the actual off-screen pen movement. There is no apparent loss of ease or speed in writing, printing, constructing arbitrary figures, or even in penning one's signature" (pp. 2-3).

(J. W. Ward, History of Pen Computing: Annotated Bibliography in On-line Character Recognition and Pen Computing: http://rwservices.no-ip.info:81/pens/biblio70.html#DavisMR64, accessed 12-30-2009.)
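The 10-bit x and 10-bit y position words quoted above account for the tablet's 1024 x 1024 (roughly 10^6) addressable locations. A toy sketch of such an encoding follows; the packing format is an assumption for illustration, not RAND's actual hardware interface:

```python
# Toy sketch: two 10-bit stylus coordinates packed into one word, and
# recovered again. 1024 positions per axis over a 10" area gives a
# resolution of about 0.01 inch.

def pack_position(x, y):
    """Pack two 10-bit stylus coordinates into one 20-bit word."""
    if not (0 <= x < 1024 and 0 <= y < 1024):
        raise ValueError("coordinates must fit in 10 bits")
    return (x << 10) | y

def unpack_position(word):
    """Recover the (x, y) stylus coordinates from a packed word."""
    return (word >> 10) & 0x3FF, word & 0x3FF

print(unpack_position(pack_position(123, 1017)))  # (123, 1017)
```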

Licklider Issues "Libraries of the Future" 1965

In 1965 J.C.R. Licklider, Director of Project MAC (Machine-Aided Cognition and Multiple-Access Computers) at MIT and Professor of Electrical Engineering at MIT, published Libraries of the Future, a study of what libraries might be at the end of the twentieth century. Licklider's book reviewed systems for information storage, organization, and retrieval, use of computers in libraries, and library question-answering systems. In his discussion he was probably the first to raise general questions concerning the book's transition from print on paper to electronic form.

Ted Nelson Coins the Terms Hypertext, Hypermedia, and Hyperlink 1965

In 1965 self-styled "systems humanist" Ted Nelson (Theodor Holm Nelson) published "Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate," ACM '65 Proceedings of the 1965 20th national conference, 84-100In this paper Nelson coined the terms hypertext and hypermedia to refer to features of a computerized information system. He used the word "link" to refer the logical connections that came to be associated with the word "hyperlink."  

Nelson is also credited with inventing the word hyperlink, though its published origin is less specific:

"The term "hyperlink" was coined in 1965 (or possibly 1964) by Ted Nelson and his assistant Calvin Curtin at the start of Project Xanadu. Nelson had been inspired by "As We May Think", a popular essay by Vannevar Bush. In the essay, Bush described a microfilm-based machine (the Memex) in which one could link any two pages of information into a "trail" of related information, and then scroll back and forth among pages in a trail as if they were on a single microfilm reel. The closest contemporary analogy would be to build a list of bookmarks to topically related Web pages and then allow the user to scroll forward and backward through the list.

"In a series of books and articles published from 1964 through 1980, Nelson transposed Bush's concept of automated cross-referencing into the computer context, made it applicable to specific text strings rather than whole pages, generalized it from a local desk-sized machine to a theoretical worldwide computer network, and advocated the creation of such a network. Meanwhile, working independently, a team led by Douglas Engelbart (with Jeff Rulifson as chief programmer) was the first to implement the hyperlink concept for scrolling within a single document (1966), and soon after for connecting between paragraphs within separate documents (1968)" (Wikipedia article on Hyperlink, accessed 08-29-2010).

Wardrip-Fruin and Montfort, The New Media Reader (2003) 133-45.

The TUTOR Programming Language for Education and Games 1965 – 1969

In 1965 Paul Tenczar developed the TUTOR programming language for use in developing electronic learning programs called "lessons" for the PLATO system at the University of Illinois at Urbana-Champaign. It had "powerful answer-parsing and answer-judging commands, graphics and features to simplify handling student records and statistics by instructors." This also made it suitable for the creation of many non-educational lessons—that is, games—including flight simulators, war games, role-playing games such as Dungeons and Dragons (dnd), card games, word games, and medical lesson games.

The first documentation of the TUTOR language, under this name, appears to be The TUTOR Manual, CERL Report X-4, by R. A. Avner and P. Tenczar, January 1969.

Cyrus Levinthal Builds the First System for Interactive Display of Molecular Structures 1966

In 1966, using Project MAC, an early time-sharing system at MIT, Cyrus Levinthal built the first system for the interactive display of molecular structures.

"This program allowed the study of short-range interaction between atoms and the "online manipulation" of molecular structures. The display terminal (nicknamed Kluge) was a monochrome oscilloscope (figures 1 and 2), showing the structures in wireframe fashion (figures 3 and 4). Three-dimensional effect was achieved by having the structure rotate constantly on the screen. To compensate for any ambiguity as to the actual sense of the rotation, the rate of rotation could be controlled by globe-shaped device on which the user rested his/her hand (an ancestor of today's trackball). Technical details of this system were published in 1968 (Levinthal et al.). What could be the full potential of such a set-up was not completely settled at the time, but there was no doubt that it was paving the way for the future. Thus, this is the conclusion of Cyrus Levinthal's description of the system in Scientific American (p. 52):

It is too early to evaluate the usefulness of the man-computer combination in solving real problems of molecular biology. It does seem likely, however, that only with this combination can the investigator use his "chemical insight" in an effective way. We already know that we can use the computer to build and display models of large molecules and that this procedure can be very useful in helping us to understand how such molecules function. But it may still be a few years before we have learned just how useful it is for the investigator to be able to interact with the computer while the molecular model is being constructed.

"Shortly before his death in 1990, Cyrus Levinthal penned a short biographical account of his early work in molecular graphics. The text of this account can be found here."

In January 2014 two short films produced with the interactive molecular graphics and modeling system devised by Cyrus Levinthal and his collaborators in the mid-1960s were available at this link.

Ted Nelson & Andries van Dam Develop the First Hypertext Editing System 1967

In 1967 Ted Nelson (Theodor Holm Nelson), Andries van Dam, and students at Brown University collaborated on the first Hypertext Editing System (HES), based on Nelson's concept of hypertext.

"HES organized data into two main types: links and branching text. The branching text could automatically be arranged into menus and a point within a given area could also have an assigned name, called a label, and be accessed later by that name from the screen. Although HES pioneered many modern hypertext concepts, its emphasis was on text formatting and printing.

"HES ran on an IBM System/360/50 mainframe computer, which was inefficient for the processing power required by the system. The program was used by NASA's Houston Manned Spacecraft Center for documentation on the Apollo space program. The project's research was funded by IBM but the program was stopped around 1969, and replaced by the FRESS (File Retrieval and Editing System) project" (Wikipedia article on Hypertext Editing System, accessed 11-08-2013).

Steven A. Coons Develops the "Coons Patch" in Computer Graphics June 1967

In June 1967 Steven A. Coons, professor of mechanical engineering and researcher in interactive computer graphics at MIT's Electronic Systems Laboratory, published Surfaces for Computer-aided Design of Space Forms, Project MAC Report MAC-TR-41, MIT.

Known as the "The Little Red Book,

" the paper described what became known as the "Coons Patch"— "a formulation that presented the notation, mathematical foundation, and intuitive interpretation of an idea that would ultimately become the foundation for surface descriptions that are commonly used today, such as b-spline surfaces, NURB surfaces, etc. His technique for describing a surface was to construct it out of collections of adjacent patches, which had continuity constraints that would allow surfaces to have curvature which was expected by the designer. Each patch was defined by four boundary curves, and a set of "blending functions" that defined how the interior was constructed out of interpolated values of the boundaries" (Carlson, A Critical History of Computer Graphics and Animation, accessed 05-30-2009).

Douglas Engelbart Invents the Computer Mouse June 27, 1967 – November 17, 1970

On June 27, 1967 electrical engineer and inventor Douglas C. Engelbart of the Augmentation Research Center at SRI filed a patent application for an X-Y Position Indicator for a Display System. The device was covered by patent 3,541,541, granted on November 17, 1970. It eventually became known as the mouse.

Ivan Sutherland and Bob Sproull Create the First Virtual Reality Head Mounted Display System 1968

In 1968 Ivan Sutherland at the University of Utah, with the help of his student Bob Sproull, created the first Virtual Reality (VR) and Augmented Reality (AR) head mounted display system.

Sutherland's head mounted display was so heavy that it had to be suspended from the ceiling, and the formidable appearance of the device inspired its name—the Sword of Damocles. The system was primitive both in terms of user interface and realism, and the graphics comprising the virtual environment were simple wireframe rooms.

Evans & Sutherland Commercialize the Use of Computers as Simulators 1968

In 1968 Ivan Sutherland and David Evans, both professors at the University of Utah, founded Evans & Sutherland to commercialize the use of computers as simulators for training purposes.

Stanley Kubrick & Arthur C. Clarke Create "2001: A Space Odyssey" 1968

In 1968 the film 2001: A Space Odyssey, written by American film director Stanley Kubrick in collaboration with science fiction writer and futurist Arthur C. Clarke, captured imaginations with the idea of a computer that could see, speak, hear, and “think.” 

Perhaps the star of the film was the HAL 9000 computer. "HAL (Heuristically programmed ALgorithmic Computer) is an artificial intelligence, the sentient on-board computer of the spaceship Discovery. HAL is usually represented only as his television camera "eyes" that can be seen throughout the Discovery spaceship.... HAL is depicted as being capable not only of speech recognition, facial recognition, and natural language processing, but also lip reading, art appreciation, interpreting emotions, expressing emotions, reasoning, and chess, in addition to maintaining all systems on an interplanetary voyage.

"HAL is never visualized as a single entity. He is, however, portrayed with a soft voice and a conversational manner. This is in contrast to the human astronauts, who speak in terse monotone, as do all other actors in the film" (Wikipedia article on HAL 9000, accessed 05-24-2009).

"Kubrick and Clarke had met in New York City in 1964 to discuss the possibility of a collaborative film project. As the idea developed, it was decided that the story for the film was to be loosely based on Clarke's short story "The Sentinel", written in 1948 as an entry in a BBC short story competition. Originally, Clarke was going to write the screenplay for the film, but Kubrick suggested during one of their brainstorming meetings that before beginning on the actual script, they should let their imaginations soar free by writing a novel first, which the film would be based on upon its completion. 'This is more or less the way it worked out, though toward the end, novel and screenplay were being written simultaneously, with feedback in both directions. Thus I rewrote some sections after seeing the movie rushes -- a rather expensive method of literary creation, which few other authors can have enjoyed.' The novel ended up being published a few months after the release of the movie" (Wikipedia article on Arthur C. Clarke, accessed 05-24-2009).

The Computer Arts Society, the First Society for Computer Art, is Founded in London 1968

In the months following the groundbreaking London exhibition Cybernetic Serendipity, which showcased computer-based and technologically influenced works in graphics, music, film, and interactivity, Alan Sutcliffe, George Mallen, and John Lansdown founded the Computer Arts Society in London. The Society enabled relatively isolated artists working with computers in a variety of fields to meet and exchange information. It also ran practical courses, conferences and exhibitions.

"In March 1969, CAS organised an exhibition entitled Event One, which was held at the Royal College of Art. The exhibition showcased innovative work with computers across a broad range of disciplines, including sculpture, graphics, music, film, architecture, poetry, theatre and dance. CAS founder John Lansdown, for example, designed and organised a dance performance that was choreographed entirely by the computer and performed by members of the Royal Ballet School. The multi-media approach of exhibitions such as Event One greatly influenced younger artists and designers emerging at this time. Many of these artists were rebelling against the traditional fine art hierarchies of the time, and went on to work in the new fields of computer, digital, and video art as a result.

"CAS established links with educational establishments, journalists and industry, ensuring greater coverage of their activities and more importantly helping to provide access to computing technology at a time when this was difficult. CAS members were remarkably ahead of their time in recognising the long term impact that the computer would have on society, and in providing services to those already working creatively with the computer. By 1970 CAS had 377 members in 17 countries. Its journal 'PAGE' was first edited by auto-destructive artist Gustav Metzger, and is still being produced today. The Computer Arts Society is a specialist group of the British Computer Society" (http://www.vam.ac.uk/content/articles/t/v-and-a-computer-art-collections/, accessed 01-19-2014).

In January 2014 all of the early issues of Page, beginning with "Page 1," April 1969 were available from the website of the Computer Arts Society Specialty Group of the BCS at this link.

In 2007 the Computer Arts Society donated its collection of original computer art to the Victoria and Albert Museum in London, which maintains one of the world's largest and most significant collections of computer art. The V&A's holdings in this field were the subject of an article by Honor Beddard entitled "Computer Art at the V&A," V&A Online Journal, Issue No. 2 (2009), accessed 01-19-2014.

Licklider & Taylor Describe Features of the Future ARPANET; Description of a Computerized Personal Assistant April 1968

In 1968 American psychologist and computer scientist J.C.R. Licklider of MIT and Robert W. Taylor, then director of ARPA's Information Processing Techniques Office, published "The Computer as a Communication Device," Science and Technology, April 1968. In this paper, extensively illustrated with whimsical cartoons, they described features of the future ARPANET and other aspects of anticipated human-computer interaction.

Honoring the artificial intelligence pioneer Oliver Selfridge, on pp. 38-39 of the paper they proposed a device they referred to as OLIVER (On-Line Interactive Vicarious Expediter and Responder). OLIVER was one of the clearest early descriptions of a computerized personal assistant:

"A very important part of each man's interaction with his on-line community will be mediated by his OLIVER. The acronym OLIVER honors Oliver Selfridge, originator of the concept. An OLIVER is, or will be when there is one, an 'on-line interactive vicarious expediter and responder,' a complex of computer programs and data that resides within the network and acts on behalf of its principal, taking care of many minor matters that do not require his personal attention and buffering him from the demanding world. 'You are describing a secretary,' you will say. But no! secretaries will have OLIVERS.

"At your command, your OLIVER will take notes (or refrain from taking notes) on what you do, what you read, what you buy and where you buy it. It will know who your friends are, your mere acquiantances. It will know your value structure, who is prestigious in your eyes, for whom you will do with what priority, and who can have access to which of your personal files. It will know your organizations's rules pertaining to proprietary information and the government's rules relating to security classification.

"Some parts of your OLIVER program will be common with parts of ther people's OLIVERS; other parts will be custom-made for you, or by you, or will have developed idiosyncracies through 'learning based on its experience at your service."

Douglas Engelbart Demonstrates Hypertext, Text Editing, Windows, Email and a Mouse: "The Mother of All Demos" December 9, 1968

On December 9, 1968 Douglas Engelbart of the Stanford Research Institute, Menlo Park, California, presented a 100-minute demonstration at the San Francisco Convention Center of an “oNLine System” (NLS), the features of which included hypertext, text editing, screen windowing, and email. To make this system operate, Engelbart used the mouse which he had invented the previous year.

In December 2013 numerous still images, a complete video stream of the demo, and 35 brief flash streaming video clips of different segments, were available from the Engelbart Collection at Stanford University at this link

The First ATM is Installed at Chemical Bank in New York Circa 1969 – 1970

In 1969 or 1970 the first automatic teller machine (ATM) was installed. Dates conflict as to whether this was in 1969 or slightly later. The first machine installed at Chemical Bank in New York may have been only a cash dispenser.

1970 – 1980

Xerox PARC is Founded 1970

In 1970 Xerox opened the Palo Alto Research Center (PARC). PARC became the incubator of the Graphical User Interface (GUI), the mouse, the WYSIWYG text editor, the laser printer, the desktop computer, the Smalltalk programming language and integrated development environment, Interpress (a resolution-independent graphical page description language and the precursor to PostScript), and Ethernet.

Negroponte's "The Architecture Machine" is Published 1970

In his book The Architecture Machine, published in 1970, architect and computer scientist Nicholas Negroponte of MIT described early research on computer-aided design, and in so doing covered early work on human-computer interaction, artificial intelligence, and computer graphics. The book contained a large number of illustrations.

"Most of the machines that I will be discussing do not exist at this time. The chapters are primarily extrapolations into the future derived from experiences with various computer-aided design systems. . . .

"There are three possible ways in which machines can assist the design process: (1) current procedures can be automated, thus speeding up and reducing the cost of existing practices; (2) existing methods can be altered to fit within the specifications and constitution of a machine, where only those issues are considered that are supposedly machine-compatible; (3) the design process, considered as evolutionary, can be presented to a machine, also considered as evolutionary, and a mutal training, resilience, and growth can be developed" (From Negroponte's "Preface to a Preface," p. [6]).

Negroponte's book has been called the first book on the personal computer. On that I do not agree. The book contains only vague discussions of the possibility of eventual personal computers. Most specifically it says, as caption to its second illustration, a cartoon relating to a home computer, "The computer at home is not a fanciful concept. As the cost of computation lowers, the computer utility will become a consumer item, and every child should have one." Instead The Architecture Machine may be the first book on human-computer interaction, and on the possibilities of computer-aided design.

(This entry was last revised on 04-20-2014.)

IBM Performs the First Test of Magnetic Stripe Transaction Card Technology January 1970 – May 1973

The first test of magnetic stripe transaction card technology developed by IBM occurred in January 1970 at the American Airlines terminal at Chicago's O'Hare Airport with the Automatic Ticket Vendor.

Reference: Computer History Museum, Jerome Svigals donation, "Automatic Ticket Vendor Press Kit", October 30, 1969. X3951.2007.

Though the test at O'Hare Airport was successful, the airline did not implement the technology because of a recession. IBM patented the technology, but did not announce its availability until 1973.

IBM Introduces Speech Recognition Technology 1971

In 1971 IBM’s first operational application of speech recognition enabled customer engineers servicing equipment to “talk” to and receive “spoken” answers from a computer that could recognize about 5,000 words.

One of the First Touchscreens Appears on the Plato IV System 1972

In 1972 one of the first touchscreens in a working computer application appeared in the terminal of the PLATO IV system at the University of Illinois.

"In 1972 a new system named PLATO IV was ready for operation. The PLATO IV terminal was a major innovation. It included Bitzer's orange plasma display invention which incorporated both memory and bitmapped graphics into one display. This plasma display included fast vector line drawing capability and ran at 1260 baud, rendering 60 lines or 180 characters per second. The display was a 512x512 bitmap, with both character and vector plotting done by hardwired logic. Users could provide their own characters to support rudimentary bitmap graphics. Compressed air powered a piston-driven microfiche image selector that permitted colored images to be projected on the back of the screen under program control. The PLATO IV display also included a 16-by-16 grid infrared touch panel allowing students to answer questions by touching anywhere on the screen" (Wikipedia article on Plato (computer system), accessed 12-30-2009).

SPACEWAR: Fanatic Life and Symbolic Death Among the Computer Bums December 7, 1972

On December 7, 1972 Stewart Brand published "SPACEWAR: Fanatic Life and Symbolic Death Among the Computer Bums" in Rolling Stone magazine.

"The first 'Intergalactic Spacewar Olympics' will be held here, Wednesday 19 October, 2000 hours. First prize will be a year's subscription to 'Rolling Stone'. The gala event will be reported by Stone Sports reporter Stewart Brand & photographed by Annie Liebowitz. Free Beer!

"Ready or not, computers are coming to the people.  

"That’s good news, maybe the best since psychedelics. It’s way off the track of the “Computers — Threat or menace?” school of liberal criticism but surprisingly in line with the romantic fantasies of the forefathers of the science such as Norbert Wiener, Warren McCulloch, J.C.R. Licklider, John von Neumann and Vannevar Bush. The trend owes its health to an odd array of influences: The youthful fervor and firm dis-Establishmentarianism of the freaks who design computer science; an astonishingly enlightened research program from the very top of the Defense Department; an unexpected market-Banking movement by the manufacturers of small calculating machines, and an irrepressible midnight phenomenon known as Spacewar.

"Reliably, at any nighttime moment (i.e. non-business hours) in North America hundreds of computer technicians are effectively out of their bodies, locked in life-or-Death space combat computer-projected onto cathode ray tube display screens, for hours at a time, ruining their eyes, numbing their fingers in frenzied mashing of control buttons, joyously slaying their friend and wasting their employers' valuable computer time. Something basic is going on.  

"Rudimentary Spacewar consists of two humans, two sets of control buttons or joysticks, one TV-like display and one computer. Two spaceships are displayed in motion on the screen, controllable for thrust, yaw, pitch and the firing of torpedoes. Whenever a spaceship and torpedo meet, they disappear in an attractive explosion. That’s the original version invented in 1962 at MIT by Steve Russell. (More on him in a moment.)  

"October, 1972, 8 PM, at Stanford’s Artificial Intelligence (AI) Laboratory, moonlit and remote in the foothills above Palo Alto, California. Two dozen of us are jammed in a semi-dark console room just off the main hall containing AI’s PDP-10 computer. AI’s Head System Programmer and most avid Spacewar nut, Ralph Gorin, faces a display screen which says only:  


(http://downlode.org/Etext/Spacewar/, accessed 02-25-2010).

The Xerox Alto: Conceptually, the First Personal Computer System 1973

In 1973 the Alto computer system was operational at Xerox PARC. Conceptually the first personal computer system, the Alto eventually featured the first WYSIWYG (What You See Is What You Get) editor, a graphical user interface (GUI), networking through Ethernet, and a mouse. The system was priced at $32,000.


The Plato IV System, Probably the World's First Online Community 1973

What was probably the world's first online community began to emerge in 1973 through online forums and the message board PLATO Notes, developed by David R. Woolley for the PLATO IV system then evolving at the University of Illinois at Urbana-Champaign.


The Brain-Computer Interface 1973

In 1973 computer scientist Jacques J. Vidal of UCLA coined the term brain-computer interface (BCI) in his paper "Toward Direct Brain-Computer Communication," Annual Review of Biophysics and Bioengineering 2: 157–80. doi:10.1146/annurev.bb.02.060173.001105. PMID 4583653.


The First Computer Role-Playing Game: Dungeons & Dragons 1974 – 1975

From 1974 to 1975 Gary Whisenhunt and Ray Wood at Southern Illinois University, Carbondale, wrote the first computer role-playing game in the TUTOR programming language for the PLATO system. It was called Dungeons & Dragons (dnd).

The name "dnd" was derived from the abbreviation "DND" (D&D) from the original tabletop role-playing game Dungeons & Dragons, first released in 1974. The publication of D&D is widely regarded as the beginning of modern role-playing games and of the role-playing game industry.


Ted Nelson Publishes a Manifesto of the Microcomputer Revolution 1974

In 1974 Ted Nelson (Theodor Holm Nelson) self-published from South Bend, Indiana, the book Computer Lib, sub-titled You can and must understand computers NOW, issued together with Dream Machines: New freedoms through computer screens—a minority report. In his book Tools for Thought: The History and Future of Mind-Expanding Technology Howard Rheingold called Computer Lib "the best-selling underground manifesto of the microcomputer revolution."

In 1987 Microsoft Press reissued Nelson's book with an introduction by Stewart Brand of the Whole Earth Catalog.

"Both the 1974 and 1987 editions have a highly unconventional layout, with two front covers (one for Computer Lib and the other for Dream Machines) and the division between the two books marked by text (for the other side) rotated 180°. The text itself is broken up into many sections, with simulated pull-quotes, comics, side bars, etc., similar to a magazine layout" (Wikipedia article on Computer Lib /Dream Machines, accessed 03-08-2012).


Myron Krueger's Videoplace Pioneers "Artificial Reality" Circa 1975

In the 1970s American computer artist Myron W. Krueger, working at the University of Wisconsin-Madison and the University of Connecticut at Storrs, developed Videoplace, which allowed users to interact with virtual objects for the first time. It created an artificial reality that surrounded its users and responded to their movements and actions, without encumbering them with goggles or gloves.

"The Videoplace used projectors, video cameras, special purpose hardware, and onscreen silhouettes of the users to place the users within an interactive environment. Users in separate rooms in the lab were able to interact with one another through this technology. The movements of the users recorded on video were analyzed and transferred to the silhouette representations of the users in the Artificial Reality environment. By the users being able to visually see the results of their actions on screen, through the use of the crude but effective colored silhouettes, the users had a sense of presence while interacting with onscreen objects and other users even though there was no direct tactile feedback available. The sense of presence was enough that users pulled away when their silhouettes intersected with those of other users.  The Videoplace is now on permanent display at the State Museum of Natural History located at the University of Connecticut" (Wikipedia article on Videoplace, accessed 10-22-2014).
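The silhouette interaction described above reduces, at its core, to comparing two binary images. As a rough illustration (the grid representation, threshold value, and function names below are invented, not Krueger's implementation), an "interaction" fires wherever the two users' silhouette masks overlap:

```python
# Sketch of silhouette-intersection detection in a Videoplace-style system.
# Each user's camera frame is thresholded into a binary silhouette mask;
# an interaction event fires wherever the two masks overlap.
# All names and the grid representation are illustrative assumptions.

def to_mask(frame, threshold=128):
    """Threshold a grayscale frame (list of rows) into a binary silhouette."""
    return [[1 if px > threshold else 0 for px in row] for row in frame]

def silhouettes_intersect(mask_a, mask_b):
    """Return True if any cell is occupied in both silhouettes."""
    return any(
        a and b
        for row_a, row_b in zip(mask_a, mask_b)
        for a, b in zip(row_a, row_b)
    )

user_a = to_mask([[0, 200, 0], [0, 200, 0]])
user_b = to_mask([[0, 0, 180], [0, 190, 0]])
print(silhouettes_intersect(user_a, user_b))  # True: both occupy row 1, col 1
```

In the real system this comparison ran against video-rate input on special-purpose hardware; the principle, however, is just this cell-by-cell test.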

In 1983 Krueger published a book entitled Artificial Reality, updated in a second edition in 1991. This work is one of the pioneering treatises on virtual or artificial reality.


Apple II: The First Personal Computer Sold as a Fully Assembled Product 1977

In 1977 Apple Computer introduced the Apple II, the first personal computer sold as a fully assembled product, and the first with color graphics. When the first spreadsheet program, VisiCalc, was introduced for the Apple II in 1979 it greatly stimulated sales, as people bought the Apple II just to run VisiCalc.


The Sayre Glove 1977

In 1977 Daniel J. Sandin and Thomas DeFanti at the Electronic Visualization Laboratory, a cross-disciplinary research lab at the University of Illinois at Chicago, created the Sayre Glove, the first wired glove or data glove. The glove was based on an idea of a colleague at the laboratory, Richard Sayre. An inexpensive, lightweight glove to monitor hand movements, the Sayre Glove provided an effective method for multidimensional control, such as mimicking a set of sliders.

"This device used light based sensors with flexible tubes with a light source at one end and a photocell at the other. As the fingers were bent, the amount of light that hit the photocells varied, thus providing a measure of finger flexion. It was mainly used to manipulate sliders, but was lightweight and inexpensive" (Wikipedia article on Daniel J. Sandin, accessed 10-03-2013).

This may mark the beginning of gesture recognition research in computer science.
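The sensing principle quoted above lends itself to a very small sketch: since bending the tube reduces the light reaching the photocell, flexion can be estimated by interpolating the reading between two calibration values. The calibration numbers and function name here are hypothetical, not measurements from the actual device:

```python
# Sketch of reading finger flexion from a Sayre-Glove-style optical sensor.
# Bending the flexible tube reduces the light reaching the photocell, so
# flexion is estimated by interpolating between a straight-finger reading
# and a fully-bent reading. All values here are hypothetical calibrations.

def flexion(reading, straight=1.00, full_bend=0.20):
    """Map a photocell reading to flexion in [0, 1] (0 = straight, 1 = fist)."""
    span = straight - full_bend
    frac = (straight - reading) / span
    return max(0.0, min(1.0, frac))  # clamp: readings may drift past calibration

print(flexion(1.00))  # 0.0 (finger straight)
print(flexion(0.60))  # 0.5 (half bent)
print(flexion(0.20))  # 1.0 (fully bent)
```

Mapping each finger's output onto a continuous range is exactly what made the glove useful as a bank of virtual sliders.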


Early Interactive Computing and Virtual Reality 1978 – 1979

The term hypermedia is a logical extension of the term hypertext, in which graphics, audio, video, plain text and hyperlinks intertwine to create a generally non-linear medium of information. Funded by ARPA, the Aspen Movie Map was an early hypermedia project produced in 1978-79 by the Architecture Machine Group (ARC MAC) at MIT under the direction of Andrew Lippman. It allowed the user to take a virtual tour through the city of Aspen, Colorado.

"ARPA funding during the late 1970s was subject to the military application requirements of the notorious Mansfield Amendment introduced by Mike Mansfield (which had severely limited funding for hypertext researchers like Douglas Engelbart).

"The Aspen Movie Map's military application was to solve the problem of quickly familiarizing soldiers with new territory. The Department of Defense had been deeply impressed by the success of Operation Entebbe in 1976, where the Israeli commandos had quickly built a crude replica of the airport and practiced in it before attacking the real thing. DOD hoped that the Movie Map would show the way to a future where computers could instantly create a three-dimensional simulation of a hostile environment at much lower cost and in less time (see virtual reality).

"While the Movie Map has been referred to as an early example of interactive video, it is perhaps more accurate to describe it as a pioneering example of interactive computing. Video, audio, still images, and metadata were retrieved from a database and assembled on the fly by the computer (an Interdata minicomputer running the MagicSix operating system) redirecting its actions based upon user input; video was the principle, but not sole affordance of the interaction" (Wikipedia article on Aspen Movie Map, accessed 04-16-2009).


1980 – 1990

The Xerox Star: The "Office of the Future" 1981

In 1981 Xerox introduced the 8010 Star Information System, the first commercial system to incorporate a bitmapped display, a windows-based graphical user interface, icons, folders, mouse, Ethernet networking, file servers, printer servers and e-mail.

Xerox's 8010 Star was developed at Xerox's Systems Development Department (SDD) in El Segundo, California. A section of SDD ("SDD North") was located in Palo Alto, California, and included some people borrowed from Xerox's PARC. SDD's mission was to design the "Office of the Future"—a system, easy to use, that would incorporate the best features of the Xerox Alto, and could automate many office tasks.


"Blade Runner" 1982

The 1982 science fiction film Blade Runner, starring Harrison Ford and directed by Ridley Scott, loosely based on the novel Do Androids Dream of Electric Sheep? by Philip K. Dick, depicted a dreary, rainy, and polluted Los Angeles in 2019. In the film genetically engineered biorobots called replicants—visually indistinguishable from adult humans—are used for dangerous and degrading work in Earth's "off-world colonies." After a minor replicant uprising, replicants are banned on Earth, and specialist police units called "blade runners" are trained to hunt down and "retire" (kill) escaped replicants on Earth.

The film, which became a cult classic for many reasons, including its unique sets, lighting, costumes and visual effects, is considered the last great science fiction film in which the special effects were produced entirely through analog rather than digital or computer-graphics methods, using elaborate model-making, multiple exposures, etc.

Scott's original director's cut of the film was first issued as a DVD in 1999. In 2007 the so-called "Final Cut," with a great deal of supplementary material, including three previous versions of the film and a "definitive" documentary even longer than the original film, was issued on DVD and Blu-ray. The documentary and the collection of versions of the film presented a superb opportunity to gain insight into the way that Ridley Scott created a film.


The DataGlove, a Hand Gesture Interface Device 1982 – 1989

In 1982 Thomas G. Zimmerman of Redwood City, California filed a patent (US Patent 4542291) on an optical flex sensor mounted in a glove to measure finger bending. Continuing this research, Zimmerman worked with Jaron Lanier to incorporate ultrasonic and magnetic hand position tracking technology to create the Power Glove and the DataGlove, respectively (US Patent 4988981, filed 1989). The optical flex sensor used in the DataGlove was invented by Young L. Harvill (US Patent 5097252, filed 1989) who scratched the fiber near the finger joint to make it locally sensitive to bending. 

The DataGlove is considered one of the first commercially available wired gloves. The first wired glove available to home users was the Nintendo Power Glove, released in 1989 as a gaming glove for the Nintendo Entertainment System. It had a crude tracker and finger bend sensors, plus buttons on the back. The sensors in the Power Glove were also used by hobbyists to create their own datagloves. Both the DataGlove and the Power Glove were based on Zimmerman's original instrumented glove or wired glove.

Zimmerman, Lanier et al., "A Hand Gesture Interface Device" (1987).


One of the First Commercially Available Touchscreen Computers November 1983

In November 1983 Hewlett-Packard of Palo Alto, California, introduced the HP-150, one of the earliest commercially available touchscreen computers.

"The screen is not a touch screen in the strict sense, but a 9" Sony CRT surrounded by infrared emitters and detectors which detect the position of any non-transparent object on the screen. In the original HP-150, these emitters & detectors were placed within small holes located in the inside of the monitor's bezel (which resulted in the bottom series of holes sometimes filling with dust and causing the touch screen to fail; until the dust was vacuumed from the holes)" (Wikipedia article on HP-150, accessed 12-30-2009).
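The detection scheme quoted above can be sketched in a few lines: a fingertip interrupts one or more horizontal and vertical infrared beams, and the touch point is taken as the center of the interrupted beams on each axis. The beam numbering and function name below are illustrative assumptions, not HP's firmware:

```python
# Sketch of how an infrared-grid touchscreen like the HP-150's resolves a
# touch: the finger breaks IR beams crossing the screen, and the touch
# point is the center of the broken beams on each axis. The beam indices
# and function name are illustrative, not HP's actual implementation.

def touch_position(broken_cols, broken_rows):
    """Return (col, row) at the center of the interrupted beams, or None."""
    if not broken_cols or not broken_rows:
        return None  # no touch registered on one axis
    col = sum(broken_cols) / len(broken_cols)
    row = sum(broken_rows) / len(broken_rows)
    return (col, row)

# A fingertip wide enough to break beams 3-4 horizontally and beam 7 vertically:
print(touch_position([3, 4], [7]))  # (3.5, 7.0)
```

Averaging the broken beams also explains the design's coarse resolution: the grid only resolves positions to roughly the beam spacing, which was adequate for touching on-screen buttons.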


The Greatest PC Keyboard of All Time? 1984 – 2008

In 1984 IBM introduced the model M keyboard, considered by PC World in July 2008 to be the "greatest keyboard of all time." The PC World article contained a remarkable series of images showing how the keyboard was engineered with captions describing its many virtues.


Steve Jobs Introduces the "Mac" January 24, 1984

On January 24, 1984 Apple Computer introduced the Macintosh (Mac), with a graphical user interface (GUI) based on the Xerox Star system.


Kasparov Defeats 32 Different Chess Computers 1985

"In 1985, in Hamburg, I played against thirty-two different chess computers at the same time in what is known as a simultaneous exhibition. I walked from one machine to the next, making my moves over a period of more than five hours. The four leading chess computer manufacturers had sent their top models, including eight named after me from the electronics firm Saitek.  

"It illustrates the state of computer chess at the time that it didn't come as much of a surprise when I achieved a perfect 32–0 score, winning every game, although there was an uncomfortable moment. At one point I realized that I was drifting into trouble in a game against one of the "Kasparov" brand models. If this machine scored a win or even a draw, people would be quick to say that I had thrown the game to get PR for the company, so I had to intensify my efforts. Eventually I found a way to trick the machine with a sacrifice it should have refused. From the human perspective, or at least from my perspective, those were the good old days of man vs. machine chess" (Garry Kasparov, "The Chess Master and the Computer," The New York Review of Books 57, February 11, 2010).


The First Commercially Available Tablet Computer September 1989

In 1989 GRiD Systems, a subsidiary of Tandy Corporation, Fort Worth, Texas, introduced the first commercially available tablet computer: the GRiDPad, which used an operating system based on MS-DOS.


1990 – 2000

The First "Search Engine" but Not a "Web Search Engine" 1990

In 1990 Alan Emtage, Bill Heelan, and Peter J. Deutsch—students at McGill University, Montreal, Canada—wrote ARCHIE, a program designed to index FTP archives. ARCHIE was the first “search engine,” as distinct from a “web search engine.”
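Indexing FTP archives in the ARCHIE manner amounts to building an inverted index from filenames to the hosts and paths that serve them. The toy sketch below uses invented listings and host names for illustration; it is not ARCHIE's actual code:

```python
# Toy sketch of ARCHIE-style indexing: take directory listings gathered from
# FTP hosts and build an inverted index from filename to (host, path) pairs,
# so a user can ask "which sites serve this file?". The listings and host
# names here are invented for illustration.

def build_index(listings):
    """listings: {host: [paths]} -> {filename: [(host, path), ...]}"""
    index = {}
    for host, paths in listings.items():
        for path in paths:
            filename = path.rsplit("/", 1)[-1]  # last path component
            index.setdefault(filename, []).append((host, path))
    return index

listings = {
    "ftp.example.edu": ["/pub/gnu/emacs-18.59.tar.Z", "/pub/misc/README"],
    "archive.example.org": ["/mirrors/gnu/emacs-18.59.tar.Z"],
}
index = build_index(listings)
print(sorted(host for host, _ in index["emacs-18.59.tar.Z"]))
# ['archive.example.org', 'ftp.example.edu']
```

The real system periodically re-fetched each site's listings to keep the index fresh; queries then ran against the index rather than against the live FTP sites.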


Development of Neural Networks 1993

In 1993 psychologist, neuroscientist and cognitive scientist James A. Anderson of Brown University, Providence, RI, published "The BSB Model: A simple non-linear autoassociative network," in M. Hassoun (ed.), Associative Neural Memories: Theory and Implementation (1993). Anderson's neural networks were applied to models of human concept formation, decision making, speech perception, and vision.

Anderson, J. A., Spoehr, K. T. and Bennett, D. J., "A study in numerical perversity: Teaching arithmetic to a neural network," in D. S. Levine and M. Aparicio (eds.), Neural Networks for Knowledge Representation and Inference (1994).
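The BSB ("Brain-State-in-a-Box") dynamics named above can be sketched in a few lines: a state vector is repeatedly pushed along W·x and clipped to the [-1, 1] hypercube (the "box"), so a degraded input settles into a stored corner. This is a minimal sketch; the Hebbian outer-product weights and parameter values are toy choices, not Anderson's published model:

```python
# Minimal sketch of Brain-State-in-a-Box (BSB) autoassociative dynamics:
# x <- clip(x + alpha * W @ x, -1, 1), iterated until the state saturates
# in a corner of the hypercube. Weights and parameters are toy choices.

def bsb_step(x, W, alpha=0.5):
    Wx = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    return [max(-1.0, min(1.0, xi + alpha * wi)) for xi, wi in zip(x, Wx)]

def bsb_settle(x, W, steps=50):
    for _ in range(steps):
        x = bsb_step(x, W)
    return x

pattern = [1.0, -1.0, 1.0, -1.0]  # stored memory: a corner of the box
W = [[pi * pj / len(pattern) for pj in pattern] for pi in pattern]  # Hebbian weights

noisy = [0.6, -0.2, 0.4, -0.5]    # degraded version of the stored pattern
print(bsb_settle(noisy, W))       # settles to [1.0, -1.0, 1.0, -1.0]
```

The clipping is what makes the model "autoassociative": any input sufficiently correlated with a stored pattern is driven outward until it recovers that pattern exactly.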


The Singularity January 1993

Mathematician, computer scientist and science fiction writer Vernor Vinge called the creation of the first ultraintelligent machine the Singularity in the January 1983 issue of Omni magazine. Vinge's follow-up paper, "The Coming Technological Singularity: How to Survive in the Post-Human Era," presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center (now NASA John H. Glenn Research Center at Lewis Field) and the Ohio Aerospace Institute, March 30-31, 1993, and reprinted slightly changed in the Winter 1993 issue of Whole Earth Review, contained the oft-quoted statement,

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."

"Vinge refines his estimate of the time scales involved, adding, 'I'll be surprised if this event occurs before 2005 or after 2030.'

"Vinge continues by predicting that superhuman intelligences, however created, will be able to enhance their own minds faster than the humans that created them. 'When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid.' This feedback loop of self-improving intelligence, he predicts, will cause large amounts of technological progress within a short period of time" (Wikipedia article on Technological singularity, accessed 05-24-2009).


The First Defeat of a Human Champion by a Computer in a Game Competition 1994

At the Second Man-Machine World Championship in 1994, Chinook, a computer checkers program developed around 1989 at the University of Alberta by a team led by Jonathan Schaeffer, won due to human frailty when the champion, Marion Tinsley, was forced to withdraw for health reasons. This was the first time that a computer program defeated a human champion in a game competition.

 "In 1996 the Guinness Book of World Records recognized Chinook as the first program to win a human world championship" (http://webdocs.cs.ualberta.ca/~chinook/project/, accessed 01-24-2010).


An Online Textbook of Cyberpsychology is Published January 1996

In January 1996 psychologist John Suler of Rider University, Lawrenceville, New Jersey, published The Psychology of Cyberspace as an online hypertext book. This early hypertext book has been cited as a founding work in the developing fields of cyberpsychology and cybertherapy, in which avatars assist with treatment.


Kasparov Loses to Deep Blue: The First Time a Human Chess Player Loses to a Computer Under Tournament Conditions May 11, 1997

On May 11, 1997 Garry Kasparov, sometimes regarded as the greatest chess player of all time, resigned 19 moves into Game 6 against Deep Blue, an IBM RS/6000 SP supercomputer capable of calculating 200 million chess positions per second. This was the first time that a human world chess champion lost to a computer under tournament conditions.

The event, which took place at the Equitable Center in New York, was broadcast live from IBM's website via a Java viewer, and became the world's record "Net event" at the time.

"Since the emergence of artificial intelligence and the first computers in the late 1940s, computer scientists compared the performance of these 'giant brains' with human minds, and gravitated to chess as a way of testing the calculating abilities of computers. The game is a collection of challenging problems for minds and machines, but has simple rules, and so is perfect for such experiments.

"Over the years, many computers took on many chess masters, and the computers lost.

"IBM computer scientists had been interested in chess computing since the early 1950s. In 1985, a graduate student at Carnegie Mellon University, Feng-hsiung Hsu, began working on his dissertation project: a chess playing machine he called ChipTest. A classmate of his, Murray Campbell, worked on the project, too, and in 1989, both were hired to work at IBM Research. There, they continued their work with the help of other computer scientists, including Joe Hoane, Jerry Brody and C. J. Tan. The team named the project Deep Blue. The human chess champion won in 1996 against an earlier version of Deep Blue; the 1997 match was billed as a 'rematch.'

"The champion and computer met at the Equitable Center in New York, with cameras running, press in attendance and millions watching the outcome. The odds of Deep Blue winning were not certain, but the science was solid. The IBMers knew their machine could explore up to 200 million possible chess positions per second. The chess grandmaster won the first game, Deep Blue took the next one, and the two players drew the three following games. Game 6 ended the match with a crushing defeat of the champion by Deep Blue." 

"The AI crowd, too, was pleased with the result and the attention, but dismayed by the fact that Deep Blue was hardly what their predecessors had imagined decades earlier when they dreamed of creating a machine to defeat the world chess champion. Instead of a computer that thought and played chess like a human, with human creativity and intuition, they got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force. As Igor Aleksander, a British AI and neural networks pioneer, explained in his 2000 book, How to Build a Mind:  

" 'By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s. In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives.'

"It was an impressive achievement, of course, and a human achievement by the members of the IBM team, but Deep Blue was only intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better" (Garry Kasparov, "The Chess Master and the Computer," The New York Review of Books 57, February 11, 2010).
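The brute-force style of play described above rests on exhaustive game-tree search. As a toy sketch (an abstract dict-free tree of nested lists, nothing resembling Deep Blue's actual code), minimax backs a static evaluation up the tree, systematically considering every line to a fixed depth:

```python
# Toy sketch of the brute-force search idea behind machines like Deep Blue:
# minimax walks every line of play and backs up a numeric evaluation,
# choosing moves by exhaustive number-crunching rather than intuition.
# The game tree here is abstract (nested lists), not chess.

def minimax(node, maximizing=True):
    """node: a numeric leaf evaluation, or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node  # leaf: static evaluation of the position
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two candidate moves for the maximizer; the opponent then picks the reply
# that is worst for us, so move A guarantees 3 while move B guarantees 2.
tree = [
    [3, 12],   # move A: opponent answers with the 3
    [2, 8],    # move B: opponent answers with the 2
]
print(minimax(tree))  # 3 -> move A is best under perfect play
```

Deep Blue layered enormous engineering on this skeleton, including custom chess chips and pruning, but the 200-million-positions-per-second figure in the passage above is exactly this kind of leaf evaluation performed at scale.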


The First "Advanced" or "Freestyle" or "Centaur" Chess Event June 1998

The first Advanced Chess event, in which each human player used a computer chess program to help him explore the possible results of candidate moves, was held in June 1998 in León, Spain. The match was played between Garry Kasparov, using the German chess program Fritz 5, and Veselin Topalov, using ChessBase 7.0. The analytical engines used, such as Fritz, HIARCS and Junior, were integrated into these two programs and could be called up with a click of the mouse. It was a 6-game match, and it was arranged in advance that the players would consult the built-in million-game databases only for the 3rd and 4th games, and would use only the analytical engines, without the databases, for the remaining games. The time available to each player during the games was 60 minutes. The match ended in a 3-3 tie.

Since the first event, Advanced Chess matches have often been called Freestyle chess, in which players can play without computer assistance, can simply follow the directions of a computer program, or can play as a "centaur," listening to the moves advocated by the engine but occasionally overriding them. In 2014 the best Freestyle chess player was Intagrand, a team of humans and several different chess programs.


2000 – 2005

The Film: "A. I. Artificial Intelligence" 2001

In 2001 American director, screenwriter and film producer Steven Spielberg directed, co-authored and produced, through DreamWorks and Amblin Entertainment, the science fiction film A.I. Artificial Intelligence, telling the story of David, an android robot child programmed with the ability to love and to dream. The film explored the hopes and fears involved with efforts to simulate human thought processes, and the social consequences of creating robots that may be better than people at specialized tasks.

The film was a 1970s project of Stanley Kubrick, who eventually turned it over to Spielberg. The project languished in development hell for nearly three decades before technology advanced sufficiently for a successful production. The film required enormously complex puppetry, computer graphics, and make-up prosthetics, which are well-described and explained in the supplementary material in the two-disc special edition of the film issued on DVD in 2002.


"Minority Report": The Movie 2002

Steven Spielberg directed the 2002 science fiction film Minority Report, loosely based on the short story "The Minority Report" by Philip K. Dick.

"It is set primarily in Washington, D.C. and Northern Virginia in the year 2054, where "Precrime", a specialized police department, apprehends criminals based on foreknowledge provided by three psychics called 'precogs'. The cast includes Tom Cruise as Precrime officer John Anderton, Colin Farrell as Department of Justice agent Danny Witwer, Samantha Morton as the senior precog Agatha, and Max von Sydow as Anderton's superior Lamar Burgess. The film has a distinctive look, featuring desaturated colors that make it almost resemble a black-and-white film, yet the blacks and shadows have a high contrast, resembling film noir."

"Some of the technologies depicted in the film were later developed in the real world – for example, multi-touch interfaces are similar to the glove-controlled interface used by Anderton. Conversely, while arguing against the lack of physical contact in touch screen phones, PC Magazine's Sascha Segan argued in February 2009, 'This is one of the reasons why we don't yet have the famous Minority Report information interface. In that movie, Tom Cruise donned special gloves to interact with an awesome PC interface where you literally grab windows and toss them around the screen. But that interface is impractical without the proper feedback—without actually being able to feel where the edges of the windows are' " (Wikipedia article on Minority Report [film] accessed 05-25-2009).

The two-disc special edition of the film issued on DVD in 2002 contained excellent supplementary material on the special digital effects.


"Second Life" is Launched 2003

In 2003 Linden Lab of San Francisco, California, made publicly available the privately owned, partly subscription-based, virtual world called Second Life.


The Actroid: A Humanoid Robot and Android November 2003 – 2007

In November 2003 Hiroshi Ishiguro (石黒浩 Ishiguro Hiroshi), director of the Intelligent Robotics Laboratory, part of the Department of Adaptive Machine Systems (知能・機能創成工学専攻) at Osaka University, Japan, developed the actroid, a humanoid robot and android with a lifelike appearance and visible behavior such as facial movements.

"In robot development, Professor Ishiguro concentrates on the idea of making a robot that is as similar as possible to a live human being; at the unveiling in July 2005 of the "female" android named Repliee Q1Expo, he was quoted as saying 'I have developed many robots before, but I soon realised the importance of its appearance. A human-like appearance gives a robot a strong feeling of presence. ... Repliee Q1Expo can interact with people. It can respond to people touching it. It's very satisfying, although we obviously have a long way to go yet.' In his opinion, it may be possible to build an android that is indistinguishable from a human, at least during a brief encounter" (Wikipedia article on Hiroshi Ishiguro, accessed 03-05-2011).

In 2007 Ishiguro described an android that resembles himself, called the Geminoid, but dubbed by Wired (April 2007) his 'Creepy Robot Doppelganger'. 


2005 – 2010

Kosmix.com 2005

"With the vision of connecting people to information that makes a difference in their lives," in 2005 Venky Harinarayan and Anand Rajaraman founded Kosmix.com in Mountain View, California.


The CNN/ YouTube Presidential Debates: The First Internet to Television Debate Partnership July 23 – November 28, 2007

The CNN/YouTube presidential debates, the first web-to-television debate partnership, were a series of televised debates in which United States presidential hopefuls fielded questions submitted through the video sharing site YouTube. They were conceived by David Bohrman, then Washington Bureau Chief of CNN, and Steve Grove, then Head of News and Politics at YouTube. YouTube was then a new platform on the political scene, having risen to prominence in the 2006 midterm elections after Senator George Allen's "Macaca" controversy, in which the Senator was captured on video calling a campaign worker for his opponent Jim Webb a "macaca"; the video went viral on YouTube and damaged a campaign that narrowly lost at the polls. Media companies were looking for new ways to harness the possibilities of web video, and YouTube was looking for opportunities to give its users access to the national political stage, so Bohrman and Grove formed a unique partnership in the CNN/YouTube Debates. The Democratic Party installment took place in Charleston, South Carolina and aired on July 23, 2007. The Republican Party installment took place in St. Petersburg, Florida and aired on November 28, 2007.


Anthony Grafton's "Codex in Crisis" November 5, 2007 – 2008

On November 5, 2007 historian Anthony Grafton of Princeton University published "Future Reading. Digitization and its Discontents" in The New Yorker Magazine. This was revised and reissued as a small book entitled Codex in Crisis (2008). It was reprinted as the last chapter in Grafton's, Worlds Made by Words. Scholarship and Community in the Modern West (2009).

On December 18, 2008 Grafton spoke about Codex in Crisis at Google, Mountain View, in the Authors@Google series.


The First iPhone and iPad Apps for the Visually Impaired 2009 – 2010

Because of the convenience of carrying smartphones it was probably inevitable that their features would be applied to support the visually impaired. iBlink Radio, introduced in July 2010 by Serotek Corporation of Minneapolis, Minnesota, calls itself the first iOS application for the visually impaired. It provides access to radio stations, podcasts and reading services of special interest to blind and visually impaired persons, as well as their friends, family, caregivers and those wanting to know what life is like without eyesight.

SayText, also introduced in 2010 by Haave, Inc. of Vantaa, Finland, reads out loud text that is photographed by a cell phone camera.

VisionHunt, by VI Scientific of Nicosia, Cyprus, introduced in 2009, is a vision aid tool for the blind and the visually impaired that uses the phone’s camera to detect colors, paper money and light sources. VisionHunt identifies about 30 colors. It also detects 1, 5, 10, 20, 50 US Dollar bills. Finally, VisionHunt detects sources of light, such as switched-on lamps or televisions. VisionHunt is fully accessible to the blind and the visually impaired through Voice Over or Zoom.

Numerous other apps for the visually impaired were introduced after the above three.


2010 – 2012

The First Fragment of Contemporary Classical Music Composed by a Computer in its Own Style October 15, 2010

On October 15, 2010 the Iamus computer cluster, developed by Francisco Vico and associates at the Universidad de Málaga using the Melomics system, composed Opus One. This composition was arguably the first fragment of professional contemporary classical music ever composed by a computer in its own style, rather than by emulating the style of existing composers.

"Melomics (derived from the genomics of melodies) is a proprietary computational system for the automatic composition of music (with no human intervention), based on bioinspired methods and commercialized by Melomics Media" (Wikipedia article on Melomics, accessed 11-13-2013).


Voice-Activated Translation on Cell Phones January 12, 2011

On January 12, 2011 Google introduced Conversation Mode, an improved feature of Google Translate for Android:

"This is a new interface within Google Translate that’s optimized to allow you to communicate fluidly with a nearby person in another language. You may have seen an early demo a few months ago, and today you can try it yourself on your Android device.  

"Currently, you can only use Conversation Mode when translating between English and Spanish. In conversation mode, simply press the microphone for your language and start speaking. Google Translate will translate your speech and read the translation out loud. Your conversation partner can then respond in their language, and you’ll hear the translation spoken back to you. Because this technology is still in alpha, factors like regional accents, background noise or rapid speech may make it difficult to understand what you’re saying. Even with these caveats, we’re excited about the future promise of this technology to be able to help people connect across languages" (http://googleblog.blogspot.com/2011/01/new-look-for-google-translate-for.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed:+blogspot/MKuf+(Official+Google+Blog), accessed 01-14-2011).

View Map + Bookmark Entry

How Search Engines Have Become a Primary Form of External or Transactive Memory July 14, 2011

Betsy Sparrow of Columbia University, Jenny Liu, and Daniel M. Wegner of Harvard University published "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips," published online 14 July 2011, Science 333, no. 6043 (5 August 2011): 776-778. DOI: 10.1126/science.1207745.

The abstract read:
"The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves."

First two paragraphs (footnotes removed):

"In a development that would have seemed extraordinary just over a decade ago, many of us have constant access to information. If we need to find out the score of a ball game, learn how to perform a complicated statistical test, or simply remember the name of the actress in the classic movie we are viewing, we need only turn to our laptops, tablets, or smartphones and we can find the answers immediately. It has become so commonplace to look up the answer to any question the moment it occurs that it can feel like going through withdrawal when we can’t find out something immediately. We are seldom offline unless by choice, and it is hard to remember how we found information before the Internet became a ubiquitous presence in our lives. The Internet, with its search engines such as Google and databases such as IMDB and the information stored there, has become an external memory source that we can access at any time.

"Storing information externally is nothing particularly novel, even before the advent of computers. In any long-term relationship, a team work environment, or other ongoing group, people typically develop a group or transactive memory (1), a combination of memory stores held directly by individuals and the memory stores they can access because they know someone who knows that information. Like linked computers that can address each other’s memories, people in dyads or groups form transactive memory systems (2, 3). The present research explores whether having online access to search engines, databases, and the like, has become a primary transactive memory source in itself. We investigate whether the Internet has become an external memory system that is primed by the need to acquire information. If asked the question whether there are any countries with only one color in their flag, for example, do we think about flags or immediately think to go online to find out? Our research then tested whether, once information has been accessed, our internal encoding is increased for where the information is to be found rather than for the information itself."

An article by Alexander Bloom published in Harvard Magazine in November 2011 had this to say about the research:

"Wegner, the senior author of the study, believes the new findings show that the Internet has become part of a transactive memory source, a method by which our brains compartmentalize information. First hypothesized by Wegner in 1985, transactive memory exists in many forms, as when a husband relies on his wife to remember a relative’s birthday. '[It is] this whole network of memory where you don’t have to remember everything in the world yourself,' he says. 'You just have to remember who knows it.' Now computers and technology as well are becoming virtual extensions of our memory. The idea validates habits already forming in our daily lives. Cell phones have become the primary location for phone numbers. GPS devices in cars remove the need to memorize directions. Wegner points out that we never have to stretch our memories too far to remember the name of an obscure movie actor or the capital of Kyrgyzstan—we just type our questions into Google. 'We become part of the Internet in a way,' he says. 'We become part of the system and we end up trusting it.' "(http://harvardmagazine.com/2011/11/how-the-web-affects-memory, accessed 12-11-2011).

View Map + Bookmark Entry

The First Complete Album Composed Solely by Computer and Recorded by Human Musicians September 2011 – July 2, 2012

In September 2011 the Iamus computer cluster developed by Francisco Vico and associates at the Universidad de Málaga produced a composition entitled Hello World! This classical clarinet-violin-piano trio was described as the first full-scale work entirely composed by a computer without any human intervention, and automatically written as a fully-fledged score in conventional musical notation.

Several months later, on July 2, 2012, four compositions by the Iamus computer premiered and were broadcast live from the School of Computer Science at Universidad de Málaga, as one of the events of the Alan Turing Year. The compositions performed at this event were later recorded by the London Symphony Orchestra and issued in 2012 as the album entitled Iamus. This compact disc was characterized by the New Scientist as the "first complete album to be composed solely by a computer and recorded by human musicians."

Commenting on the authenticity of the music, Stephen Smoliar, critic of classical music at The San Francisco Examiner, wrote in a piece entitled "Thoughts about Iamus and the composition of music by computer," Examiner.com, January 4, 2013:

"However, where listening is concerned, the method leading to the notation is secondary. What is primary is the act of making the music itself engaged by the performers and how the listener responds to what those performers do. Put another way, the music is in the performance, rather than in the composition without which that performance would not take place. The issue is not, as Smith seems to imply at the end of her BBC report, whether 'a computer could become a more prodigious composer than Mozart, Haydn, Brahms and Beethoven combined.' The computer is only prodigious at creating more documents, and what is most interesting about the documents generated by Iamus is their capacity to challenge the creative talents of performing musicians."

View Map + Bookmark Entry

Amazon Introduces the Kindle Fire September 28 – November 14, 2011

On September 28, 2011 Amazon announced the Kindle Fire, a tablet computer version of Amazon.com's Kindle e-book reader, featuring a 7" color multi-touch IPS display and running a forked version of Google's Android operating system. The device, which included access to the Amazon Appstore, streaming movies and TV shows, and Kindle e-books, was released on November 14, 2011 for $199.

In January 2012 Amazon advertised that there were 19 million movies, TV shows, songs, magazines, and books available for the Kindle Fire.

View Map + Bookmark Entry

"Zero to Eight: Children's Media Use in America" October 25, 2011

On October 25, 2011 Common Sense Media of San Francisco issued Zero to Eight: Children's Media Use in America by Vicky Rideout. Some of the key findings of their report were:

"Even very young children are frequent digital media users.

"MOBILE MEDIA. Half (52%) of all children now have access to one of the newer mobile devices at home: either a smartphone (41%) a video iPod (21%), or an iPad or ther tablet device (8%). More than a quarter (29%) of all parents have downloaded 'apps'. . . for their children to use. And more than a third (36%) of children have ever used one of these new mobile devices, including 10% of 0-to 1-year-olds, 39% of 2-to 4-year-olds, and 52% of 5- to 8-year-olds. In a typical day 11% of all 0-to 8 year-year olds use a cell phone, iPod, iPad, or similar device for media consumption and those who do spend an average of :43 doing so.  

"COMPUTERS. Computer use is pervasive among very young children, with half (53%) of all 2- to 4-year-olds having ever used a computer, and nine out of ten (90%) 5- to 8-year-olds having done so. For many of these children, computer use is a regular occurrence: 22% of 5 to 8-year olds use a computer at least once a day and another 46% use it at least once a week. Even among 2- to 4-year-olds, 12% use a computer every day, with another 24% doing so at least once a week. Among all children who have used a computer, the average age of first use was just 3 1/2 years old.

"VIDEO GAMES. Playing console video games is also popular among these young children: Half (51%) of all 0- to 8-year-olds have ever played a console video game, including 44% of 2- to 4-year-olds and
81% of 5- to 8-year-olds. Among those who have played console video games, the average age at first use was just under 4 years old (3 years and 11 months). Among 5- to 8-year-olds, 17% play console
video games at least once a day, and another 36% play them at least once a week. . . .

"Children under 2 spend twice as much time watching
TV and videos as they do reading books.

"In a typical day, 47% of babies and toddlers ages 0 through 1 watch TV or DVDs, and those who do watch spend an average of nearly two hours (1:54) doing so. This is an average of :53 among all children
in this age group, compared to an average of :23 a day reading or being read to. Nearly one in three (30%) has a TV in their bedroom. In 2005, among children ages 6-23 months, 19% had a TV in their
bedroom. Looking just at 6- to 23-month-olds in the current study, 29% have a TV in their bedroom. . . .

"Media use varies significantly by race and socio-economic status, but not much by gender.

"RACE AND SOCIO-ECONOMIC STATUS. African- American children spend an average of 4:27 a day with media (including music, reading, and screen media), compared to 2:51 among white children and 3:28 among Hispanics. Children from higher- income families or with more highly educated parents spend less time with media than other children do (for example, 2:47 a day among higher-income children vs. 3:34 among lower-income youth). Twenty percent of children in upper income homes have a TV in their bedroom, compared to 64% of those from lower- income homes. 

"GENDER. The only substantial difference between boys’ and girls’ media use is in console video games. Boys are more likely to have ever played a console video game than girls are (56% vs. 46%), to have a video game player in their bedroom (14% vs. 7%), and to play console video games every day (14% vs. 5%). Boys average :16 a day playing console games, compared to an average of :04 a day for girls."

View Map + Bookmark Entry

2012 – 2016

Apple Introduces iBooks 2, iBooks Author, and iTunes U January 19, 2012

On January 19, 2012 Apple released iBooks 2, a free app supporting digital textbooks that could display interactive diagrams, audio and video. At a news conference at the Guggenheim Museum in New York the company demonstrated a biology textbook featuring 3-D models, searchable text, photo galleries and flash cards for studying. Apple said high school textbooks from its initial publishing partners, including Pearson, McGraw-Hill and Houghton Mifflin Harcourt, would cost $15 or less.

"Apple also announced a free tool called iBooks Author, a piece of Macintosh software that allows people to make these interactive textbooks. The tool includes templates designed by Apple, which publishers and authors can customize to suit their content. It requires no programming knowledge and will be available Thursday. 

"The company also unveiled the iTunes U app for the iPad, which allows teachers to build an interactive syllabus for their coursework. Students can load the syllabus in iTunes U and, for example, tap to open an electronic textbook and go directly to the assigned chapter. Teachers can use iTunes U to create full online courses with podcasts, video, documents and books" (http://bits.blogs.nytimes.com/2012/01/19/apple-unveils-tools-for-digital-textbooks/?nl=technology&emc=cta4, accessed 01-19-2012). 

View Map + Bookmark Entry

Windows 8, With Touch Screen Features, is Released October 26, 2012

On October 26, 2012 Microsoft released the Windows 8 operating system to the general public. Development of Windows 8 started in 2009, before the release of its predecessor, Windows 7, the last iteration of Windows designed primarily for desktop computers. Windows 8 introduced very significant changes, primarily oriented toward touch-screen mobile devices such as tablets and cell phones, and was designed:

"to rival other mobile operating systems like Android and iOS, taking advantage of new or emerging technologies like USB 3.0, UEFI firmware, near field communications, cloud computing and the low-power ARM architecture, new security features such as malware filtering, built-in antivirus capabilities, a new installation process optimized for digital distribution, and support for secure boot (a UEFI feature which allows operating systems to be digitally signed to prevent malware from altering the boot process), the ability to synchronize certain apps and settings between multiple devices, along with other changes and performance improvements. Windows 8 also introduces a new shell and user interface based on Microsoft's "Metro" design language, featuring a new Start screen with a grid of dynamically updating tiles to represent applications, a new app platform with an emphasis on touchscreen input, and the new Windows Store to obtain and/or purchase applications to run on the operating system" (Wikipedia article on Windows 8, accessed 12-14-2012).

On December 13, 2012 MIT's technologyreview.com published an interview with Julie Larson-Green, head of product development at Microsoft, in which Larson-Green explained why Microsoft decided it was necessary to radically rethink and redesign the operating system used by 1.2 billion people:

Why was it necessary to make such broad changes in Windows 8?

"When Windows was first created 25 years ago, the assumptions about the world and what computing could do and how people were going to use it were completely different. It was at a desk, with a monitor. Before Windows 8 the goal was to launch into a window, and then you put that window away and you got another one. But with Windows 8, all the different things that you might want to do are there at a glance with the Live Tiles. Instead of having to find many little rocks to look underneath, you see a kind of dashboard of everything that’s going on and everything you care about all at once. It puts you closer to what you’re trying to get done. 

Windows 8 is clearly designed with touch in mind, and many new Windows 8 PCs have touch screens. Why is touch so important? 

"It’s a very natural way to interact. If you get a laptop with a touch screen, your brain clicks in and you just start touching what makes it faster for you. You’ll use the mouse and keyboard, but even on the regular desktop you’ll find yourself reaching up and doing the things that are faster than moving the mouse around. It’s not like using the mouse, which is more like puppeteering than direct manipulation. 

In the future, are all PCs going to have touch screens? 

"For cost considerations there might always be some computers without touch, but I believe that the vast majority will. We’re seeing that the computers with touch are the fastest-selling right now. I can’t imagine a computer without touch anymore. Once you’ve experienced it, it’s really hard to go back.

Did you take that approach in Windows 8 as a response to the popularity of mobile devices running iOS and Android? 

"We started planning Windows 8 in June of 2009, before we shipped Windows 7, and the iPad was only a rumor at that point. I only saw the iPad after we had this design ready to go. We were excited. A lot of things they were doing about mobile and touch were similar to what we’d been thinking. We [also] had differences. We wanted not just static icons on the desktop but Live Tiles to be a dashboard for your life; we wanted you to be able to do things in context and share across apps; we believed that multitasking is important and that people can do two things at one time. 

Can touch coexist with a keyboard and mouse interface? Some people have said it doesn’t feel right to have both the newer, touch-centric elements and the old-style desktop in Windows 8.

"It was a very definite choice to have both environments. A finger’s never going to replace the precision of a mouse. It’s always going to be easier to type on a keyboard than it is on glass. We didn’t want you to have to make a choice. Some people have said that it’s jarring, but over time we don’t hear that. It’s just getting used to something that’s different. Nothing was homogenous to start with, when you were in the browser it looked different than when you were in Excel."

View Map + Bookmark Entry

An Innovative Interactive Museum Gallery Space with the Largest Multi-Touch Screen in the United States January 21, 2013

On January 21, 2013 The Cleveland Museum of Art opened Gallery One, an interactive gallery "that blends art, technology and interpretation to inspire visitors to explore the museum’s renowned collections. This revolutionary space features the largest multi-touch screen in the United States, which displays images of over 3,500 objects from the museum’s world-renowned permanent collection. This 40-foot Collection Wall allows visitors to shape their own tours of the museum and to discover the full breadth of the collections on view throughout the museum’s galleries. Throughout the space, original works of art and digital interactives engage visitors in new ways, putting curiosity, imagination and creativity at the heart of their museum experience. Innovative user-interface design and cutting-edge hardware developed exclusively for Gallery One break new ground in art museum interpretation, design and technology."

View Map + Bookmark Entry

Drone Pilots Experience Stress Possibly Greater than Actual Combat Pilots February 23, 2013

"In the first study of its kind, researchers with the Defense Department have found that pilots of drone aircraft experience mental health problems like depression, anxiety and post-traumatic stress at the same rate as pilots of manned aircraft who are deployed to Iraq or Afghanistan.

"The study affirms a growing body of research finding health hazards even for those piloting machines from bases far from actual combat zones.  

“ 'Though it might be thousands of miles from the battlefield, this work still involves tough stressors and has tough consequences for those crews,' said Peter W. Singer, a scholar at the Brookings Institution who has written extensively about drones. He was not involved in the new research.  

"That study, by the Armed Forces Health Surveillance Center, which analyzes health trends among military personnel, did not try to explain the sources of mental health problems among drone pilots.  

"But Air Force officials and independent experts have suggested several potential causes, among them witnessing combat violence on live video feeds, working in isolation or under inflexible shift hours, juggling the simultaneous demands of home life with combat operations and dealing with intense stress because of crew shortages. 'Remotely piloted aircraft pilots may stare at the same piece of ground for days,' said Jean Lin Otto, an epidemiologist who was a co-author of the study. 'They witness the carnage. Manned aircraft pilots don’t do that. They get out of there as soon as possible.'  

"Dr. Otto said she had begun the study expecting that drone pilots would actually have a higher rate of mental health problems because of the unique pressures of their job.  

"Since 2008, the number of pilots of remotely piloted aircraft — the Air Force’s preferred term for drones — has grown fourfold, to nearly 1,300. The Air Force is now training more pilots for its drones than for its fighter jets and bombers combined. And by 2015, it expects to have more drone pilots than bomber pilots, although fighter pilots will remain a larger group.

"Those figures do not include drones operated by the C.I.A. in counterterrorism operations over Pakistan, Yemen and other countries" (http://www.nytimes.com/2013/02/23/us/drone-pilots-found-to-get-stress-disorders-much-as-those-in-combat-do.html?hpw&_r=0, accessed 02-23-2013).

View Map + Bookmark Entry

Smartphone Interactive Reading Device Will Track Eyes to Scroll Pages March 4, 2013

A much-anticipated new smartphone from Samsung, the South Korean multinational conglomerate headquartered in Samsung Town, Seoul, was reported to incorporate a radically new interactive reading feature:

"Samsung’s next big smartphone, to be introduced this month, will have a strong focus on software. A person who has tried the phone, called the Galaxy S IV, described one feature as particularly new and exciting: Eye scrolling.

"The phone will track a user’s eyes to determine where to scroll, said a Samsung employee who spoke on condition of anonymity because he was not authorized to speak to the news media. For example, when users read articles and their eyes reach the bottom of the page, the software will automatically scroll down to reveal the next paragraphs of text.

"The source would not explain what technology was being used to track eye movements, nor did he say whether the feature would be demonstrated at the Galaxy S IV press conference, which will be held in New York on March 14. The Samsung employee said that over all, the software features of the new phone outweighed the importance of the hardware.

"Indeed, Samsung in January filed for a trademark in Europe for the name “Eye Scroll” (No. 011510674). It filed for the “Samsung Eye Scroll” trademark in the United States in February, where it described the service as “Computer application software having a feature of sensing eye movements and scrolling displays of mobile devices, namely, mobile phones, smartphones and tablet computers according to eye movements; digital cameras; mobile telephones; smartphones; tablet computers" (http://bits.blogs.nytimes.com/2013/03/04/samsungs-new-smartphone-will-track-eyes-to-scroll-pages/?hp, accessed 03-05-2013).
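The behavior the article describes — when the reader's gaze reaches the bottom of the page, the software scrolls ahead to reveal the next paragraphs — reduces to a simple threshold check. Samsung disclosed no eye-tracking API, so the function below is a purely illustrative sketch: the gaze coordinate, threshold, and scroll step are all invented.

```python
# A hypothetical sketch of gaze-driven scrolling: if the gaze point is in
# the bottom band of the viewport, advance the scroll offset (clamped so
# we never scroll past the end of the page). All parameters are invented.

def auto_scroll(scroll_top, viewport_h, page_h, gaze_y, threshold=0.9):
    """Return the new scroll offset given a gaze y-coordinate (in pixels
    from the top of the viewport)."""
    if gaze_y >= viewport_h * threshold:           # eyes reached the bottom
        step = int(viewport_h * 0.5)               # reveal next paragraphs
        return min(scroll_top + step, page_h - viewport_h)
    return scroll_top                              # gaze still mid-page
```

A real implementation would also need to smooth the gaze signal, since raw eye-tracker coordinates are noisy, but the core decision is this threshold test.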

When I wrote this entry in March 2013 the Wikipedia article on Samsung stated that Samsung Electronics was the "world's largest information technology company" measured by 2012 revenues. It had retained the number one position since 2009. It was also the world's largest producer of mobile phones, and the world's second largest semiconductor producer after Intel Corporation.

View Map + Bookmark Entry

Google Introduces "Google Glass" Explorer Edition April 15, 2013

On April 15, 2013 Google introduced Google Glass, an optical head-mounted display (OHMD) wearable computer. The augmented reality device displays information in a smartphone-like hands-free format. Wearers communicate with the Internet via natural language voice commands. Google started selling Google Glass to qualified "Glass Explorers" in the US on April 15, 2013 for a limited period for $1,500, before it became available to the public on May 15, 2014 for the same price.

View Map + Bookmark Entry

"As We May Type": Authoring Tools as "Machines for New Thought." October 16, 2013

On October 16, 2013 writer and computer programmer Paul Ford published in MIT Technology Review an article entitled "As We May Type." The subheading of this article was "New outliners and authoring tools are machines for new thoughts." The article discussed how new outlining and writing tools affect the human creative writing process. From it I quote a couple of paragraphs:

"Outlines are a kind of mental tree. Say level 1 is a line of text. Then level 1.1 would be subordinate to 1, and 1.1.1 subordinate to 1.1; 1.2, like 1.1, is subordinate to the first line. And so forth. Of course, outlines existed before software. (The philosopher Ludwig Wittgenstein composed an entire book, the Tractatus Logico-Philosophicus, as an outline.)

"But with an outlining program, you don’t need a clumsy numbering system, because the computer does the bookkeeping for you. You can build hierarchies, ideas branching off ideas, with words like leaves. You can hide parts of outlines as you’re working, to keep the document manageable. And on a computer, any element can be exported to another program for another use. Items can become sections in a PhD thesis—or slides in a presentation, or blog posts. Or you could take your outline tree and drop it inside another outline, building a forest."


View Map + Bookmark Entry

Zero to Eight: Children's Media Use in America 2013 October 28, 2013

On October 28, 2013 Common Sense Media of San Francisco issued the two-year follow-up to its October 2011 study, Zero to Eight: Children's Media Use in America 2013 by Vicky Rideout. Key findings in the 2013 report were:

"Children’s access to mobile media devices is dramatically higher than it was two years ago.

"Among families with children age 8 and under, there has been a five-fold increase in ownership of tablet devices such as iPads, from 8% of all families in 2011 to 40% in 2013. The percent of children with access to some type of 'smart' mobile device at home (e.g., smartphone, tablet) has jumped from half (52%) to three-quarters (75%) of all children in just two years.

"Almost twice as many children have used mobile media compared to two years ago, and the average amount of time children spend using mobile devices has tripled.

"Seventy-two percent of children age 8 and under have used a mobile device for some type of media activity such as playing games, watching videos, or using apps, up from 38% in 2011. In fact, today, 38% of children under 2 have used a mobile device for media (compared to 10% two years ago). The percent of children who use mobile devices on a daily basis – at least once a day or more – has more than doubled, from 8% to 17%. The amount of time spent using these devices in a typical day has tripled, from an average of :05 a day among all children in 2011 up to :15 a day in 2013. [Throughout the report, times are presented in hours:minutes format. For example, “1:46” indicates one hour and 46 minutes.] The difference in the average time spent with mobile devices is due to two factors: expanded access, and the fact that those who use them do so for longer periods of time. Among those who use a mobile device in a typical day, the average went from :43 in 2011 to 1:07 in 2013."

View Map + Bookmark Entry

Monkeys Use Brain-Machine Interface to Move Two Virtual Arms with their Brain Activity November 6, 2013

In a study led by neuroscientist Miguel A. L. Nicolelis and the Nicolelis Lab at Duke University, monkeys learned to control the movement of both arms on an avatar using just their brain activity. The findings, published on November 6, 2013 in Science Translational Medicine, advanced efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients, and raised the hope that patients might eventually be able to use brain-machine interfaces (BMIs) to control two arms. To enable the monkeys to control two virtual arms, researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains, the largest number of neurons recorded and reported to date.

"While the monkeys were moving two hands, the researchers saw distinct patterns of neuronal activity that differed from the activity seen when a monkey moved each hand separately. Through such research on brain–machine interfaces, scientists may not only develop important medical devices for people with movement disorders, but they may also learn about the complex neural circuits that control behavior....

“Simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal population would do when both arms were engaged together in a bimanual task,” said Nicolelis in a released statement. “This finding points to an emergent brain property – a non-linear summation – for when both hands are engaged at once” (www.technologyreview.com/view/521471/monkeys-drive-two-virtual-arms-with-their-thoughts/, accessed 11-09-2013).

P. J. Ifft, S. Shokur, Z. Li, M. A. Lebedev, M. A. L. Nicolelis, "A Brain-Machine Interface Enables Bimanual Arm Movements in Monkeys," Sci. Transl. Med. 5, 210ra154 (2013).

View Map + Bookmark Entry