
Computers & Society Timeline


1600 – 1650

Galileo Presents One of the First Records of Litigation over an Invention 1607

In 1607 Galileo Galilei issued Difesa di Galileo Galilei ... contro alle calumnie & imposture di Baldessar Capra from the press of Tomaso Baglioni in Venice. This booklet published the transcript of the trial resulting from the lawsuit that Galileo successfully brought against Baldessar Capra for copying the proportional and military compass that Galileo had invented. It was among the first records of litigation over an invention, if not the very first, and most certainly the first litigation in the history of computing.


1750 – 1800

Computing the Nautical Almanac, Called the "Seaman's Bible" 1766

In 1766 the British Government sanctioned Nevil Maskelyne, the Astronomer Royal, to produce each year a set of navigational tables, to be called the Nautical Almanac. This was the first permanent mathematical table-making project in the world.

Known as the "Seaman's Bible," the Nautical Almanacs, first published in 1767, greatly improved the accuracy of navigation. However, the accuracy of the tables in the Nautical Almanacs was dependent upon the accuracy of the human computers who produced them, working by hand and separated geographically in an early example of organized but distant collaboration.

By the early nineteenth century, the time of Charles Babbage, these tables had become notorious for their errors, providing Babbage with the incentive to develop mechanical systems, which he called calculating engines, to improve their accuracy.

(This entry was last revised on 05-02-2016.)


1800 – 1850

The Thomas Arithmometer, the First Commercially Produced Mechanical Calculator 1820

Charles Xavier Thomas' Arithmometer.

Charles Xavier Thomas.

In 1820 Charles Xavier Thomas of Alsace, an entrepreneur in the insurance industry, invented the arithmometer, the first commercially produced adding machine, presumably to speed up, and make more accurate, the enormous amount of daily computation that insurance companies required. Remarkably, according to Wikipedia, Thomas received almost immediate acknowledgement for this invention, as he was made Chevalier of the Legion of Honor only one year later, in 1821. At this time he changed his name to Charles Xavier Thomas de Colmar, later abbreviated to Thomas de Colmar.

"Initially Thomas spent all of his time and energy on his insurance business, therefore there is a hiatus of more than thirty years in between the first model of the Arithmometer introduced in 1820 and its true commercialization in 1852. By the time of his death in 1870, his manufacturing facility had built around 1,000 Arithmometers, making it the first mass produced mechanical calculator in the world, and at the time, the only mechanical calculator reliable and dependable enough to be used in places like government agencies, banks, insurance companies and observatories just to name a few. The manufacturing of the Arithmometer went on for another 40 years until around 1914" (Wikipedia article on Charles Xavier Thomas, accessed 10-10-2011).

The success of the Arithmometer, which to a certain extent paralleled Thomas's success in the insurance industry, was in complete contrast to the problems that Charles Babbage faced in producing and gaining any acceptance for his vastly more sophisticated, complex, ambitious and expensive calculating engines during roughly the same period. Thomas produced an affordable product that succeeded in speeding up basic arithmetical operations essential to the insurance industry, while Babbage's scientific and engineering goals, initially of making mathematical tables more accurate and later of automating mathematical operations in general, did not attempt to meet a recognized industrial demand.

"The [Arithmometer] mechanism has three parts, concerned with setting, counting, and recording respectively. Any number up to 999,999 may be set by moving the pointers to the numbers 0 to 9 engraved next to the six slots on the fixed cover plate. The movement of any of these pointers slides a small pinion with ten teeth along a square axle, underneath and to the left of which is a Leibniz stepped wheel.  

"The Leibniz wheel, a cylinder having nine teeth of increasing length, is driven from the main shaft by means of a bevel wheel, and the small pinion is thus rotated by as many teeth as the cylinder bears in the plane corresponding to the digit set. This amount of rotation is transferred through one of a pair of bevel wheels, carried on a sleeve on the same axis, to the ‘results’ figure wheel on the back row on the hinged plate. This plate also carried the figure wheel recording the number of turns of the driving crank for each position of the hinged plate. The pair of bevel wheels is placed in proper gear by setting a lever at the top left-hand cover to either "Addition and Multiplication" or "Subtraction and Division." The ‘results’ figure wheel is thereby rotated anti-clockwise or clockwise respectively.  

"Use. Multiplying 2432 by 598 may be performed as follows: Lift the hinged plate, turn and release the two milled knobs to bring all the figure wheels to show zero; lower the hinged plate in its position to the extreme left; set the number 2432 on the four slots on the fixed plate; set the lever on the left to "multiplication" and turn the handle eight times; lift the hinged plate, slide it one step to the right, and lower it into position; turn the handle nine times; step the plate one point to the right again and the turn the handle five times. The product 1,454,336 will then appear on the top row, and the multiplier 598 on the next row of figures" (From Gordon Bell's website, accessed 10-12-2011).


1850 – 1875

George Parker Bidder, One of the Most Remarkable Human Computers 1856

In 1856 George Parker Bidder, an engineer and one of the most remarkable human computers of all time, published his paper on Mental Calculation. (See Reading 3.1)


Babbage's "Passages from the Life of a Philosopher" 1864

In 1864 English mathematician, engineer and computer designer Charles Babbage published his autobiography, Passages from the Life of a Philosopher, in which he presented the most detailed descriptions of his Difference and Analytical Engines published during his lifetime, and wrote about his struggles to have his highly futuristic inventions appreciated by society.

In the wording of his title Babbage used the word philosopher in its now obsolete sense of what we call a "scientist." The word scientist, coined by William Whewell, was not widely used until the end of the 19th or early 20th century. (See Reading 6.2.)


1875 – 1900

The Earliest Exhibition Exclusively of Scientific Instruments 1876

The earliest international exposition exclusively of scientific instruments was held at the South Kensington Museum, London in 1876.  As a record of the exhibition the South Kensington Museum published a Handbook to the Special Loan Collection of Scientific Apparatus 1876 (London 1876). The section on calculating machines on pages 23-34 was written by H. J. S. Smith, and included those of Babbage, Scheutz, Thomas de Colmar, and Grohmann. None were illustrated. James Clerk Maxwell contributed two chapters in this guide, Peter Guthrie Tait wrote one, and Thomas Henry Huxley wrote one.  A French translation of this work was published in Paris also in 1876.

The science collections of the South Kensington Museum later became the Science Museum, London.

Hook & Norman, Origins of Cyberspace 369.


300 Clerks Reviewing 2,500,000 Insurance Policies with 24 Calculators 1877

In 1877 it took three hundred clerks working at The Prudential six months to review its 2,500,000 insurance policies, with the assistance of twenty-four Charles Xavier Thomas de Colmar arithmometers.


Publication of the Tables of de Prony 1891

In 1891 the logarithmic and trigonometric tables of Gaspard Riche de Prony, compiled in 19 volumes of manuscript, mostly by hairdressers unemployed after the French Revolution, were finally published in an abbreviated form in one volume. They were the most monumental work of calculation ever carried out by human computers.

France. Service Geographique de l'Armee. Tables des logarithmes a huit decimales des nombres entiers de 1 a 120000 et des sinus et tangentes de dix secondes en dix secondes d'arc dans le systeme de la division centesimale du quadrant. Paris: Imprimerie Nationale, 1891.

Hook & Norman, Origins of Cyberspace (2001) no. 301.


1920 – 1930

Introduction of the Word "Robot" 1920

In 1920 Czech novelist, playwright, journalist and translator Karel Capek published R. U. R. (Rossum’s Universal Robots) in Prague. This play, written in Czech except for the title, introduced the word “robot” and explored the issue of whether worker-machines would replace people.


1930 – 1940

The Social Security Program Creates a Giant Data-Processing Challenge 1935 – 1936

The Social Security Act of 1935 required the U. S. government to keep continuous records on the employment of 26 million individuals.

The first  Social Security Numbers (SSNs) were issued by the Social Security Administration in November 1936 as part of the New Deal Social Security program.

"Within three months, 25 million numbers were issued.

"Before 1986, people often did not have a Social Security number until the age of about 14, since they were used for income tracking purposes, and those under that age seldom had substantial income. In 1986, American taxation law was altered so that individuals over 5 years old without Social Security numbers could not be successfully claimed as dependents on tax returns; by 1990 the threshold was lowered to 1 year old, and was later abolished altogether." (Wikipedia article on Social Security Number, accessed 01-17-2010).


1940 – 1950

Key Events in the Development of the UNIVAC, the First Electronic Computer Widely Sold in the United States April 24, 1947 – November 4, 1952

On April 24, 1947 the Electronic Control Company (Pres Eckert and John Mauchly) in Philadelphia developed the tentative instruction code C-1 for what they called  “a Statistical EDVAC.” This was the earliest document on the programming of an electronic digital computer intended for commercial use. On May 24, 1947 they renamed the planned “Statistical EDVAC” the UNIVAC. About November 1947 Electronic Control Company  issued the first brochure advertising the UNIVAC —the first sales brochure ever issued for an electronic digital computer. A special characteristic of the brochure was that it did not show the product, since at this time the product was not yet fully conceptualized either in design or external appearance. 

On October 31, 1947 Eckert and Mauchly applied for a U.S. patent on the mercury acoustic delay-line electronic memory system. This was the "first device to gain widespread acceptance as a reliable computer memory system" (Hook & Norman, Origins of Cyberspace [2002] no. 1191). The patent, no. 2,629,827, was granted in 1953.

In 1948 a contract was drawn up between the renamed company, Eckert-Mauchly Computer Corporation, and the United States Census Bureau for the production of the UNIVAC.

As the first UNIVAC was being developed, in 1949 Betty Holberton developed the UNIVAC Instruction Code C-10. C-10 was the first software to allow a computer to be operated by keyboarded commands rather than dials and switches. It was also the first mnemonic code. Also in 1949, Grace Hopper left the Harvard Computation Laboratory to join Eckert-Mauchly Computer Corporation as a senior mathematician/programmer. In June 1949 John Mauchly conceived the Short Code—the first high-level programming language for an electronic computer—to be used with the BINAC. It was also the first interpreted language and the first assembly language. The Short Code first ran on UNIVAC I, serial 1, in 1950. [In 2005 no copies of the Short Code existed with dates earlier than 1952.]

UNIVAC I, serial 1, was signed over to the United States Census Bureau on March 31, 1951. The official dedication of the machine at the government offices occurred on June 14, 1951. Excluding the unique BINAC, the UNIVAC I was the first electronic computer to be commercially manufactured in the United States. Its development preceded the British Ferranti Mark 1; however, the British machine was actually delivered to its first customer one month earlier than the UNIVAC I.

Though the United States Census Bureau owned UNIVAC I, serial 1, the Eckert-Mauchly division of Remington Rand retained it in Philadelphia for sales demonstration purposes, and did not actually install it at government offices until twenty-one months later.

In 1951 magnetic tape was used to record computer data on the UNIVAC I with its UNISERVO tape drive. The UNISERVO was the first tape drive for a commercially sold computer.

It's "recording medium was a thin metal strip of ½″ wide(12.7 mm) nickel-plated phosphor bronze. Recording density was 128 characters per inch (198 micrometre/character) on eight tracks at a linear speed of 100 in/s (2.54 m/s), yielding a data rate of 12,800 characters per second. Of the eight tracks, six were data, one was a parity track, and one was a clock, or timing track. Making allowance for the empty space between tape blocks, the actual transfer rate was around 7,200 characters per second. A small reel of mylar tape provided separation from the metal tape and the read/write head" (Wikipedia article on Univac I, accessed 04-26-2009).

In 1952 Grace Hopper wrote the first compiler (A-0) for UNIVAC, and on October 24, 1952 the UNIVAC Short Code II was developed. This was the earliest extant version of a high-level programming language actually intended to be used on an electronic digital computer.

On November 4, 1952 UNIVAC I, serial 5, used by the CBS television network in New York City, successfully predicted the election of Dwight D. Eisenhower as president of the United States. This was the first time that millions of people (including me, then aged 7) saw and heard about an electronic computer. The computer, far too large and delicate to be moved, was actually in Eckert-Mauchly's corporate office in Philadelphia. What was televised by Walter Cronkite from CBS studios in New York was a dummy terminal connected by teletype to the machine in Philadelphia.

UNIVAC I, serial 5, was later installed at Lawrence Livermore Laboratories in Livermore, California.

♦ In 2010 journalist Ira Chinoy completed a dissertation on journalists' early encounters with computers as tools for news reporting, focusing on election-night forecasting in 1952. The dissertation, which also explored methods journalists used to cover elections in the age of print, was entitled Battle of the Brains: Election-Night Forecasting at the Dawn of the Computer Age.

In 1954 UNIVAC I, serial 8, was installed at General Electric Appliance Park, Louisville, Kentucky. Serial 8 was the first stored-program electronic computer sold to a nongovernmental customer in the United States. It ran the "first successful industrial payroll application."

This humorous promotional film for the Remington Rand UNIVAC computer features J. Presper Eckert and John Mauchly in leading roles. Produced in 1960, the film outlines the earlier history of computing leading to the development and application of the UNIVAC.

(This entry was last revised on 12-31-2014.)

Norbert Wiener Issues "Cybernetics", the First Widely Distributed Book on Electronic Computing 1948

"Use the word 'cybernetics', Norbert, because nobody knows what it means. This will always put you at an advantage in arguments."

— Widely quoted: attributed to Claude Shannon in a letter to Norbert Wiener in the 1940s.

 In 1948 mathematician Norbert Wiener at MIT published Cybernetics or Control and Communication in the Animal and the Machine, a widely circulated and influential book that applied theories of information and communication to both biological systems and machines. Computer-related words with the “cyber” prefix, including "cyberspace," originate from Wiener’s book. Cybernetics was also the first conventionally published book to discuss electronic digital computing. Writing as a mathematician rather than an engineer, Wiener’s discussion was theoretical rather than specific. Strangely the first edition of the book was published in English in Paris at the press of Hermann et Cie. The first American edition was printed offset from the French sheets and issued by John Wiley in New York, also in 1948. I have never seen an edition printed or published in England. 

Independently of Claude Shannon, Wiener conceived of communications engineering as a branch of statistical physics and applied this viewpoint to the concept of information. Wiener's chapter on "Time series, information, and communication" contained the first publication of Wiener's formula describing the probability density of continuous information. This was remarkably close to Shannon's formula dealing with discrete time published in A Mathematical Theory of Communication (1948). Cybernetics also contained a chapter on "Computing machines and the nervous system." This was a theoretical discussion, influenced by McCulloch and Pitts, of differences and similarities between information processing in the electronic computer and the human brain. It contained a discussion of the difference between human memory and the different computer memories then available. Tacked on at the end of Cybernetics were speculations by Wiener about building a chess-playing computer, predating Shannon's first paper on the topic.
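For readers who want the two formulas being compared, in modern notation (a standard statement of the results, not a quotation from either book), Shannon's entropy of a discrete source and the corresponding integral for a continuous probability density, essentially the quantity Wiener treated, are:

H = -\sum_i p_i \log_2 p_i \qquad \text{(discrete source, Shannon)}

h = -\int f(x) \log_2 f(x)\, dx \qquad \text{(continuous density, Wiener)}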

Cybernetics is a peculiar, rambling blend of popular and highly technical writing, ranging from history to philosophy, to mathematics, to information and communication theory, to computer science, and to biology. Reflecting the amazingly wide range of the author's interests, it represented an interdisciplinary approach to information systems both in biology and machines. It influenced a generation of scientists working in a wide range of disciplines. In it were the roots of various elements of computer science, which by the mid-1950s had broken off from cybernetics to form their own specialties. Among these separate disciplines were information theory, computer learning, and artificial intelligence.

It is probable that Wiley had Hermann et Cie supervise the typesetting because they specialized in books on mathematics.  Hermann printed the first edition by letterpress; the American edition was printed offset from the French sheets. Perhaps because the typesetting was done in France Wiener did not have the opportunity to read proofs carefully, as the first edition contained many typographical errors which were repeated in the American edition, and which remained uncorrected through the various printings of the American edition until a second edition was finally published by John Wiley and MIT Press in 1961. 

Though the book contained a lot of technical mathematics, and was not written for a popular audience, the first American edition went through at least 5 printings during 1948,  and several later printings, most of which were probably not read in their entirety by purchasers. Sales of Wiener's book were helped by reviews in wide circulation journals such as the review in TIME Magazine on December 27, 1948, entitled "In Man's Image." The reviewer used the word calculator to describe the machines; at this time the word computer was reserved for humans.

"Some modern calculators 'remember' by means of electrical impulses circulating for long periods around closed circuits. One kind of human memory is believed to depend on a similar system: groups of neurons connected in rings. The memory impulses go round & round and are called upon when needed. Some calculators use 'scanning' as in television. So does the brain. In place of the beam of electrons which scans a television tube, many physiologists believe, the brain has 'alpha waves': electrical surges, ten per second, which question the circulating memories.

"By copying the human brain, says Professor Wiener, man is learning how to build better calculating machines. And the more he learns about calculators, the better he understands the brain. The cyberneticists are like explorers pushing into a new country and finding that nature, by constructing the human brain, pioneered there before them.

"Psychotic Calculators. If calculators are like human brains, do they ever go insane? Indeed they do, says Professor Wiener. Certain forms of insanity in the brain are believed to be caused by circulating memories which have got out of hand. Memory impulses (of worry or fear) go round & round, refusing to be suppressed. They invade other neuron circuits and eventually occupy so much nerve tissue that the brain, absorbed in its worry, can think of nothing else.

"The more complicated calculating machines, says Professor Wiener, do this too. An electrical impulse, instead of going to its proper destination and quieting down dutifully, starts circulating lawlessly. It invades distant parts of the mechanism and sets the whole mass of electronic neurons moving in wild oscillations" (http://www.time.com/time/magazine/article/0,9171,886484-2,00.html, accessed 03-05-2009).

Presumably the commercial success of Cybernetics encouraged Wiley to publish Berkeley's Giant Brains, or Machines that Think in 1949.

♦ In October 2012 I offered for sale the copy of the first American printing of Cybernetics that Wiener inscribed to Jerry Wiesner, the head of the laboratory at MIT where Wiener conducted his research. This was the first inscribed copy of the first edition (either the French or American first) that I had ever seen on the market, though the occasional signed copy of the American edition did turn up. Having read our catalogue description of that item, my colleague Arthur Freeman emailed me this story pertinent to Wiener's habit of not inscribing books:

"Norbert, whom I grew up nearby (he visited our converted barn in Belmont, Mass., constantly to play frantic theoretical blackboard math with my father, an economist/statistician at MIT, which my mother, herself a bit better at pure math, would have to explain to him later), was a notorious cheapskate. His wife once persuaded him to invite some colleagues out for a beer at the Oxford Grill in Harvard Square, which he did, and after a fifteen-minute sipping session, he got up to go, and solemnly collected one dime each from each of his guests. So when *Cybernetics* appeared on the shelves of the Harvard Coop Bookstore, my father was surprised and flattered that Norbert wanted him to have an inscribed copy, and together they went to Coop, where Norbert duly picked one out, wrote in it, and carried it to the check-out counter--where he ceremoniously handed it over to my father to pay for. This was a great topic of family folklore. I wonder if Jerry Wiesner paid for his copy too?"


IBM's SSEC, the First Computer that Can Modify a Stored Program January 1948

In January 1948 IBM announced its first large-scale digital calculating machine, the Selective Sequence Electronic Calculator (SSEC). The SSEC was the first computer that could modify a stored program. It featured 12,000 vacuum tubes and 21,000 electromechanical relays.

“IBM's Selective Sequence Electronic Calculator (SSEC), built at IBM's Endicott facility under the direction of Columbia Professor Wallace Eckert and his Watson Scientific Computing Laboratory staff in 1946-47, . . . was moved to the new IBM Headquarters Building at 590 Madison Avenue in Manhattan, where it occupied the periphery of a room 60 feet long and 30 feet wide. . . . [Estimates of the] dimensions of its "U" shape [were] at 60 + 40 + 80 feet, 180 feet in all, (about half a football field!)”

 "Designed, built, and placed in operation in only two years, the SSEC contained 21,400 relays and 12,500 vacuum tubes. It could operate indefinitely under control of its modifiable program. On the average, it performed 14-by-14 decimal multiplication in one-fiftieth of a second, division in one-thirtieth of a second, and addition or subtraction on nineteen-digit numbers in one-thirty-five-hundredth of second... For more than four years, the SSEC fulfilled the wish Watson had expressed at its dedication: that it would serve humanity by solving important problems of science. It enabled Wallace Eckert to publish a lunar ephemeris ... of greater accuracy than previously available... the source of data used in man's first landing on the moon". "For each position of the moon, the operations required for calculating and checking results totaled 11,000 additions and subtractions, 9,000 multiplications, and 2,000 table look-ups. Each equation to be solved required the evaluation of about 1,600 terms — altogether an impressive amount of arithmetic which the SSEC could polish off in seven minutes for the benefit of the spectators" (http://www.columbia.edu/acis/history/ssec.html#sources, accessed 03-24-2010).

The SSEC remained sufficiently influential in the popular view of mainframes that it was the subject of a cartoon by Charles Addams published on the cover of The New Yorker magazine on February 11, 1961, in which the massive machine produced a Valentine's Day card for its elderly woman operator!


Edmund Berkeley's "Giant Brains," the First Popular Book on Electronic Computers 1949

In 1949 mathematician and actuary Edmund Berkeley issued Giant Brains or Machines that Think, the first popular book on electronic computers, published years before the public heard much about the machines. The work was published by John Wiley & Sons who were enjoying surprising commercial success with Norbert Wiener's much more technical book, Cybernetics.

Among many interesting details, Giant Brains contained a discussion about a machine called Simon, which has been called the first personal computer. 


1950 – 1960

"Can Man Build a Superman?" January 23, 1950

The cover by Boris Artzybasheff on the January 23, 1950 issue of TIME Magazine depicted the Harvard Mark III partly electronic and partly electromechanical computer as a Naval officer in Artzybasheff's "bizarrely anthropomorphic" style. The caption under the image read, "Mark III. Can Man Build a Superman?" The cover story of the magazine was entitled "The Thinking Machine."

The Mark III, delivered to the U.S. Naval Proving Ground at Dahlgren, Virginia in March 1950, operated at 250 times the speed of the Harvard Mark I (1944).

Among its interesting elements, the TIME article included an early use of the word computer for machines rather than people. The review of Wiener's Cybernetics published in TIME in December 1948 had referred to the machines as calculators.

"What Is Thinking? Do computers think? Some experts say yes, some say no. Both sides are vehement; but all agree that the answer to the question depends on what you mean by thinking.

"The human brain, some computermen explain, thinks by judging present information in the light of past experience. That is roughly what the machines do. They consider figures fed into them (just as information is fed to the human brain by the senses), and measure the figures against information that is "remembered." The machine-radicals ask: 'Isn't this thinking?'

"Their opponents retort that computers are mere tools that do only what they are told. Professor [Howard] Aiken, a leader of the conservatives, admits that the machines show, in rudimentary form at least, all the attributes of human thinking except one: imagination. Aiken cannot define imagination, but he is sure that it exists and that no machine, however clever, is likely to have any."

"Nearly all the computermen are worried about the effect the machines will have on society. But most of them are not so pessimistic as [Norbert] Wiener. Professor Aiken thinks that computers will take over intellectual drudgery as power-driven tools took over spading and reaping. Already the telephone people are installing machines of the computer type that watch the operations of dial exchanges and tot up the bills of subscribers.

"Psychotic Robots. In the larger, "biological" sense, there is room for nervous speculation. Some philosophical worriers suggest that the computers, growing superhumanly intelligent in more & more ways, will develop wills, desires and unpleasant foibles' of their own, as did the famous robots in Capek's R.U.R.

"Professor Wiener says that some computers are already "human" enough to suffer from typical psychiatric troubles. Unruly memories, he says, sometimes spread through a machine as fears and fixations spread through a psychotic human brain. Such psychoses may be cured, says Wiener, by rest (shutting down the machine), by electric shock treatment (increasing the voltage in the tubes), or by lobotomy (disconnecting part of the machine).

"Some practical computermen scoff at such picturesque talk, but others recall odd behavior in their own machines. Robert Seeber of I.B.M. says that his big computer has a very human foible: it hates to wake up in the morning. The operators turn it on, the tubes light up and reach a proper temperature, but the machine is not really awake. A problem sent through its sleepy wits does not get far. Red lights flash, indicating that the machine has made an error. The patient operators try the problem again. This time the machine thinks a little more clearly. At last, after several tries, it is fully awake and willing to think straight.

"Neurotic Exchange. Bell Laboratories' Dr. [Claude] Shannon has a similar story. During World War II, he says, one of the Manhattan dial exchanges (very similar to computers) was overloaded with work. It began to behave queerly, acting with an irrationality that disturbed the company. Flocks of engineers, sent to treat the patient, could find nothing organically wrong. After the war was over, the work load decreased. The ailing exchange recovered and is now entirely normal. Its trouble had been 'functional': like other hard-driven war workers, it had suffered a nervous breakdown" (quotations from http://www.time.com/time/magazine/article/0,9171,858601-7,00.html, accessed 03-05-2009).


Simon, the First Personal Computer May – November 1950

Edmund Berkeley's "Simon," which has been called the first personal computer, developed out of his book, Giant Brains, or Machines That Think, published in November 1949, in which he wrote,

 “We shall now consider how we can design a very simple machine that will think.. Let us call it Simon, because of its predecessor, Simple Simon... Simon is so simple and so small in fact that it could be built to fill up less space than a grocery-store box; about four cubic feet. . . . It may seem that a simple model of a mechanical brain like Simon is of no great practical use. On the contrary, Simon has the same use in instruction as a set of simple chemical experiments has: to stimulate thinking and understanding, and to produce training and skill. A training course on mechanical brains could very well include the construction of a simple model mechanical brain, as an exercise."

One year later, in an article about “Simon” published in Scientific American in November 1950, Berkeley predicted that “some day we may even have small computers in our homes, drawing energy from electric power lines like refrigerators or radios.”

"Who built "Simon"? The machine represents the combined efforts of a skilled mechanic, William A. Porter, of West Medford, Mass., and two Columbia University graduate students of electrical engineering, Robert A. Jensen . . . and Andrew Vall . . . . Porter did the basic construction, while Jensen and Vall took the machine when it was still not in working order and engineered it so that it functioned. Specifically, they designed a switching system that made possible the follow-through of a given problem; set up an automatic synchronizing system; installed a system for indicated errors due to loss of synchronization; re-designed completely the power supply of themachine" (Fact Sheet on "Simon." Public Information Office, Columbia University, May 18, 1950).

"The Simon's architecture was based on relays. The programs were run from a standard paper tape with five rows of holes for data. The registers and ALU could store only 2 bit. The data entry was made through the punched paper or by five keys on the front panel of the machine. The output was provided by five lamps. The punched tape served not only for data entry, but also as a memory for the machine. The instructions were carried out in sequence, as they were read from the tape. The machine was able to perform four operations: addition, negation, greater than, and selection" (Wikipedia article on Simon (computer) accessed 10-10-2011).

In his 1956 article, "Small Robots-Report," Berkeley stated that he had spent $4000 developing Simon. The single machine that was constructed is preserved at the Computer History Museum, Mountain View, California. Berkeley also marketed engineering plans for Simon, of which 400 copies were sold.


The First OCR System: "GISMO" 1951

In 1951 American inventor David Hammond Shepard, a cryptanalyst at AFSA, the forerunner of the U.S. National Security Agency (NSA), built "Gismo" in his spare time.

Gismo was a machine to convert printed messages into machine language for processing by computer— the first optical character recognition (OCR) system.

"IBM licensed the [OCR] machine, but never put it into production. Shepard designed the Farrington B numeric font now used on most credit cards. Recognition was more reliable on a simple and open font, to avoid the effects of smearing at gasoline station pumps. Reading credit cards was the first major industry use of OCR, although today the information is read magnetically from the back of the cards.

"In 1962 Shepard founded Cognitronics Corporation. In 1964 his patented 'Conversation Machine' was the first to provide telephone Interactive voice response access to computer stored data using speech recognition. The first words recognized were 'yes' and 'no' " (Wikipedia article on David H. Shepard, accessed 02-29-2012).


The First Journal on Electronic Computing October 1952

In October 1952 Edmund Berkeley began publication of Computing Machinery Field, the first journal on electronic computing, and the ancestor of all commercially published periodical publications on computing. The first three quarterly issues were mimeographed. By the March 1953 issue the title was changed to Computers and Automation.


The First Widely Read English Book on Electronic Computing 1953 – 1968

In 1953 English scientist and educationist Bertram V. Bowden, who for a time worked as a computer salesman for Ferranti Limited, and was later made a life peer as Baron Bowden, edited Faster than Thought, the first widely read English book on electronic digital computing.

Reflective of the slow speed of advances in computing at this time, the book remained in print without change until 1968.


The First Report on the Application of Electronic Computers to Business June 1953

In June 1953 Richard W. Appel and other students at Harvard Business School issued Electronic Business Machines: A New Tool for Management.

This was the first report on the application of electronic computers to business. The report was issued before any electronic computer was delivered to an American corporation. (See Reading 10.4.)


Coining the Phrase "Social Network" 1954

In 1954 Australian sociologist John A. Barnes coined the phrase "social network" in "Class and Committees in a Norwegian Island Parish," Human Relations VII (1954) 39-58, in which he presented the results of nearly two years of fieldwork in Bremnes on Bømlo Island, Norway.


The Movie "Desk Set", Satirizing the Role of Automation in Eliminating Jobs, and Librarians 1957

The romantic comedy film Desk Set, brought to the silver screen in 1957, was the first film to dramatize and satirize the role of automation in eliminating traditional jobs. The name of the computer in the film, EMERAC, and its room-size installation were an obvious take-off on UNIVAC, the best-known computer of the time. In the film, the computer was brought in to replace the library of books and its staff—an early foreshadowing of the physical information versus digital information issue. Directed by Walter Lang and starring Spencer Tracy, Katharine Hepburn, Gig Young, Joan Blondell, and Dina Merrill, the film was based on a screenplay written by Phoebe Ephron and Henry Ephron from the play by William Marchant.

The film "takes place at the "Federal Broadcasting Network" (exterior shots are of Rockefeller Center, in New York City, headquarters of NBC). Bunny Watson (Katharine Hepburn) is in charge of its reference library, which is responsible for researching and answering questions on all manner of topics, such as the names of Santa's reindeer. She has been involved for seven years with network executive Mike Cutler (Gig Young), with no marriage in sight.

"The network is negotiating a merger with another company, but is keeping it secret. To help the employees cope with the extra work that will result, the network head has ordered two computers (called "electronic brains" in the film). Richard Sumner (Spencer Tracy), the inventor of EMERAC and an efficiency expert, is brought in to see how the library functions, to figure out how to ease the transition. Though extremely bright, as he gets to know Bunny, he is surprised to discover that she is every bit his match.

"When they find out the computers are coming, the employees jump to the conclusion the machines are going to replace them. Their fears seem to be confirmed when everyone on the staff receives a pink slip printed out by the new payroll computer. Fortunately, it turns out to be a mistake; the machine fired everybody in the company, including the president" Wikipedia article on Desk Set, accessed 12-23-2008).


There are Forty Computers on American University Campuses 1957

". . . in 1957 there were only 40 computers on unversity campuses across the country [the United States]" (Bowles (ed.), Computers in Humanistic Research [1967] v).


Merle Curti's "The Making of an American Community": the First "Large Scale" Application of Humanities Computing in the U. S. 1959

The first "large scale" use of machine methods in humanities computing in the United States was Merle Curti's study of Trempealeau County, WisconsinThe making of an American Community: A Case Study of Democracy in a Frontier County (1959).

"Confronted with census material for the years 1850 through 1880–actually several censuses covering population, agriculture, and manufacturing–together with a population of over 17,000 persons by the latter date, Curti turned to punched cards and unit record equipment for the collection and analysis of his data. By this means a total of 38 separate items of information on each individual were recorded for subsequent manifpulation. Quite obviously, the comprehensive nature of this study was due in part to the employment of data processing techniques" (Bowles [ed.] Computers in Humanistic Research (1967) 57-58).


The First Computer Matching Dating Service 1959

In 1959 Philip A. Fialer and James Harvey, students in Professor Jack Herriot’s computer course, "Math 139, Theory and Operation of Computing Machines," at Stanford University, devised the "Happy Families Planning Service" as a final math class project, pairing up 49 men and 49 women. For the project Fialer and Harvey had limited access to Stanford's newly acquired IBM 650 computer.

"The notion of melding Math 139 with the Great- Date-Matching Party surfaced early in the quarter when Fialer and Harvey needed to come up with a term project. For some time, Fialer and Harvey had hosted parties in houses that they rented with several electrical engineering and KZSU buddies at 1203 and 1215 Los Trancos Woods Road in Portola Valley. Student nurses from the Veterans Administration psychiatric hospital on Willow Road in Menlo Park were often invited. The boys represented themselves to the nurses as the “SRI Junior Engineers Social Club”—which was at least partially true, since one Los Trancos housemate worked summers and part-time as a junior engineer at Stanford Research Institute (SRI). KZSU radio station parties also were held in Los Trancos, featuring the KZSU musical band marching around the Los Trancos circle loop road at midnight. (This somewhat impromptu band was the basis for the current Los Trancos Woods Community Marching Band, officially organized at a KZSU party on New Year’s Eve in 1960.)  

"Fialer and Harvey figured a KZSU-Los Trancos type party could emerge as a positive by-product of Math 139, using the computer to match “a given number of items of one class to the same number of items of another class.” The classes would be male and female subjects, and the population would be Stanford students, with a few miscellaneous Los Trancos Woods residents thrown in.

"The pair wrote a program to measure the differences in respondents’ answers to a questionnaire. A “difference” score was then computed for each possible male-female pair.  

"The program compared one member of a “class”—one man—with all members of the other class—women—and then repeated this for all members of the first class. The couple—a member from each class—with the lowest difference score was then matched, and the process repeated for the remaining members of each class. Thus, the first couple selected was the “best” match. As fewer couples remained in the pool, the matched couples had larger and larger difference scores.

"Given the limitations of computer time available and the requirements of the course, Fialer and Harvey did not use a “best-fit” algorithm, so the last remaining pairs were indeed truly “odd” couples. Two of the women in the sample, not Stanford students, were single mothers with two or three children. One of them, age 30, ended up paired with a frosh member of the Stanford Marching Band" (Computers in Love: Stanford and the First Trials of Computer Date Matching by C. Stewart Gillmor http://www.mgb67.com/computersinlove.htm, accessed 02-14-2013).

On February 13, 2013 The New York Times published a video interview with Fialer and Harvey regarding their early experiment in computer dating. In the interview they called the project the "Marriage Planning Service." The video showed pages from the program they wrote for the matching process, as apparently their complete file for the project was preserved.


1960 – 1970

6000 Computers are Operational in the U.S., Out of 10,000 Worldwide 1960

In 1960 about six thousand computers were operational in the United States, and perhaps ten thousand were operational worldwide.


PLATO 1: The First Electronic Learning System 1960

In 1960 PLATO I (Programmed Logic for Automatic Teaching Operations), the first electronic learning system, developed by Donald Bitzer, operated on the ILLIAC I at the University of Illinois at Urbana-Champaign. PLATO I used a television set as its display and a special device for navigating the system's menus, and it served a single user. In 1961 PLATO II allowed two students to operate the system at one time.


Licklider Describes "Man-Computer Symbiosis" March 1960

In March 1960 computer scientist J. C. R. Licklider of Bolt Beranek and Newman published "Man-Computer Symbiosis," IRE Transactions on Human Factors in Electronics, volume HFE-1 (March 1960) 4-11, postulating that the computer should become an intimate symbiotic partner in human activity, including communication. (See Reading 10.5.)


The QUOTRON Computerized Stock-Quotation System Is Introduced 1961

In 1961 QUOTRON, a computerized stock-quotation system using a Control Data Corporation computer, was introduced.

Quotron became popular with stockbrokers, signaling the end of traditional ticker tape.


George Forsythe Coins the Term "Computer Science" 1961

In 1961 mathematician and founder of Stanford University's Computer Science department George E. Forsythe coined the term "computer science" in his paper "Engineering Students Must Learn both Computing and Mathematics", J. Eng. Educ. 52 (1961) 177-188, quotation from p. 177.

Of this Donald Knuth wrote, "In 1961 we find him using the term 'computer science' for the first time in his writing:

[Computers] are developing so rapidly that even computer scientists cannot keep up with them. It must be bewildering to most mathematicians and engineers...In spite of the diversity of the applications, the methods of attacking the difficult problems with computers show a great unity, and the name of Computer Sciences is being attached to the discipline as it emerges. It must be understood, however, that this is still a young field whose structure is still nebulous. The student will find a great many more problems than answers. 

"He [Forsythe] identified the "computer sciences" as the theory of programming, numerical analysis, data processing, and the design of computer systems, and observed that the latter three were better understood than the theory of programming, and more available in courses" (Knuth, "George Forsythe and the Development of Computer Science," Communications of the ACM, 15 (1972) 722).


ICPSR, The Largest Archive of Digital Social Science Data, is Founded at the University of Michigan 1962

In 1962 ICPSR, the Inter-university Consortium for Political and Social Research, was founded at the University of Michigan, Ann Arbor. ICPSR became the world's largest archive of digital social science data,  acquiring, preserving, and distributing original research data, and providing training in its analysis.


Fritz Machlup Introduces the Concept of "The Information Economy" 1962

In 1962 Austrian-American economist Fritz Machlup of Princeton published The Production and Distribution of Knowledge in the United States.

In this book Machlup introduced the concept of the knowledge industry.

"He distinguished five sectors of the knowledge sector: education, research and development, mass media, information technologies, information services. Based on this categorization he calculated that in 1959 29% per cent of the GNP in the USA had been produced in knowledge industries" (Wikipedia article on Information Society, accessed 04-25-2011).


Douglas Engelbart Issues "Augmenting Human Intellect: A Conceptual Framework" October 1962

In October 1962 Douglas Engelbart of the Stanford Research Institute, Menlo Park, California, completed his report, Augmenting Human Intellect: A Conceptual Framework, for the Director of Information Sciences, Air Force Office of Scientific Research. This report led J. C. R. Licklider of DARPA to fund SRI's Augmentation Research Center.


Licklider at the Information Processing Techniques Office, Begins Funding Research that Leads to the ARPANET October 1, 1962

On October 1, 1962 J. C. R. Licklider was appointed Director of the Pentagon's Information Processing Techniques Office (IPTO), a division of ARPA (the Advanced Research Projects Agency).

Licklider's initial budget was $10,000,000 per year. He eventually initiated the sequence of events leading to the ARPANET.


First Use of the Term "Hacker" in the Context of Computing November 20, 1963

On November 20, 1963 the first use of the term "hacker" in the context of computing appeared in the MIT student newspaper, The Tech:

"Many telephone services have been curtailed because of so-called hackers, according to Prof. Carlton Tucker, administrator of the Institute phone system. . . .The hackers have accomplished such things as tying up all the tie-lines between Harvard and MIT, or making long-distance calls by charging them to a local radar installation. One method involved connecting the PDP-1 computer to the phone system to search the lines until a dial tone, indicating an outside line, was found. . . . Because of the 'hacking,' the majority of the MIT phones are 'trapped.' "


The First Online Reservation System 1964

SABRE (Semi-Automated Business Research Environment), an online airline reservation system developed by American Airlines and IBM, and based on two IBM mainframes in Briarcliff Manor, New York, became operational in 1964. SABRE worked over telephone lines in “real time” to handle seat inventory and passenger records from terminals in more than 50 cities.


Social Security Numbers as Identifiers 1964

In 1964 the Internal Revenue Service (IRS) began using social security numbers as tax ID numbers.


Thomas Kurtz & John Kemeny Invent BASIC 1964

In 1964 nearly all computer use required writing custom software, which was something only scientists and mathematicians tended to do. To make programming accessible to a wider range of people, Dartmouth College mathematicians and computer scientists Thomas E. Kurtz and  John G. Kemeny invented BASIC (Beginner’s All-Purpose Symbolic Instruction Code), a general-purpose, high-level programming language designed for ease of use.

Kurtz and Kemeny designed BASIC to allow students to write mainframe computer programs for the Dartmouth Time-Sharing System, the first large-scale time-sharing system to be implemented successfully, which became operational on May 1, 1964. Developed by Dartmouth students under Kurtz and Kemeny's supervision, BASIC was intended specifically for less technical users who did not have or want the mathematical background previously expected.

In the mid 1970s and 1980s versions of BASIC became widespread on microcomputers. Microcomputers usually shipped with BASIC, often in the machine's firmware. Having an easy-to-learn language on these early personal computers allowed small business owners, professionals, hobbyists, and consultants to develop custom software on computers they could afford.


Bertram Gross Coins the Term "Information Overload" 1964

In 1964 American social scientist Bertram Myron Gross coined the expression "information overload" in his book, The Managing of Organizations: the Administrative Struggle.


Honeywell Produces an Early Home Computer? 1965

In 1965 Honeywell attempted to open the home computer market with its Kitchen Computer. The H316 was the first under-$10,000 16-bit machine from a major computer manufacturer. It was the smallest member of the Honeywell "Series 16" line and was available in three versions: table-top, rack-mountable, and self-standing pedestal. The pedestal version, complete with cutting board, was marketed by Neiman Marcus as "The Kitchen Computer." It came with some built-in recipes, a two-week programming course, a cookbook, and an apron.

There is no evidence that any examples were sold.


Tom Van Vleck & Noel Morris Write One of the First Email Programs 1965

Though its exact history is murky, email (e-mail) began as a way for users of time-sharing mainframe computers to communicate. Among the first systems to have an email facility were the AN/FSQ-32 (Q32), built by IBM for the United States Air Force Strategic Air Command (SAC) and programmed by the System Development Corporation of Santa Monica, and MIT's Compatible Time-Sharing System (CTSS). The authors of the first email program for CTSS were American software engineer Tom Van Vleck and American computer scientist Noel Morris, who created the program in the summer of 1965.

"A proposed CTSS MAIL command was described in an undated Programming Staff Note 39 by Louis Pouzin, Glenda Schroeder, and Pat Crisman. Numerical sequence places the note in either Dec 64 or Jan 65. PSN 39 proposed a facility that would allow any CTSS user to send a message to any other. The proposed uses were communication from "the system" to users informing them that files had been backed up, and communication to the authors of commands with criticisms, and communication from command authors to the CTSS manual editor.

"I was a new member of the MIT programming staff in spring 1965. When I read the PSN document about the proposed CTSS MAIL command, I asked "where is it?" and was told there was nobody available to write it. My colleague Noel Morris and I wrote a version of MAIL for CTSS in the summer of 1965. Noel was the one who saw how to use the features of the new CTSS file system to send the messages, and I wrote the actual code that interfaced with the user. The CTSS manual writeup and the source code of MAIL are available online. (We made a few changes from the proposal during the course of implementation: e.g. to read one's mail, users just used the PRINT command instead of a special argument to MAIL.)  

"The idea of sending "letters' using CTSS was resisted by management, as a waste of resources. However, CTSS Operations did need a faclility to inform users when a request to retrieve a file from tape had been completed, and we proposed MAIL as a solution for this need. (Users who had lost a file due to system or user error, or had it deleted for inactivity, had to submit a request form to Operations, who ran the RETRIEVE program to reload them from tape.) Since the blue 7094 installation in Building 26 had no CTSS terminal available for the operators, one proposal for sending such messages was to invoke MAIL from the 7094 console switches, inputting a code followed by the problem number and programmer number in BCD. I argued that this was much too complex and error prone, and that a facility that let any user send arbitrary messages to any other would have more general uses, which we would discover after it was implemented" (http://www.multicians.org/thvv/mail-history.html, accessed 06-20-2011).

♦ On June 19, 2011 writer and filmmaker Errol Morris published a series of five illustrated articles in The New York Times concerning the roles of his brother Noel and Tom Van Vleck in the invention of email. The first of these was entitled "Did My Brother Invent E-Mail with Tom Van Vleck? (Part One)". The articles, in an unusual dialog form, captured some of the experience of programming time-sharing mainframes, and what it was like to send and receive emails at this early date.


U.S. House Hearings on the Invasion of Privacy by Computers 1965

In 1965 the U.S. House of Representatives Special Subcommittee on Invasion of Privacy held hearings on the invasion of privacy by computers.


Irving John Good Originates the Concept of the Technological Singularity 1965

In 1965 British mathematician Irving John Good, originally named Isidore Jacob Gudak, published "Speculations Concerning the First Ultraintelligent Machine," Advances in Computers, vol. 6 (1965) 31ff. This paper, published while Good held research positions at Trinity College, Oxford and at Atlas Computer Laboratory, originated the concept later known as "technological singularity," which anticipates the eventual existence of superhuman intelligence:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." 

Stanley Kubrick consulted Good regarding aspects of computing and artificial intelligence when filming 2001: A Space Odyssey (1968), one of whose principal characters was the paranoid HAL 9000 supercomputer.

View Map + Bookmark Entry

Gordon Moore Promulgates "Moore's Law" April 19, 1965

On April 19, 1965, while Director of the Research and Development Laboratory at Fairchild Semiconductor in Palo Alto, California, physical chemist Gordon Moore published "Cramming More Components onto Integrated Circuits" in Electronics Magazine. In this article Moore observed that the number of transistors that could be placed inexpensively on an integrated circuit doubled approximately every two years, and predicted that this trend would continue. In 1970, after Moore had left Fairchild Semiconductor to co-found Intel Corporation, the press called this observation “Moore’s Law.”

"The term "Moore's law" was coined around 1970 by the Caltech professor, VLSI pioneer, and entrepreneur Carver Mead. Predictions of similar increases in computer power had existed years prior. Alan Turing in his 1950 paper "Computing Machinery and Intelligence" had predicted that by the turn of the millennium, we would have "computers with a storage capacity of about 10^9", what today we would call "128 megabytes." Moore may have heard Douglas Engelbart, a co-inventor of today's mechanical computer mouse, discuss the projected downscaling of integrated circuit size in a 1960 lecture. A New York Times article published August 31, 2009, credits Engelbart as having made the prediction in 1959. . . .

"Moore slightly altered the formulation of the law over time, in retrospect bolstering the perceived accuracy of his law. Most notably, in 1975, Moore altered his projection to a doubling every two years. Despite popular misconception, he is adamant that he did not predict a doubling "every 18 months". However, David House, an Intel colleague, had factored in the increasing performance of transistors to conclude that integrated circuits would double in performance every 18 months." (Wikipedia article on Moore' Law, accessed 11-19-2011).

View Map + Bookmark Entry

Walter Allner Designs the First Magazine Cover Using Computer Graphics July 1965

Detail of cover of the July 1965 issue of Fortune.

The color cover of the July 1965 issue of Fortune magazine was the first magazine cover designed using computer graphics, though the editor and designer may not have been aware of that at the time. The cover reproduced a photograph of graphics displayed on a computer screen. Two color filters made the computer image appear in color. On p. 2 of the issue the magazine explained its cover as follows:

"This cover is the first in Fortune's thirty-five years to have been executed wholly by machine— a PDP-1 computer manufactured by Digital Equipment Corp., and loaned to Fortune by Bolt Beranek & Newman Inc. of Cambridge, Massachusetts. The myriad arrows photographed in upward flight across the machine's oscilloscope symbolize the predominant direction of corporate statistics in 1964, while the large, glowing numeral [500] represents the number of companies catalogued in the Directory of the 500 Largest Industrial Corporations. . . ."

On p. 97 editor Duncan Norton-Taylor devoted his monthly column to the cover, writing:

"In the course of events, Fortune's art director, Walter Allner, might have frowned on filling the column at left with an array of abbreviations and figures, for Allner is no man to waste space on uninspired graphics. But these figures are his special brain children. They are the instructions that told a PDP-1 computer how to generate the design on this month's cover. This program was 'written' to Allner's specifications and punched into an eight-channel paper tape by Sanford Libman and John Price, whose interest in art and electronics developed at M.I.T.

"Generating the design on an oscilloscope and photographing required about three hours of computer time and occupied Price, Allner, and Libman until four one morning. Multiple exposure through two filters added color to the electron tube's glow. . . . 

"Walter Allner was born in Dessau, Germany. He studied at the Bauhaus-Dessau under Josef Albers, Vasily Kandinsky, and Paul Klee. . . . 

"Allner confesses to certain misgivings about teaching the PDP-1 computer too much about Fortune cover design, but adds, philosophically: 'If the computer puts art directors out of work, I'll at least have had some on-the-job training as a design-machine programer [sic]."

Herzogenrath & Nierhoff-Wielk, Ex Machina—Frühe Computergrafik bis 1979. Ex Machina—Early Computer Graphics up to 1979 (2007) 243.

View Map + Bookmark Entry

The Amateur Computer Society, Possibly the First Personal Computer Club, is Founded 1966

In 1966 Stephen B. Gray, computers editor for Electronics magazine, founded The Amateur Computer Society, possibly the first personal computer club.

View Map + Bookmark Entry

Douglas Parkhill Issues a Predictive Discussion of the Features of Cloud Computing 1966

In 1966 Canadian technologist Douglas Parkhill issued a book entitled The Challenge of the Computer Utility. In this work Parkhill predicted and explored features of cloud computing that became widely established by the second decade of the twenty-first century, including elastic provisioning, provision as a utility, online delivery, the illusion of infinite supply, the comparison to the electricity industry, and the use of public, private, government, and community forms.

View Map + Bookmark Entry

Jack Kilby and Texas Instruments Invent the First Hand-Held Electronic Calculator 1967 – June 25, 1974

In 1967 Texas Instruments filed the patent for the first hand-held electronic calculator, invented by Jack S. Kilby, Jerry Merryman, and Jim Van Tassel. The patent (Number 3,819,921) was awarded on June 25, 1974. This miniature calculator employed a large-scale integrated semiconductor array containing the equivalent of thousands of discrete semiconductor devices.

View Map + Bookmark Entry

Edmund Bowles Issues The First Anthology of Research on Humanities Computing 1967

In 1967 musicologist Edmund A. Bowles, in his capacity as manager of Professional Activities in the Department of University Relations at IBM, edited Computers in Humanistic Research. Readings and Perspectives. This was the first anthology of research on humanities computing.

View Map + Bookmark Entry

35,000 Computers Are Operational in the United States 1967

In 1967 there were 35,000 computers operating in the United States.

Bowles (ed.), Computers in Humanistic Research (1967) v.

View Map + Bookmark Entry

U.S. Senate Hearings on Computer Privacy Occur March 1967

In March 1967 the United States Senate held hearings on computer privacy.

View Map + Bookmark Entry

Protecting Security in a Networked Environment Circa May – September 1967

Between May and September 1967 the Department of Defense requested the Director of the Advanced Research Projects Agency (ARPA) to form a Task Force “to study and recommend hardware and software safeguards that would satisfactorily protect classified information in multi-access, resource-sharing computer systems.” The report was published in February 1970 as Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security - RAND Report R-609-1, edited by Willis H. Ware.

View Map + Bookmark Entry

Stanley Kubrick & Arthur C. Clarke Create "2001: A Space Odyssey" 1968

In 1968 the film 2001: A Space Odyssey, written by American film director Stanley Kubrick in collaboration with science fiction writer and futurist Arthur C. Clarke, captured imaginations with the idea of a computer that could see, speak, hear, and “think.” 

Perhaps the star of the film was the HAL 9000 computer. "HAL (Heuristically programmed ALgorithmic Computer) is an artificial intelligence, the sentient on-board computer of the spaceship Discovery. HAL is usually represented only as his television camera "eyes" that can be seen throughout the Discovery spaceship.... HAL is depicted as being capable not only of speech recognition, facial recognition, and natural language processing, but also lip reading, art appreciation, interpreting emotions, expressing emotions, reasoning, and chess, in addition to maintaining all systems on an interplanetary voyage.

"HAL is never visualized as a single entity. He is, however, portrayed with a soft voice and a conversational manner. This is in contrast to the human astronauts, who speak in terse monotone, as do all other actors in the film" (Wikipedia article on HAL 9000, accessed 05-24-2009).

"Kubrick and Clarke had met in New York City in 1964 to discuss the possibility of a collaborative film project. As the idea developed, it was decided that the story for the film was to be loosely based on Clarke's short story "The Sentinel", written in 1948 as an entry in a BBC short story competition. Originally, Clarke was going to write the screenplay for the film, but Kubrick suggested during one of their brainstorming meetings that before beginning on the actual script, they should let their imaginations soar free by writing a novel first, which the film would be based on upon its completion. 'This is more or less the way it worked out, though toward the end, novel and screenplay were being written simultaneously, with feedback in both directions. Thus I rewrote some sections after seeing the movie rushes -- a rather expensive method of literary creation, which few other authors can have enjoyed.' The novel ended up being published a few months after the release of the movie" (Wikipedia article on Arthur C. Clarke, accessed 05-24-2009).

View Map + Bookmark Entry

Helmut Gröttrup & Jürgen Dethloff Invent the "Smart Card" 1968 – 1984

In 1968 German electrical engineers Helmut Gröttrup of Stuttgart and Jürgen Dethloff of Hamburg invented the smart card (chip card, or integrated circuit card [ICC]) and applied for the patent. The patent for the smart card was finally granted to both inventors in 1982. The first wide use of the cards was for payment in French pay phones—France Telecom Télécarte—starting in 1983-84.

View Map + Bookmark Entry

The Computer Arts Society, the First Society for Computer Art, is Founded in London 1968

In the months following the groundbreaking London exhibition Cybernetic Serendipity, which showcased computer-based and technologically influenced works in graphics, music, film, and interactivity, Alan Sutcliffe, George Mallen, and John Lansdown founded the Computer Arts Society in London. The Society enabled relatively isolated artists working with computers in a variety of fields to meet and exchange information. It also ran practical courses, conferences and exhibitions.

"In March 1969, CAS organised an exhibition entitled Event One, which was held at the Royal College of Art. The exhibition showcased innovative work with computers across a broad range of disciplines, including sculpture, graphics, music, film, architecture, poetry, theatre and dance. CAS founder John Lansdown, for example, designed and organised a dance performance that was choreographed entirely by the computer and performed by members of the Royal Ballet School. The multi-media approach of exhibitions such as Event One greatly influenced younger artists and designers emerging at this time. Many of these artists were rebelling against the traditional fine art hierarchies of the time, and went on to work in the new fields of computer, digital, and video art as a result.

"CAS established links with educational establishments, journalists and industry, ensuring greater coverage of their activities and more importantly helping to provide access to computing technology at a time when this was difficult. CAS members were remarkably ahead of their time in recognising the long term impact that the computer would have on society, and in providing services to those already working creatively with the computer. By 1970 CAS had 377 members in 17 countries. Its journal 'PAGE' was first edited by auto-destructive artist Gustav Metzger, and is still being produced today. The Computer Arts Society is a specialist group of the British Computer Society" (http://www.vam.ac.uk/content/articles/t/v-and-a-computer-art-collections/, accessed 01-19-2014).

In January 2014 all of the early issues of Page, beginning with "Page 1" (April 1969), were available from the website of the Computer Arts Society, a specialist group of the BCS.

In 2007 the Computer Arts Society donated its collection of original computer art to the Victoria and Albert Museum in London, which maintains one of the world's largest and most significant collections of computer art. The V&A's holdings in this field were the subject of an article by Honor Beddard entitled "Computer Art at the V&A," V&A Online Journal, Issue No. 2 (2009), accessed 01-19-2014.

View Map + Bookmark Entry

Licklider & Taylor Describe Features of the Future ARPANET; Description of a Computerized Personal Assistant April 1968

In 1968 American psychologist and computer scientist J.C.R. Licklider of MIT and Robert W. Taylor, then director of ARPA's Information Processing Techniques Office, published "The Computer as a Communication Device," Science and Technology, April 1968. In this paper, extensively illustrated with whimsical cartoons, they described features of the future ARPANET and other aspects of anticipated human-computer interaction.

Honoring the artificial intelligence pioneer Oliver Selfridge, on pp. 38-39 of the paper they proposed a device they referred to as OLIVER (On-Line Interactive Vicarious Expediter and Responder). OLIVER was one of the clearest early descriptions of a computerized personal assistant:

"A very important part of each man's interaction with his on-line community will be mediated by his OLIVER. The acronym OLIVER honors Oliver Selfridge, originator of the concept. An OLIVER is, or will be when there is one, an 'on-line interactive vicarious expediter and responder,' a complex of computer programs and data that resides within the network and acts on behalf of its principal, taking care of many minor matters that do not require his personal attention and buffering him from the demanding world. 'You are describing a secretary,' you will say. But no! secretaries will have OLIVERS.

"At your command, your OLIVER will take notes (or refrain from taking notes) on what you do, what you read, what you buy and where you buy it. It will know who your friends are, your mere acquiantances. It will know your value structure, who is prestigious in your eyes, for whom you will do with what priority, and who can have access to which of your personal files. It will know your organizations's rules pertaining to proprietary information and the government's rules relating to security classification.

"Some parts of your OLIVER program will be common with parts of ther people's OLIVERS; other parts will be custom-made for you, or by you, or will have developed idiosyncracies through 'learning based on its experience at your service."

View Map + Bookmark Entry

Cybernetic Serendipity: The First Widely-Attended International Exhibition of Computer Art August 2 – October 20, 1968

From August 2  to October 20, 1968 Cybernetic Serendipity: The Computer and the Arts was exhibited at the Institute of Contemporary Arts in London, curated by British art critic, editor, and Assistant Director of the Institute of Contemporary Arts Jasia Reichardt, at the suggestion of Max Bense. This was the first widely attended international exhibition of computer art, and the first exhibition to attempt to demonstrate all aspects of computer-aided creative activity: art, music, poetry, dance, sculpture, animation.

"It drew together 325 participants from many countries; attendance figures reached somewhere between 45,000 and 60,000 (accounts differ) and it received wide and generally positive press coverage ranging from the Daily Mirror newspaper to the fashion magazine Vogue. A scaled-down version toured to the Corcoran Gallery in Washington DC and then the Exploratorium, the museum of science, art and human perception in San Francisco. It took Reichardt three years of fundraising, travelling and planning" (Mason, a computer in the art room. the origins of british computer arts 1950-80 [2008] 101-102)

For the catalogue of the show Reichardt edited a special issue of Studio International magazine, consisting of 100 pages with 300 images, publication of which coincided with the exhibition in 1968. The color frontispiece reproduced a color computer graphic by the American John C. Mott-Smith "made by time-lapse photography successively exposed through coloured filters, of an oscilloscope connected to a computer." The cover of the special issue was designed by the Polish-British painter, illustrator, film-maker, and stage designer Franciszka Themerson, incorporating computer graphics from the exhibition. Laid into copies of the special issue were 4 leaves entitled "Cybernetic Serendipity Music," each page providing a program for one of eight tapes of music played during the show. This information presumably was not available in time to be printed in the issue of Studio International.

Reichardt's Introduction  (p. 5) included the following:

"The exhibition is divided into three sections, and these sections are represented in the catalogue in a different order:

"1. Computer-generated graphics, computer-animated films, computer-composed and -played music, and computer poems and texts.

"2. Cybernetic devices as works of art, cybernetic enironments, remoted-control robots and painting machines.

"3. Machines demonstrating the uses of computers and an environment dealing with the history of cybernetics.

"Cybernetic Sernedipity deals with possibilites rather than achievements, and in this sense it is prematurely optimistic. There are no heroic claims to be made because computers have so far neither revolutionized music, nor art, nor poetry, the same way that they have revolutionized science.

"There are two main points which make this exhibition and this catalogue unusual in the contexts in which art exhibitions and catalogues are normally seen. The first is that no visitor to the exhibition, unless he reads all the notes relating to all the works, will know whether he is looking at something made by an artist, engineer, mathematician, or architect. Nor is it particularly important to know the background of all the makers of the various robots, machines and graphics- it will not alter their impact, although it might make us see them differently.

"The other point is more significant.

"New media, such as plastics, or new systems such as visual music notation and the parameters of concrete poetry, inevitably alter the shape of art, the characteristics of music, and content of poetry. New possibilities extend the range of expression of those creative poeple whom we identify as painters, film makers, composers and poets. It is very rare, however, that new media and new systems should bring in their wake new people to become involved in creative activity, be it composiing music drawing, constructing or writing.

"This has happened with the advent of computers. The engineers for whom the graphic plotter driven by a computer represented nothing more than a means of solving certain problems visually, have occasionally become so interested in the possibilities of this visual output, that they have started to make drawings which bear no practical application, and for which the only real motives are the desire to explore, and the sheer pelasure of seeing a drawing materialize. Thus people who would never have put pencil to paper, or brush to canvas, have started making images, both still and animated, which approximate and often look identical to what we call 'art' and put in public galleries.

"This is the most important single revelation of this exhibition." 

Some copies of the special issue were purchased by Motif Editions of London. Those copies do not include the ICA logo on the upper cover and do not print the price of 25s. They also substitute two blanks for the two leaves of ads printed in the back of the regular issue. They do not include the separate 4 leaves of programs of computer music. These special copies were sold by Motif Editions with a large (75 x 52 cm) portfolio containing seven 30 x 20 inch color lithographs with a descriptive table of contents. The artists included Masao Komura/Makoto Ohtake/Koji Fujino (Computer Technique Group); Masao Komura/Kunio Yamanaka (Computer Technique Group); Maughan S. Mason; Boeing Computer Graphics; Kerry Strand; Charles "Chuck" Csuri/James Shaffer; and Donald K. Robbins. The art works were titled respectively 'Running Cola is Africa', 'Return to Square', 'Maughanogram', 'Human Figure', 'The Snail', 'Random War' and '3D Checkerboard Pattern'. Copies of the regular edition contained a full-page ad for the Motif Editions portfolio for sale at £5 plus postage or £1 plus postage for individual prints.

In 1969 Frederick A. Praeger Publishers of New York and Washington, DC issued a cloth-bound second edition of the Cybernetic Serendipity catalogue with a dust jacket design adapted from the original Studio International cover. It was priced $8.95. The American edition probably coincided with the exhibition of the material at the Corcoran Gallery in Washington. The Praeger edition included an index on p. 101, and no ads. Comparison of the text of the 1968 and 1969 editions shows that the 1969 edition contains numerous revisions and changes.

In 2005 Jasia Reichardt looked back on the exhibition with these comments:

"One of the journals dealing with the Computer and the Arts in the mid-sixties, was Computers and the Humanities. In September 1967, Leslie Mezei of the University of Toronto, opened his article on 'Computers and the Visual Arts' in the September issue, as follows: 'Although there is much interest in applying the computer to various areas of the visual arts, few real accomplishments have been recorded so far. Two of the causes for this lack of progress are technical difficulty of processing two-dimensional images and the complexity and expense of the equipment and the software. Still the current explosive growth in computer graphics and automatic picture processing technology are likely to have dramatic effects in this area in the next few years.' The development of picture processing technology took longer than Mezei had anticipated, partly because both the hardware and the software continued to be expensive. He also pointed out that most of the pictures in existence in 1967 were produced mainly as a hobby and he discussed the work of Michael Noll, Charles Csuri, Jack Citron, Frieder Nake, Georg Nees, and H.P. Paterson. All these names are familiar to us today as the pioneers of computer art history. Mezei himself too was a computer artist and produced series of images using maple leaf design and other national Canadian themes. Most of the computer art in 1967 was made with mechanical computer plotters, on CRT displays with a light pen or from scanned photographs. Mathematical equations that produced curves, lines or dots, and techniques to introduce randomness, all played their part in those early pictures. Art made with these techniques was instantaneously recognisable as having been produced either by mechanical means or with a program. It didn't actually look as if it had been done by hand. Then, and even now, most art made with the computer carries an indelible computer signature. The possibility of computer poetry and art was first mentioned in 1949. By the beginning of the 1950s it was a topic of conversation at universities and scientific establishments, and by the time computer graphics arrived on the scene, the artists were scientists, engineers, architects. Computer graphics were exhibited for the first time in 1965 in Germany and in America. 1965 was also the year when plans were laid for a show that later came to be called 'Cybernetic Serendipity' and presented at the ICA in London in 1968. It was the first exhibition to attempt to demonstrate all aspects of computer-aided creative activity: art, music, poetry, dance, sculpture, animation. The principal idea was to examine the role of cybernetics in contemporary arts. The exhibition included robots, poetry, music and painting machines, as well as all sorts of works where chance was an important ingredient. It was an intellectual exercise that became a spectacular exhibition in the summer of 1968" (http://www.medienkunstnetz.de/exhibitions/serendipity/images/1/, accessed 06-16-2012). This website reproduces photographs of the actual exhibition and a poster printed for the show.

View Map + Bookmark Entry

The First ATM is Installed at Chemical Bank in New York Circa 1969 – 1970

In 1969 or 1970 the first automatic teller machine (ATM) was installed at Chemical Bank in New York; accounts conflict as to the exact date, and the first machine may have been only a cash dispenser.

View Map + Bookmark Entry

Compuserve, the First Commercial Online Service, is Founded 1969

In 1969 Compuserve was founded in Columbus, Ohio, as a way to generate income from Golden United Life Insurance mainframe computers during non-business hours. Compuserve became the first commercial online service in the United States.

View Map + Bookmark Entry

EMS Produces the First Digital Sampler in the First Digital Music Studio Circa 1969

"The first digital sampler was the EMS (Electronic Music Studios) Musys system developed by Peter Grogono (software), David Cockerell (hardware and interfacing) and Peter Zinovieff (system design and operation) at their London (Putney) Studio c. 1969. The system ran on two mini-computers, a pair of Digital Equipment’s PDP-8s. These had the tiny memory of 12,000 (12k) bytes, backed up by a hard drive of 32k and by tape storage (DecTape)—all of this absolutely minuscule by today’s standards. Nevertheless, the EMS equipment was used as the world’s first music sampler and the computers were used to control the world's first digital studio" (Wikipedia article on Sampler (musical instrument), with hyperlinks that I added, accessed 08-29-2009).

View Map + Bookmark Entry

A Problem with the Apollo 11 Guidance Computer Nearly Prevents the First Moon Walk July 21, 1969

On July 21, 1969 Neil Armstrong, commander of the Apollo 11 lunar landing mission, and Edwin "Buzz" Aldrin, lunar module pilot, became the first human beings to walk on the moon. A Saturn V rocket launched the Command Module, Service Module ("Columbia") and Lunar Module ("Eagle") from the Kennedy Space Center Launch Complex 39 in Merritt Island, Florida.

The moon landing was almost canceled in the final seconds because of overload alarms from the Apollo Guidance Computer, but on advice from Earth, Armstrong and Aldrin ignored the warnings and landed safely. The Apollo Guidance Computer was the first recognizably modern embedded system used in real-time by astronaut pilots.

View Map + Bookmark Entry

1970 – 1980

Xerox PARC is Founded 1970

In 1970 Xerox opened the Palo Alto Research Center (PARC). PARC became the incubator of the Graphical User Interface (GUI), the mouse, the WYSIWYG text editor, the laser printer, the desktop computer, the Smalltalk programming language and integrated development environment, Interpress (a resolution-independent graphical page description language and the precursor to PostScript), and Ethernet.

View Map + Bookmark Entry

The Kenbak-1, the First Stored-Program "Personal Computer" 1970 – 1971

In 1970 John Blankenbaker of Kenbak Corporation, Northridge, California, designed and produced the Kenbak-1. The machines, of which only forty were ever built, were designed as educational tools and offered for sale in Scientific American and Computerworld for $750 in 1971. The company folded in 1973.

Unlike many earlier machines and calculating engines, the Kenbak-1 was a true stored-program computer that offered 256 bytes of memory, a wide variety of operations and a speed equivalent to nearly 1MHz. It was thus the first stored-program personal computer.

"Since the Kenbak-1 was invented before the first microprocessor, the machine didn't have a one-chip CPU but instead was based purely on discrete TTL chips. The 8-bit machine offered 256 bytes of memory (=1/4096 megabyte). The instruction cycle time was 1 microsecond (equivalent to an instruction clock speed of 1 MHz), but actual execution speed averaged below 1000 instructions per second due to architectural constraints such as slow access to serial memory.

"To use the machine, one had to program it with a series of buttons and switches, using pure machine code. Output consisted of a series of lights" (Wikipedia article on Kenbak-1, accessed 09-19-2013).

In 2013 John Blankenbaker's detailed account of the design, production, and operation of the Kenbak-1 was available from his website, www.kenbak-1.net.

Also in 2013, "Official Kenbak-1 Reproduction Kits" were available from www.kenbakkit.com.

View Map + Bookmark Entry

Rand Issues the First Systematic Review of Computer Security Issues February 1970

In February 1970 The Rand Corporation, Santa Monica, California, published the classified report of the Defense Science Board Task Force on Computer Security, Security Controls for Computer Systems.

Security Controls for Computer Systems was the first systematic review of computer security problems.

View Map + Bookmark Entry

"Computer Space," the First Commercially Sold Coin-Operated Video Game November 1971

In November 1971 Nutting Associates of Mountain View, California, released the video arcade game Computer Space, created by Nolan Bushnell and Ted Dabney. It was an adaptation of Spacewar (1962).

Computer Space was the first commercially sold coin-operated video game, predating the Magnavox Odyssey by six months, and Atari's Pong by one year.

View Map + Bookmark Entry

Pong: The First Commercially Successful Video Game September 1972

On June 27, 1972 Nolan Bushnell and Ted Dabney founded Atari in Sunnyvale, California, and hired Allan Alcorn to design the table tennis (ping-pong) game "PONG." Pong was the first commercially successful video game.

Alcorn produced the prototype, and in September 1972 Bushnell and Alcorn placed the first prototype of the game in Andy Capp’s bar in Sunnyvale. Measured by the number of quarters in the coin box of the game, it was judged a remarkable success. Part of its success may have been its simplicity and intuitive nature, which made the game very easy to learn.

Based on this almost comically limited market research, the company announced the release of Pong on November 29, 1972. In keeping with the small-time nature of the business, management sought unskilled assembly workers at the local unemployment office, and was unable to keep up with demand. The first arcade cabinets were assembled very slowly, about ten machines a day, many of which failed quality testing. Atari eventually streamlined the process and began producing the game in greater quantities; full-scale production began in 1973.

Lowood, "Videogames in Computer Space: The Complex History of Pong," IEEE Annals of the History of Computing 31, #3 (2009) 5-19.

(This entry was last revised on April 21, 2014.)

View Map + Bookmark Entry

The PLATO IV System, Probably the World's First Online Community 1973

Probably the world's first online community began to emerge in 1973 through online forums and the message board PLATO Notes, developed by David R. Woolley on the PLATO IV system at the University of Illinois at Urbana-Champaign.

View Map + Bookmark Entry

"Community Memory," the First Public Computerized Bulletin Board System 1973

In 1973 Efrem Lipkin, Mark Szpakowski, and Lee Felsenstein established the first public computerized bulletin board system (BBS) called Community Memory in Berkeley, California. Community Memory used hard-wired terminals in neighborhoods as distinct from the first public dial-up CBBS which was set up on February 16, 1978.

"Community Memory ran off an XDS-940 timesharing computer located in Resource One in San Francisco. The first terminal was an ASR-33 Teletype at the top of the stairs leading to Leopold's Records in Berkeley. You could leave messages and attach keywords to them. Other people could then find messages by those keywords.

"The line from San Francisco to Berkeley ran at 110 baud - 10 characters per second. The teletype was noisy, so it was encased in a cardboard box, with a transparent plastic top so you could see what was being printed out, and holes for your hands so you could type. It made for some magic moments with the Allman Brothers' "Blue Sky" playing in the record store. Musicians loved it - they ended up generating a monthly printout of fusion rock bassists seeking raga lead guitars. And out of it also emerged the first net personality - Benway, as he called himself."

View Map + Bookmark Entry

Invention of the Word "Internet" Circa 1973

Around 1973 Vinton G. Cerf and Robert E. Kahn invented the word Internet as an abbreviation for the "inter-networking of networks" (Segaller, Nerds 2.0.1: A Brief History of the Internet [1998] 111).

View Map + Bookmark Entry

The Code of Fair Information Practice July 1973

In July 1973 Records, Computers, and the Rights of Citizens was published. This was the report of the Advisory Committee on Automated Personal Data Systems appointed by Elliot L. Richardson, secretary of the Department of Health, Education and Welfare. The report explored the impact of computerized record keeping on individuals, and recommended a Code of Fair Information Practice, consisting of five basic principles:

1."There must be no data record-keeping systems whose very existence is secret." 

2."There must be a way for an individual to find out what information about him is in a record and how it is used."

3."There must be a way for an individual to prevent information about him obtained for one purpose from being used or made available for other purposes without his consent." 

4. "There must be a way for an individual to correct or amend a record of identifiable information about him."

5. "Any organization creating, maintaining, using or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take reasonable precautions to prevent misuse of the data."

View Map + Bookmark Entry

The Privacy Act of 1974 May 1974

As a result of the Report of the Advisory Committee on Automated Personal Data Systems (July 1973), Congress passed the Privacy Act of 1974.

View Map + Bookmark Entry

200,000 Computers are Operating in the U. S. 1975

It was estimated that 200,000 computers were operating in the United States in 1975. Nearly all of these were mainframes and minicomputers.

View Map + Bookmark Entry

Byte Magazine, One of the First Personal Computer Magazines, Begins Publication 1975

In 1975 Byte, one of the first personal computer magazines, began publication in Peterborough, New Hampshire.

View Map + Bookmark Entry

The MITS Altair, the First Personal Computer to Get "Wide Notice" Among Enthusiasts January 1975 – 1976

In January 1975 H. Edward Roberts, working in Albuquerque, New Mexico, announced the MITS (Micro Instrumentation Telemetry Systems) Altair personal computer kit in an article in Popular Electronics magazine. The MITS Altair was the first personal computer based on the Intel 8080 general-purpose microprocessor, and the first personal computer to get "wide notice" among enthusiasts. It also had an open architecture. The basic Altair 8800 sold for $397.

In March 1976 the first (and only) World Altair Computer Convention took place in Albuquerque, New Mexico. Organized by David Bunnell of MITS, it was the world's first personal computer conference, and was an overwhelming success, with 700 people from 46 states and seven countries attending.

View Map + Bookmark Entry

Landmarks in the Prehistory and Early History of Microsoft April 4, 1975 – November 20, 1985

In Seattle, Washington, in 1973-74, high school students Bill Gates and Paul Allen, together with Paul Gilbert, founded a partnership called Traf-O-Data. The objective was to read the raw data from roadway traffic counters and create reports for traffic engineers. Even though this initial project was not a success, Gates and Allen shortly thereafter applied the experience they had gained writing software for a non-existent computer to writing software for the MITS Altair.

"Bill Gates and Paul Allen were high school students at Lakeside School in Seattle. The Lakeside Programmers Group got free computer time on various computers in exchange for writing computer programs. Gates and Allen thought they could process the traffic data cheaper and faster than the local companies. They recruited classmates to manually read the hole-patterns in the paper tape and transcribe the data onto computer cards. Gates then used a computer at the University of Washington to produce the traffic flow charts. (Paul Allen's father was a librarian at UW.) This was the beginning of Traf-O-Data.

"The next step was to build a device to read the traffic tapes directly and eliminate the tedious manual work. The Intel 8008 microprocessor was announced in 1972 and they realized it could read the tapes and process the data. Allen had graduated and was enrolled at Washington State University. Since neither Gates nor Allen had any hardware design experience they were initially stumped. The computer community in Seattle at that time was relatively small. Gates and Allen had a friend, Paul Wennberg who like them had hung around CDC Corporation near the University of Washington cadging open time on the mainframe. Wennberg, founder of the Triakis Corporation, was then an electrical engineering student at the University of Washington. In the course of events Gates and Allen mentioned they were looking for somebody to build them a computer for free. They needed somebody good enough to build a computer from parts and the diagrams found in a computer magazine. It was Wennberg who came up with the man to do just that. After discussion with another friend, Wes Prichard, Prichard suggested to Wennberg that Gates and Allen head over UW Physics building to where Gilbert, another EE student worked in the high-energy tracking lab. It was there that Paul Gilbert was approached by the duo to become a partner in Traf-O-Data. That year Gilbert, piece by piece, wire wrapped, soldered and, assembled from electrical components the (world's first?) working microcomputer. Miles Gilbert, Paul Gilbert's brother, a graphic designer and draftsman, helped the fledgling company by designing the company's logo. Gates and Allen started writing the software. To test the software while the computer was being designed, Paul Allen wrote a computer program on WSU's IBM 360 that would emulate the 8008 microprocessor.

"The computer system was completed and Traf-O-Data produced a few thousand dollars of revenue. Later the State of Washington offered free traffic processing services to cities, ending the need for private contractors, and all three principals moved on to other projects. The real contribution of Traf-O-Data was the experience that Gates and Allen gained developing software for computer hardware that did not exist. Paul Gilbert, sometimes referred to as "the hardware guy", was the man who made Traf-O-Data work. Without his efforts in the construction of this computer, and the day-to-day running of this pioneering company, the rise of what became Microsoft might have been delayed" (Wikipedia article on Traf-O-Data, accessed 07-13-2011).

On April 4, 1975 Bill Gates and Paul Allen officially founded Micro-Soft (Microsoft) in Albuquerque, New Mexico, with Gates as CEO. Allen invented the original  company name, "Micro-Soft."  The initial purpose of the company was to develop an implementation of the programming language BASIC for the MITS Altair personal computer. Revenues of the company totaled $16,005 by the end of 1976.

"Within a year, the hyphen was dropped, and on November 26, 1976, the trade name "Microsoft" was registered with the Office of the Secretary of the State of New Mexico." (Wikipedia article on Bill Gates, accessed 07-13-2011).

Early in 1975 Gates, Allen, and Monte Davidoff wrote a version of the BASIC programming language that ran on the MITS Altair 8800.

"After reading the January 1975 issue of Popular Electronics that demonstrated the Altair 8800, Gates contacted Micro Instrumentation and Telemetry Systems (MITS), the creators of the new microcomputer, to inform them that he and others were working on a BASIC interpreter for the platform. In reality, Gates and Allen did not have an Altair and had not written code for it; they merely wanted to gauge MITS's interest. MITS president Ed Roberts agreed to meet them for a demo, and over the course of a few weeks they developed an Altair emulator that ran on a minicomputer, and then the BASIC interpreter. The demonstration, held at MITS's offices in Albuquerque, was a success and resulted in a deal with MITS to distribute the interpreter as Altair BASIC." (Wikipedia article on Bill Gates, accessed 07-13-2011).

Called Altair BASIC, or in its first iteration, MITS 4K BASIC, the program was written without access to an Altair computer or even an 8080 CPU. Altair BASIC was the first computer language implementation written for a personal computer, and the first product of "Micro-Soft," which in 1976 was renamed Microsoft.

On February 3, 1976 Gates, in his role as "General Partner Micro-Soft", Albuquerque, New Mexico, wrote An Open Letter to Hobbyists, making the distinction between proprietary and open-source software.

Gates's one-page letter was first published in Computer Notes 1, #9 (February 1976). Computer Notes was the house journal of MITS, the company that developed the MITS Altair 8800 and licensed Micro-Soft's version of BASIC.

In December 1980 IBM hired Paul Allen and Bill Gates of Microsoft, then in Bellevue, Washington, to create an operating system (OS) for the new IBM personal computer under development.

Because Microsoft had no OS at the time, they purchased a non-exclusive license to sell a CP/M clone called QDOS ("Quick and Dirty Operating System") from Tim Paterson of Seattle Computer Products for $25,000.

In September 1983 Microsoft introduced Microsoft Word 1.0 for MS-DOS. This was the first word processor to make extensive use of the computer mouse.

On November 20, 1985 Microsoft introduced Windows 1.0 for the PC. Rather than a completely new operating system, Windows 1.0 was a graphical user interface (GUI) multi-tasking operating environment extension of MS-DOS.

My thanks to Chris Morgan for drawing my attention to Gates and Allen's early experience with Traf-O-Data.

(This entry was last revised on 01-18-2015.)

View Map + Bookmark Entry

The World Event/Interaction Survey: A Pioneering Application of Systems Theory to International Relations 1976

Developed by American political scientist and systems analyst Charles A. McClelland, the World Event/Interaction Survey (WEIS) was a pioneering application of Systems Theory to international relations. It was a record of the flow of action and response between countries (as well as non-governmental actors, e.g., NATO) reflected in public events reported daily in The New York Times from January 1966 through December 1978. The unit of analysis in the dataset was the event/interaction, referring to words and deeds communicated between nations, such as threats of military force. Each event/interaction was a daily report of an international event. For each event the actor, target, date, action category, and arena were coded, as well as a brief textual description. 98,043 events were included in the dataset.
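Although WEIS long predates modern data formats, the coding scheme described above maps naturally onto a simple record structure. The Python sketch below is a hypothetical illustration only; the field names paraphrase the coded attributes listed above, and the example values are invented, not drawn from the actual WEIS codebook or dataset.

from dataclasses import dataclass
from datetime import date

@dataclass
class EventInteraction:
    actor: str            # initiating state or non-governmental actor
    target: str           # recipient of the action
    event_date: date      # date of the reported event
    action_category: str  # coded type of action, e.g. a threat of force
    arena: str            # arena of the interaction
    description: str      # brief textual description of the event

# Hypothetical example record, for illustration only.
example = EventInteraction(
    actor="USA",
    target="USSR",
    event_date=date(1966, 1, 15),
    action_category="threaten with force",
    arena="Europe",
    description="Illustrative placeholder event, not taken from the dataset.",
)
print(example)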

Charles A. McClelland, World Event/Interaction Survey Codebook (ICPSR 5211). Ann Arbor, Michigan: Inter-University Consortium for Political and Social Research, 1976.

View Map + Bookmark Entry

Foundation of Apple Computer and the Origin of the Name April 1, 1976 – December 13, 2011

On April 1, 1976 Steve Jobs, Steve "The Woz" Wozniak, and Ronald G. Wayne signed the contract founding Apple Computer, then designated as Apple Computer Company.

Wayne relinquished his 10% stake in the company for $800, only 12 days later, on April 12, 1976.

In an interview done in the mid-1980s Steve Wozniak and the late Steve Jobs recalled how they named their upstart computer company.

" 'I remember driving down Highway 85,' Wozniak says. 'We're on the freeway, and Steve mentions, 'I've got a name: Apple Computer.' We kept thinking of other alternatives to that name, and we couldn't think of anything better.'

"Adds Jobs: 'And also remember that I worked at Atari, and it got us ahead of Atari in the phonebook.' " (http://www.artdaily.org/index.asp?int_sec=2&int_new=52707, accessed 12-30-2011).

In November 1997 Stanford University acquired the historical archives for the early history of Apple Computer.

♦ On December 13, 2011 Sotheby's sold as lot 244 in their Fine Books and Manuscripts sale in New York Wayne's copy of the original contract document for $1,594,500, including buyer's premium, to Cisneros Corporation CEO Eduardo Cisneros. This was the highest price paid to date for anything related to the history of computing.

View Map + Bookmark Entry

Apple II: The First Personal Computer Sold as a Fully Assembled Product 1977

In 1977 Apple Computer introduced the Apple II, the first personal computer sold as a fully assembled product, and the first with color graphics. When the first spreadsheet program, Visicalc, was introduced for the Apple II in 1979 it greatly stimulated sales of the computer as people bought the Apple II just to run Visicalc.

View Map + Bookmark Entry

The First Intentional Spam May 1, 1978

On May 1, 1978 Gary Thuerk, a Digital Equipment Corporation (DEC) sales representative, attempted to send the first intentional commercial spam to every ARPANET address on the West Coast of the U.S. Thuerk thought that ARPANET users would find it cool that DEC had integrated ARPANET protocol support directly into the new DECSYSTEM-20 and TOPS-20 OS.

View Map + Bookmark Entry

The Network Nation 1978

In 1978 Starr Roxanne Hiltz, a sociologist at Upsala College, East Orange, New Jersey, and her husband, Murray Turoff, a professor of computer science, showed how "computer-mediated communication" could develop social networking in their book The Network Nation: Human Communication via Computer.

View Map + Bookmark Entry

Probably the First U. S. Legislation against Computer Crimes 1978

In 1978 the State of Florida passed Fla. Stat. 815.01, the "Florida Computer Crimes Act". This law, which included legislation against the unauthorized modification or deletion of data on a computer system, and against damage to computer hardware including networks, may be the earliest American statute specifically against computer crimes. The maximum penalty for a single offense classified as a Felony of the Third Degree was:

"Up to 5 years of imprisonment and a fine of up to $5,000 or any higher amount equal to double the pecuniary gain derived from the offense by the offender or double the pecuniary loss suffered by the victim."

View Map + Bookmark Entry

The First Dial-Up CBBS February 16, 1978

On February 16, 1978 Ward Christensen founded the Computerized Bulletin Board System (CBBS), the first dial-up bulletin board system (BBS) ever brought online, as a program to allow Christensen and other hobbyists in Chicago to exchange information. This was distinct from Community Memory, a BBS established in Berkeley in 1973, that used hard-wired terminals placed around the town.

"In January 1978, Chicago was hit by the Great Blizzard of 1978, which dumped record amounts of snow throughout the midwest. Among those caught in it were Christensen and Randy Suess, who were members of CACHE, the Chicago Area Computer Hobbyists' Exchange. They had met at that computer club in the mid 1970s and become friends.

"Christensen had created a file transfer protocol for sending binary computer files through modem connections, which was called, simply, MODEM. Later improvements to the program motivated a name change into the now familiar XMODEM. The success of this project encouraged further experiments. Christensen and Suess became enamored of the idea of creating a computerized answering machine and message center, which would allow members to call in with their then-new modems and leave announcements for upcoming meetings.

"However, they needed some quiet time to set aside for such a project, and the blizzard gave them that time. Christensen worked on the software and Suess cobbled together an S-100 computer to put the program on. They had a working version within two weeks, but claimed soon afterwards that it had taken four so that it wouldn't seem like a "rushed" project. Time and tradition have settled that date to be February 16, 1978.

"Because the Internet was still small and not available to most computer users, users had to dial CBBS directly using a modem. Also because the CBBS hardware and software supported only a single modem for most of its existence, users had to take turns accessing the system, each hanging up when done to let someone else have access. Despite these limitations, the system was seen as very useful, and ran for many years and inspired the creation of many other bulletin board systems.

"Ward & Randy would often watch the users while they were online and comment or go into chat if the subject warranted. Sometime online users wondered if Ward & Randy actually existed.

"The program had many forward thinking ideas, now accepted as canon in the creation of message bases or "forums" (Wikipedia article on CBBS, accessed 04-27-2009).

View Map + Bookmark Entry

Compuserve 1979 – 1980

In 1979 Compuserve, of Columbus, Ohio, became the first online service to offer personal computer users email communication and online technical support. The following year it offered real-time online chat with its CB Simulator, the first dedicated online chat service widely available to the public.

View Map + Bookmark Entry

Origins of the Computer History Museum September 1979

In September 1979 Gordon and Gwen Bell, with the assistance of Digital Equipment Corporation, founded the Digital Computer Museum in Boston. This evolved into the Computer History Museum in Mountain View, California.

View Map + Bookmark Entry

1980 – 1990

IBM Introduces the IBM 5150- The IBM PC August 12, 1981

On August 12, 1981 IBM introduced their open architecture personal computer (PC) based on the Intel 8088 processor. The IBM PC  ran PC-DOS, the IBM-branded version of the 16-bit operating system, MS-DOS, provided by Microsoft. The machine was originally designated as the IBM 5150, putting it in the "5100" series, though its architecture was not directly descended from the IBM 5100.

On August 1, 1981 a review of the IBM PC appeared on USENET.

View Map + Bookmark Entry

The First Computer Virus Spread by Floppy Disk 1982

A program called 'Elk Cloner', written by Rich Skrenta in 1982, is credited with being the first computer virus to appear outside the single computer or lab where it was created. It attached itself to the Apple DOS 3.3 operating system and spread by floppy disk.

View Map + Bookmark Entry

William Gibson Coins the Word Cyberspace July 1982

In July 1982 American-Canadian writer William Gibson coined the word "cyberspace" in his story, Burning Chrome, published in Omni magazine.

"It tells the story of two hackers who hack systems for profit. The two main characters are Bobby Quine who specializes in software and Automatic Jack whose field is hardware. A third character in the story is Rikki, a girl with whom Bobby becomes infatuated and for whom he wants to hit it big. Automatic Jack acquires a piece of Russian hacking software that is very sophisticated and hard to trace. The rest of the story unfolds with Bobby deciding to break into the system of a notorious and vicious criminal called Chrome, who handles money transfers for organized crime, and Automatic Jack reluctantly agreeing to help. One line from this story — "...the street finds its own uses for things" — has become a widely-quoted aphorism for describing the sometimes unexpected uses to which users can put technologies (for example, hip-hop DJs' reinvention of the turntable, which transformed turntables from a medium of playback into one of production)" (Wikipedia article on Hackers (anthology), accessed 11-26-2010).

View Map + Bookmark Entry

The "Trash" 80: The First Notebook Computer? 1983

In 1983 the TRS-80 Model 100, made by Kyocera of Kyoto, Japan, and marketed in the U.S. in Radio Shack stores owned by Tandy Corporation of Fort Worth, Texas, introduced the concept of a "notebook" computer. More than 6,000,000 TRS-80s were sold; the introductory price was $1099.00.

View Map + Bookmark Entry

6,000,000 Personal Computers are Sold in the U.S. 1983

In 1983 six million personal computers were sold in the United States.

View Map + Bookmark Entry

The Earliest Fictional Treatment of Word Processing by a Prominent Literary Author January 1983

A short story by American novelist, short story writer, screenwriter, columnist, actor, television producer, film director Stephen King published in the January 1983 issue of Playboy Magazine, under the title of "The Word Processor", may be the earliest fictional treatment of word processing by a prominent literary author. This story, which King wrote on a Wang dedicated word processing microcomputer known as the Wang System 5, was later retitled "Word Processor of the Gods."

View Map + Bookmark Entry

Time Magazine's 1983 "Machine of the Year" is a Personal Computer January 3, 1983 – January 3, 2013

Time Magazine's January 3, 1983 issue, published in print at the end of 1982, featured the personal computer as "Machine of the Year", in distinction to its traditional feature known as "Man of the Year." The cover of the issue depicted a white plaster man by sculptor George Segal contemplating a concept personal computer which Time commissioned from a design firm.

Thirty years later, on January 3, 2013, Time reissued the January 3, 1983 issue as a downloadable bonus for its iPad, Android, Kindle and Nook subscribers, with a new introduction by Harry McCracken. That the reissue was produced in electronic form, rather than print, summarized the enormous changes that occurred in the creation, distribution, and storage of information during those three decades. McCracken summarized his introduction to the reissue in his "Technologizer" column of January 4, 2013, from which I quote:

"When TIME put together the issue, the PC revolution was still young. (The vast majority of homes didn’t yet have one.) But it wasn’t that young: The MITS Altair 8800, the first PC that mattered, came out in 1975. In 1977, it was followed by the Apple II, Commodore’s PET 2001 and Radio Shack’s TRS-80, the first truly consumery, ready-to-use machines. And another half-decade of evolution occurred before TIME commemorated the PC’s arrival so memorably.

"In retrospect, what the 21-page Machine of the Year cover package captures isn’t the beginning of the PC so much as the end of the beginning. The industry still had room for a bevy of hobbyist-oriented, sometimes downright rudimentary computers from Apple, Atari, Commodore, Osborne, Radio Shack, Texas Instruments, Timex (!) and others. None of them had futuristic features like a graphical user interface and a mouse; most ran their own operating systems and weren’t compatible with anything else on the market.

"Here and there, though, the issue hints at the changes which would really get underway in 1983. It mentions the IBM PC, which had shipped in 1981, and says that it’s setting standards for the whole industry. But it doesn’t talk about the phenomenon which would dominate the business by the middle of the decade: IBM PC-compatible “clones” which could run the same software as Big Blue’s system. That’s because there was only one clone in existence. (The second, Compaq’s massively successful, sewing machine-sized 'portable,' showed up in March 1983.)....

"As I wrote in my introduction for the tablet reissue, much has changed about computers since 1983. But one of the striking things about the issue is that it’s jam-packed with reminders of what hasn’t changed. Most of the things we do with PCs, tablets and phones in 2013 are in there: e-mail, games, word processing, learning, personal finance, music and cloud services. (O.K., in the 1980s, they weren’t called cloud services — they were known as 'mainframes.')"

View Map + Bookmark Entry

The GNU Free Software Project September 23, 1983

On September 23, 1983 Richard Stallman of MIT announced the GNU free software project on the net.unix-wizards and net.usoft newsgroups.

View Map + Bookmark Entry

"Cyberspace" Popularized 1984

In 1984 American-Canadian writer William Gibson popularized the term “cyberspace” in his cyberpunk novel Neuromancer.

"The portion of Neuromancer cited in this respect is usually the following:

"Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts... A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.

" . . . . Gibson later commented on the origin of the term in the 2000 documentary No Maps for These Territories:

All I knew about the word "cyberspace" when I coined it, was that it seemed like an effective buzzword. It seemed evocative and essentially meaningless. It was suggestive of something, but had no real semantic meaning, even for me, as I saw it emerge on the page" (Wikipedia article on Cyberspace, accessed 11-26-2010).

Gibson coined the term cyberspace in his short story, Burning Chrome (1982).

View Map + Bookmark Entry

A Guide to All PC Software in One Volume 1984

In 1984 Stewart Brand issued The Whole Earth Software Catalog, to do for computing what the Whole Earth Catalog had done for the counterculture: identify and recommend the best tools as they emerged. Notably, at this early stage in the history of personal computing all available software could be well described in a single volume. A large glossy book published in Sausalito, California, the Catalog was written in a glib, conversational style that took "most of the bugs out of microprocessing."

View Map + Bookmark Entry

One of the First Online Communities April 1, 1985

On April 1, 1985 Stewart Brand and Larry Brilliant founded The Whole Earth ‘Lectronic Link, one of the first online communities, in Sausalito, California. It later became known as The WELL, and connected to the Internet in 1992.

View Map + Bookmark Entry

The Free Software Foundation October 4, 1985

On October 4, 1985 Richard Stallman of MIT founded the Free Software Foundation to support the free software movement.

View Map + Bookmark Entry

Influential on the Development of Cyberpunk 1986

In 1986 the magazine High Frontiers renamed itself Reality Hackers to better reflect its drug culture and computer themes. In 1989 it changed its name once again, to Mondo 2000. In this form it influenced the development of cyberpunk culture until its closure in 1998.

View Map + Bookmark Entry

25,000,000 PCs Have Been Sold in the U.S. 1987

By 1987, 25,000,000 PCs had been sold in the United States.

View Map + Bookmark Entry

Foundation of the First Commercial ISP May 12, 1987

In 1987 American computer scientist Richard L. Adams, Jr. founded UUNET Communications Services in Northern Virginia, the first commercial Internet service provider. On May 12 UUNET passed its first traffic via the CompuServe network using UUCP (Unix-to-Unix Copy Protocol).

"Although the ISP initially offered services only to research institutes and universities, it wasn't long before Adams began expanding operations. The launch of AlterNet in 1990 marked UUnet's first foray into commercial service, as well as its conversion to a for-profit company. The firm's new focus on the corporate sector paid off a few years later when it landed the contract to carry Internet traffic for the Microsoft Network, beating out competitors like AT&T Corp. and MCI Communications Corp. Adams took UUnet public in 1995, in one of the largest technology public offerings to date, and a year later agreed to a $2 billion buyout offer from MFS Communications, which was acquired by WorldCom shortly thereafter" (http://ecommerce.hostip.info/pages/2/Adams-Richard-L.html, accessed 02-28-2009).

View Map + Bookmark Entry

"Toward a National Research Telecommunications Network" November 1987

In 1987 C. Gordon Bell, as Chairman of the Subcommittee on Computer Networking, Infrastructure and Digital Communications of the Federal Coordinating Council on Science, Engineering and Technology, published A Report to the Office of Technology Policy on Computer Networks to Support Research in the United States. A Study of Critical Problems and Future Options. The report states:

“Over the next 15 years, there will be a need for a 100,000 times increase in national network capacity to enable researchers to exploit computer capabilities for representing complex data in visual form, for manipulating and interacting with this complex data and for sharing large data bases with other researchers.”

“As the first step, the current Internet system developed by the Defense Advanced Research Projects Agency and the networks supported by agencies for researchers should be interconnected. These facilities, if coordinated and centrally managed, have the capability to interconnect many computer networks into a single virtual computer network. As the second step, the existing computer networks that support research programs should be expanded and upgraded to serve 200-400 research institutions with 1.5 million bits per second capabilities.

“As the third step, network service should be provided to every research institution in the U.S., with transmission speeds of three billion bits per second.” (p. 3)

Bell summarized the report in an article called Toward A National Research Telecommunications Network.

View Map + Bookmark Entry

The First Computer-Animated Film to Win an Academy Award 1988

In 1988 Tin Toy by Pixar, Emeryville, California, became the first computer-animated film to win an Academy Award, for the "best animated short film."

"Tin Toy marked the first time a character with life-like bendable arms and knees, surfaces and facial components was animated digitally. The challenge was balancing it's 'cartoony' look with a baby's real looks."

View Map + Bookmark Entry

The First Operational Online Antiquarian Bookselling Site 1988

In 1988 Larry Costello founded Antiquarian Databases International (ADI). A Bulletin Board System (BBS), ADI was the first operational online antiquarian bookselling site, and an extremely early venture in ecommerce, but it closed after only a few months.

View Map + Bookmark Entry

Boing-Boing Begins as a Print Magazine 1988 – 2000

In 1988 Mark Frauenfelder and Carla Sinclair began publication on paper of the zine bOING bOING, "The World's Greatest Neurozine." The magazine became a founding influence in the development of cyberpunk. It became a website in 1995, and was relaunched as a blog—Boing Boing, "a directory of wonderful things," in 2000.

View Map + Bookmark Entry

The First Computer Worm to Attract Wide Attention November 2, 1988

The first computer worm to attract wide attention, the Morris worm or Internet worm, quickly infected a large number of computers on the Internet on November 2, 1988. It was written by Robert Tappan Morris, a graduate student at Cornell University.

"It propagated through a number of bugs in BSD Unix and its derivatives. Morris himself was convicted under the US Computer Crime and Abuse Act and received three years probation, community service and a fine in excess of $10,000."

View Map + Bookmark Entry

1990 – 2000

The Electronic Frontier Foundation is Founded 1990

In 1990 Mitchell Kapor, John Gilmore, and John Perry Barlow founded the Electronic Frontier Foundation in San Francisco, to defend individual rights in the digital world. The three had met on The Well.

The motivation for the creation of the organization was the “massive search and seizure on Steve Jackson Games by the United States Secret Service early in 1990.” The first successful achievement of the new foundation was to lay “the groundwork for the successful representation of Steven Jackson Games (SJG) in a Federal court case to prosecute the United States Secret Service for unlawfully raiding their offices and seizing computers.”

View Map + Bookmark Entry

Foundation of the Coalition for Networked Information 1990

In 1990 the Coalition for Networked Information (CNI) was founded in Washington, D.C. By the end of its first year its membership consisted of 18 institutions.

View Map + Bookmark Entry

Berners-Lee Plans the World Wide Web November 12, 1990

On November 12, 1990 Tim Berners-Lee at CERN, Geneva, Switzerland, issued World Wide Web: Proposal for a Hypertext Project.

View Map + Bookmark Entry

The First Web Page November 13, 1990

At CERN on November 13, 1990 Tim Berners-Lee wrote the first web page on a NeXT workstation.

View Map + Bookmark Entry

The First Web Browser and Web Server December 25, 1990

During the Christmas holiday, 1990 Tim Berners-Lee wrote the software tools necessary for a working World Wide Web:

1. The first web browser called WorldWideWeb.

2. A WYSIWYG HTML editor

3. The first Web server, CERN httpd, which was operational on Christmas Day 1990. (A rough modern sketch of what such a server does appears below.)
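Berners-Lee's original browser and CERN httpd code are not reproduced in this entry. Purely as an illustration of the essential job a web server performs (accepting an HTTP request for a resource and returning a hypertext document in reply), here is a minimal sketch using Python's standard library; the handler class and the sample page are invented for this example and are unrelated to the 1990 software.

# Minimal sketch of what a web server does: accept an HTTP request and
# return a hypertext document. This is NOT CERN httpd; it is only an
# illustrative modern equivalent built on Python's standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><h1>Hello, WorldWideWeb</h1><p>A minimal hypertext page.</p></body></html>"

class TinyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request receives the same small hypertext page in response.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Serve on localhost:8000; a browser pointed at http://localhost:8000/ will render the page.
    HTTPServer(("localhost", 8000), TinyHandler).serve_forever()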

View Map + Bookmark Entry

"Clearing the Way for Electronic Commerce" 1991

In 1991 the National Science Foundation (NSF), Arlington, Virginia, lifted restrictions on the commercial use of the NSFNET Backbone Network, clearing the way for electronic commerce.

View Map + Bookmark Entry

Computer Professionals for Social Responsibility March 26 – March 28, 1991

Computer Professionals for Social Responsibility (CPSR) held the First Conference on Computers, Freedom & Privacy from March 26 to 28, 1991 in Burlingame, California.

View Map + Bookmark Entry

Berners-Lee Makes Web Server and Web Browser Software Available at No Cost August 6, 1991

WorldWideWeb - Executive Summary by Tim Berners-Lee of CERN, Geneva, Switzerland, posted on the alt.hypertext newsgroup on August 6, 1991, gave a short summary of the World Wide Web project, explained where to download a web server and line mode browser, and made the software available all over the world at no cost.

"The WWW project merges the techniques of information retrieval and hypertext to make an easy but powerful global information system."

"The project started with the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing within internationally dispersed teams, and the dissemination of information by support groups."

View Map + Bookmark Entry

One of the First U.S. Cases in Cyberspace Law October 29, 1991

On October 29, 1991 one of the first U.S. cases related to Cyberspace law was decided: Cubby v. CompuServe, 776 F. Supp. 135 (1991). It "suggested that online companies would not be liable for the acts of their customers. CompuServe exerted no control whatsoever over the presumably false and defamatory statements which were the subject of the suit; their forum sysops were independent entrepreneurs. Prior to this decision, the liability risk was largely undecided."

View Map + Bookmark Entry

The First Image Posted to the Web 1992

The first image posted to the web was a photograph of a CERN singing group called Les Horribles Cernettes posted in 1992.

View Map + Bookmark Entry

The Internet Society 1992

The Internet Society (ISOC) was chartered in 1992.  Its headquarters are in Reston, Virginia. In 2012 the society had 80 national chapters and 50,000 individual members.

View Map + Bookmark Entry

There are 50 Web Servers on the Internet 1992

In 1992 there were 50 Web Servers on the Internet.

View Map + Bookmark Entry

341,634 Percent Growth Rate on the Internet 1993

In 1993 traffic on the Internet expanded at a 341,634 percent growth rate.

View Map + Bookmark Entry

Perhaps the First Law Review Symposium Dedicated to Cyberspace 1993

The 1993 Villanova Law Review Symposium: The Congress, The Courts, and Computer-Based Communications Networks: Answering Questions About Access and Content Control was "perhaps the first law review symposium dedicated to cyberspace."

View Map + Bookmark Entry

Only About 2000 People in China Use the Internet 1993

In 1993 it was estimated that in China, a country with about 1,000,000,000 people, only about 2000 people used the Internet.

View Map + Bookmark Entry

There are 250 Web Servers on the Internet 1993

In 1993 there were 250 web servers on the Internet.

View Map + Bookmark Entry

The First Tablet Computer with Wireless Connectivity April 1993 – July 1994

In April 1993 AT&T introduced the AT&T EO Personal Communicator, the first tablet computer with wireless connectivity via a cellular phone. The device, which provided wireless voice, email, and fax communications, was developed by GO/Eo, a subsidiary of GO Corporation, both of which were acquired by AT&T in 1993. As advanced as it was, the AT&T Personal Communicator was probably far ahead of the market. EO Inc., 52% owned by AT&T, failed to meet its revenue targets and shut down on July, 1994.

"Two models, the Communicator 440 and 880 were produced and measured about the size of a small clipboard. Both were powered by the AT&T Hobbit chip, created by AT&T specifically for running code from the C programming language. They also contained a host of I/O ports - modem, parallel, serial, VGA out and SCSI. The device came with a wireless cellular network modem, a built-in microphone with speaker and a free subscription to AT&T EasyLink Mail for both fax and e-mail messages.

"Perhaps the most interesting part was the operating system, PenPoint OS, created by GO Corporation. Widely praised for its simplicity and ease of use, the OS never gained widespread use. Also equally compelling was the tightly integrated applications suite, Perspective, licensed to EO by Pensoft" (Wikipedia article on EO Personal Communicator, accessed 02-03-2010).

Ken Maki, The AT&T EO Travel Guide. (1993).

View Map + Bookmark Entry

CERN Releases Rights to World Wide Web Software April 30, 1993

On April 30, 1993 CERN, Geneva, Switzerland, published documents that released the World Wide Web software into the public domain.

View Map + Bookmark Entry

The First Commercial Website with the First Online Advertising May 1993

In May 1993 Tim O’Reilly, Sebastopol, California, launched the Global Network Navigator. This was the first web portal and the first true commercial website. According to a statement by Tim O'Reilly, it also contained the first online advertising. The Global Network Navigator was sold to America Online in 1995.

View Map + Bookmark Entry

There are 2500 Web Servers and 10,000 Websites 1994

In 1994 the number of websites reached 10,000. There were 2500 web servers on the Internet.

View Map + Bookmark Entry

Electronic Privacy Information Center (EPIC) 1994

In 1994 David Banisar, Marc Rotenberg, and David Sobel founded The Electronic Privacy Information Center (EPIC) in Washington, D.C. to focus public attention on emerging civil liberties issues and to protect privacy, the First Amendment, and constitutional values in the information age. EPIC was a joint project of the Fund for Constitutional Government and Computer Professionals for Social Responsibility.

View Map + Bookmark Entry

One of the Earliest Blogs January 1994

In January 1994 Justin Hall, a student at Swarthmore College, started his web-based diary, Justin's Links from the Underground, Links.net, offering one of the earliest guided tours of the web. This is considered one of the earliest blogs.

View Map + Bookmark Entry

"Selling Wine without Bottles" March 1994

John Perry Barlow, lyricist for The Grateful Dead, published in the March 1994 issue of Wired magazine an article entitled The Economy of Ideas. A framework for patents and copyrights in the Digital Age. (Everything you know about intellectual property is wrong.)

This, or a very similar text, was also issued under the title of: Selling Wine Without Bottles: The Economy of Mind on the Global Net.

View Map + Bookmark Entry

The First Internet Cafe March 12 – March 13, 1994

Commissioned to develop an Internet event for "Towards the Aesthetics of the Future," an arts weekend at the Institute of Contemporary Arts (ICA) in London, Ivan Pope wrote a proposal outlining the concept of a café with Internet access from the tables. Pope's Cybercafe, the first Internet cafe, operated during the weekend event, March 12-13, 1994.

View Map + Bookmark Entry

Commercial Spamming Starts with the "Green Card Spam" April 12, 1994

Commercial spamming started when a pair of immigration lawyers from Phoenix, Arizona—Laurence Canter and Martha Siegel—used bulk Usenet postings to advertise immigration law services on April 12, 1994. This was called the "Green Card spam", after the subject line of the postings: "Green Card Lottery-Final One?"

"Canter and Siegel sent their advertisement, with the subject 'Green Card Lottery - Final One?', to at least 5,500 Usenet discussion groups, a huge number at the time. Rather than cross-posting a single copy of the message to multiple groups, so a reader would only see it once (considered a common courtesy when posting the same message to more than one group), they posted it as separate postings in each newsgroup, so a reader would see it in each group they read. Their internet service provider, Internet Direct, received so many complaints that its mail servers crashed repeatedly for the next two days; it promptly terminated their service. Despite the ire directed at the two lawyers, they posted another advertisement to 1,000 newsgroups in June 1994. This time, Arnt Gulbrandsen put together the first software "cancelbot" to trawl Usenet and kill their messages within minutes. The couple claimed in a December 1994 interview to have gained 1,000 new clients and 'made $100,000 off an ad that cost them only pennies' " (Wikipedia article on Lawrence Cantor and Martha Siegel, accessed 03-17-2012).

View Map + Bookmark Entry

Amazon.com is Founded July 1994 – July 1995

In July 1994 Jeff Bezos of Seattle, Washington, incorporated Amazon.com. The company originally promoted itself as "Earth's biggest book store." 

Amazon.com was very nearly called "Cadabra," as in "abracadabra." Bezos rapidly re-conceptualized the name when his lawyer misheard the word as "cadaver." Bezos instead named the business after the river for two reasons: to suggest scale, as the earth's biggest book store, and because website listings were often alphabetical at that time.

In July 1995 Amazon sold its first book, Douglas Hofstadter's Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought.

View Map + Bookmark Entry

The IBM Simon Personal Communicator: The First Smartphone August 16, 1994 – February 1995

Distributed in the United States only by BellSouth Cellular Corp between August 1994 and February 1995, the IBM Simon Personal Communicator, a handheld, touchscreen cellular phone and Personal Digital Assistant (PDA), was the first "smartphone," though the term was not coined until 1997. The phone operated within a 15 state network; about 50,000 Simons were sold.

"In addition to its ability to make and receive cellular phone calls, Simon was also able to send and receive faxese-mails and cellular pages. Simon featured many applications including an address book, calendar, appointment scheduler, calculator, world time clock, electronic note pad, handwritten annotations and standard and predictive stylus input screen keyboards" (Wikipedia article on IBM Simon, accessed 08-16-2014).

View Map + Bookmark Entry

"From Webspace to Cyberspace": A Pioneering Cultural and Historical Work December 1994

In December 1994 Kevin Hughes, then of Menlo Park, California, privately published version 1.0 of a pioneering cultural and historical work entitled From Webspace to Cyberspace.

"August 6, 1995 Announcing the release of "From Webspace to Cyberspace", a primer for the Age of the Internet. Originally released as an internal white paper at EIT in December 1994, it is now freely available. It is the sequel to "Entering the World-Wide Web: A Guide to Cyberspace".

It covers:

A brief history and overview of the World-Wide Web

An overview of today's online collaborative systems and descriptions of future work

An introduction to true cyberspace: what it means, media analyses, common myths, and applications of virtual environment technology

Future VRML issues, new tools, VRML browsers, world elements, and cyberspacial design guidelines

Descriptions of next generation VRML browsers and an analysis of navigation methods

Current and future trends in human-computer interface design, new environments, basic layouts, vision-related issues, and input/output devices

Three-dimensional world design guidelines, with examples and never before seen 3D prototypes of collaborative spaces and universes developed at EIT

The future of the Internet: current problems with the World-Wide Web, new business models, new media, culture acceleration, and what cyberspace needs

It also includes the "History of Cyberspace", five parallel timelines with almost 1,000 events that track:

Influential popular media and events of the last 500 years

The history of the Internet

The history of VR and VRML

The history of hypertext, hypermedia, and the World-Wide Web

The history of computers" (http://lists.w3.org/Archives/Public/www-talk/msg01457.html, accessed 12-04-2013).

In December 2013 version 1.1 of Hughes's paper, produced in July 1995, remained available online.

View Map + Bookmark Entry

Probably the First For-Profit Social Networking Site 1995

In 1995 Randy Conrads founded Classmates.com. This was probably the first for-profit social networking website.

View Map + Bookmark Entry

There are Approximately 73,500 Servers; WWW is Generally Equated with the Internet 1995

During 1995 up to 700 new web servers were registered each day, and there were approximately 73,500 servers.

During this year WWW was generally equated with the Internet.

View Map + Bookmark Entry

"The Book and Beyond" Exhibition Takes Place April 7 – October 1, 1995

From April 7 to October 1, 1995 the Victoria and Albert Museum in London held the exhibition The Book and Beyond: Electronic Publishing and the Art of the Book in its Design Now Room, 20th Century Gallery. The museum published a pamphlet to accompany the exhibition, and in 2001 incorporated material from the pamphlet into a website.

The exhibition was divided into five sections:

1. Introduction

2. Artists' books and books as art

3. Artists' books and books as art

4. Electronic publications

"Various forms of "electronic publishing" - including videodiscs, "floppy books", CD-ROMs, and the Internet - have become increasingly evident in the 1980s and 1990s. Some electronic publications are based upon information which was previously available in a linear form, and they represent a natural progression from computer typesetting or video. Others have been conceived specifically to exploit the potential offered by the new media. The method of presentation is crucial to the success (or otherwise) of these publications, and designers and publishers are still learning to use the new technology."

5. Artists, computers and publishing

View Map + Bookmark Entry

The Beginning of the "Dot-Com Bubble" August 9, 1995

On August 9, 1995 Netscape Communications, Mountain View, California, had a very successful IPO. The stock, initially intended to be offered at $14 per share, was offered at double that for the IPO, and reached $75 on the first day of trading.

This was later considered the beginning of the "dot-com bubble."

View Map + Bookmark Entry

There are 100,000 Websites 1996

In 1996 there were 14,352,000 Internet hosts and 100,000 websites.

View Map + Bookmark Entry

A Declaration of the Independence of Cyberspace 1996

In response to the passage of the Telecommunications Act of 1996, John Perry Barlow wrote A Declaration of the Independence of Cyberspace.

View Map + Bookmark Entry

There are 1,000,000 Websites April 1997

In April 1997 there were one million websites on the Internet.

View Map + Bookmark Entry

Kasparov Loses to Deep Blue: The First Time a Human Chess Player Loses to a Computer Under Tournament Conditions May 11, 1997

On May 11, 1997 Garry Kasparov, sometimes regarded as the greatest chess player of all time, resigned 19 moves into Game 6 against Deep Blue, an IBM RS/6000 SP supercomputer capable of calculating 200 million chess positions per second. This was the first time that a human world chess champion lost to a computer under tournament conditions.

The event, which took place at the Equitable Center in New York, was broadcast live from IBM's website via a Java viewer, and at the time set a record as the largest "Net event."

"Since the emergence of artificial intelligence and the first computers in the late 1940s, computer scientists compared the performance of these 'giant brains' with human minds, and gravitated to chess as a way of testing the calculating abilities of computers. The game is a collection of challenging problems for minds and machines, but has simple rules, and so is perfect for such experiments.

"Over the years, many computers took on many chess masters, and the computers lost.

"IBM computer scientists had been interested in chess computing since the early 1950s. In 1985, a graduate student at Carnegie Mellon University, Feng-hsiung Hsu, began working on his dissertation project: a chess playing machine he called ChipTest. A classmate of his, Murray Campbell, worked on the project, too, and in 1989, both were hired to work at IBM Research. There, they continued their work with the help of other computer scientists, including Joe Hoane, Jerry Brody and C. J. Tan. The team named the project Deep Blue. The human chess champion won in 1996 against an earlier version of Deep Blue; the 1997 match was billed as a 'rematch.'

"The champion and computer met at the Equitable Center in New York, with cameras running, press in attendance and millions watching the outcome. The odds of Deep Blue winning were not certain, but the science was solid. The IBMers knew their machine could explore up to 200 million possible chess positions per second. The chess grandmaster won the first game, Deep Blue took the next one, and the two players drew the three following games. Game 6 ended the match with a crushing defeat of the champion by Deep Blue." 

"The AI crowd, too, was pleased with the result and the attention, but dismayed by the fact that Deep Blue was hardly what their predecessors had imagined decades earlier when they dreamed of creating a machine to defeat the world chess champion. Instead of a computer that thought and played chess like a human, with human creativity and intuition, they got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force. As Igor Aleksander, a British AI and neural networks pioneer, explained in his 2000 book, How to Build a Mind:  

" 'By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s. In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives.'

"It was an impressive achievement, of course, and a human achievement by the members of the IBM team, but Deep Blue was only intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better" (Gary Kasparov, "The Chess Master and the Computer," The New York Review of Books, 57, February 11, 2010).

View Map + Bookmark Entry

The Cluetrain Manifesto 1998

In 1998 Rick Levine, Christopher Locke, Doc Searls and David Weinberger published the Cluetrain Manifesto containing 95 theses, presumably, and possibly grandiosely, in the tradition of Martin Luther.

The manifesto was first published online, followed in December 1999 by a printed book issued by Perseus Books in Cambridge, Massachusetts.

“A powerful global conversation has begun.” “Through the Internet, people are discovering and inventing new ways to share relevant knowledge with blinding speed. As a direct result, markets are getting smarter--and getting smarter faster than most companies.” “Markets are conversations.”

View Map + Bookmark Entry

"You've Got Mail", a Movie about Love, Email, and the Book Trade 1998

You've Got Mail, an American romantic comedy film set in New York City starring Tom Hanks and Meg Ryan, was released by Warner Brothers in 1998. The film dramatized a romantic relationship that develops over email, featuring AOL's "You've got mail" slogan in product placement. Paralleling this film about computers and society was the film's subplot of the forced closure of a small independent bookshop by competition from a big-box chain bookstore — thus You've Got Mail was not only a film about computers and romance, but also a commentary about the changing face of the book trade.

View Map + Bookmark Entry

The Digital Millennium Copyright Act October 12, 1998

On October 12, 1998 the U.S. Congress passed the Digital Millennium Copyright Act.

View Map + Bookmark Entry

Computers Have Not Caused a Reduction in Paper Usage or Printing 1999

In 1999 it required about 756,000,000 trees to produce the world’s annual paper supply.

“The UNESCO Statistical Handbook for 1999 estimates that paper production provides 1,510 sheets of paper per inhabitant of the world on average, although in fact the inhabitants of North America consume 11,916 sheets of paper each (24 reams), and inhabitants of the European Union consume 7,280 sheets of paper annually (15 reams), according to the ENST report. At least half of this paper is used in printers and copiers to produce office documents.”

Thus computers have not reduced paper usage; if anything, because nearly everyone who owns a personal computer also owns a printer, and more and more people acquire computers every year, the amount of printing being done continues to increase.
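As a quick check of the per-capita figures quoted above, the sheet counts can be converted to reams, assuming the conventional 500 sheets per ream (an assumption; the report does not state the ream size it used):

# Rough check of the per-capita paper figures quoted above.
# Assumes a conventional ream of 500 sheets; the report does not state its ream size.
SHEETS_PER_REAM = 500

figures = {
    "North America": 11_916,   # sheets per inhabitant per year
    "European Union": 7_280,
    "World average": 1_510,
}

for region, sheets in figures.items():
    print(f"{region}: {sheets} sheets = {sheets / SHEETS_PER_REAM:.1f} reams per person per year")

# North America: 11916 sheets = 23.8 reams  (the text rounds to 24)
# European Union: 7280 sheets = 14.6 reams  (the text rounds to 15)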

View Map + Bookmark Entry

The Matrix: Referencing Cyberpunk and Hacker Cultures 1999

The Matrix, a 1999 science fiction-martial arts-action film,

"describes a future in which reality perceived by humans is actually the Matrix: a simulated reality created by sentient machines in order to pacify and subdue the human population while their bodies' heat and electrical activity are used as an energy source. Upon learning this, computer programmer "Neo" is drawn into a rebellion against the machines. The film contains many references to the cyberpunk and hacker subcultures; philosophical and religious ideas; and homages to Alice's Adventures in Wonderland, Hong Kong action cinema, Spaghetti Westerns, and Japanese animation" (Wikipedia article on The Matrix, accessed 12-23-2008).

View Map + Bookmark Entry

"The Internet of Things" 1999

In 1999 British technology pioneer Kevin Ashton, co-founder of the Auto-ID Labs at MIT, invented the term "The Internet of Things" to describe a system where the Internet is connected to the physical world via ubiquitous sensors, including RFID (Radio-frequency identification).

"Ashton's original definition was: 'Today computers—and, therefore, the Internet—are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings—by typing, pressing a record button, taking a digital picture or scanning a bar code. Conventional diagrams of the Internet ... leave out the most numerous and important routers of all - people. The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world. And that's a big deal. We're physical, and so is our environment ... You can't eat bits, burn them to stay warm or put them in your gas tank. Ideas and information are important, but things matter much more. Yet today's information technology is so dependent on data originated by people that our computers know more about ideas than things. If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best. The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.' "(Wikipedia article on Internet of Things, accessed 01-07-2013).

View Map + Bookmark Entry

The Napster Sharing Service for MP3 Files is Launched June 1, 1999

On June 1, 1999 American computer programmer and entrepreneur Shawn Fanning released the Napster file sharing service for MP3 files from his headquarters in Hull, Massachusetts. After Napster's early explosive success Fanning moved the company to San Mateo, California. "The original company ran into legal difficulties over copyright infringement, ceased operations and was eventually acquired by Roxio. In its second incarnation Napster became an online music store until it merged with Rhapsody on 1 December 2011" (Wikipedia article on Napster, accessed 03-18-2012).

"It [Napster] was the first of the massively popular peer-to-peer file sharing systems, although it was not fully peer-to-peer since it used central servers to maintain lists of connected systems and the files they provided, while actual transactions were conducted directly between machines. Although there were already media which facilitated the sharing of files across the Internet, such as IRC, Hotline, and USENET, Napster specialized exclusively in music in the form of MP3 files and presented a friendly user-interface. The result was a system whose popularity generated an enormous selection of music to download."

View Map + Bookmark Entry

2000 – 2005

The Size of the Internet in 2000 2000

At some point in 2000 there were 72,398,092 Internet hosts and 9,950,491 websites.

Web size estimates by Inktomi at this time surpassed one billion indexable pages.

View Map + Bookmark Entry

Code and Other Laws of Cyberspace 2000

The front cover of Code and Other Laws of Cyberspace by Lawrence Lessig.

 

Lawrence Lessig

In 2000 Lawrence Lessig of Stanford Law School published Code and Other Laws of Cyberspace, in which he argued:

"that cyberspace changes not only the technology of copying but also the power of law to protect against illegal copying (125-127). He explores the notion that computer code may regulate conduct in much the same way that legal codes do. He goes so far as to argue that code displaces the balance in copyright law and doctrines such as fair use (135). If it becomes possible to license every aspect of use (by means of trusted systems created by code), then no aspect of use would have the protection of fair use(136). The importance of this side of the story is generally underestimated and, as the examples will show, very often, code is even (only) considered as an extra tool to fight against 'unlimited copying'."

View Map + Bookmark Entry

Climax of the Dot-Com Bubble March 10, 2000

The Netscape logo

The dot-com bubble, thought to have begun with the IPO of Netscape on August 9, 1995, reached its climax on March 10, 2000 with the NASDAQ peaking at 5132.52.

After this date the dot-com bubble began to burst.

View Map + Bookmark Entry

There are 20,000,000 Websites on the Internet. September 2000

In September 2000 there were 20,000,000 websites on the Internet; the number had doubled since February of 2000.

View Map + Bookmark Entry

Safeguarding Internet Security in China December 28, 2000

On December 28, 2000 the 19th Session of the National People's Congress of China adopted the Decision of the Standing Committee of NPC Regarding the Safeguarding of Internet Security.

View Map + Bookmark Entry

The Future of Ideas: The Fate of the Commons in a Connected World 2001

Lawrence Lessig

The cover art for The Future of Ideas : the fate of commons in a connected world by Lawrence Lessig

In 2001 Lawrence Lessig, then a professor at Stanford Law School, published The Future of Ideas: The Fate of the Commons in a Connected World, in which he argued that while

". . . copyright helps artists get rewarded for their work, . . . a copyright regime that is too strict and grants copyright for too long a period of time (i.e. the current US legal climate) can destroy innovation, as the future movements by corporate interests to promote longer and tighter protection ofintellectual property in three layers: the code layer, the content layer, and the physical layer. . . . In the end, he stresses the importance of existing works entering the public domain in a reasonably short period of time, as the founding fathers intended."

View Map + Bookmark Entry

The Film: "A. I. Artificial Intelligence" 2001

Steven Spielberg

The movie poster for A.I. Artificial Intelligence

Stanley Kubrick

In 2001 American director, screen writer and film producer Steven Spielberg directed, co-authored and produced, through DreamWorks and Amblin Entertainment, the science fiction film A.I. Artificial Intelligence, telling the story of David, an android robot child programmed with the ability to love and to dream. The film explored the hopes and fears involved with efforts to simulate human thought processes, and the social consequences of creating robots that may be better than people at specialized tasks.

The film was a 1970s project of Stanley Kubrick, who eventually turned it over to Spielberg. The project languished in development hell for nearly three decades before technology advanced sufficiently for a successful production. The film required enormously complex puppetry, computer graphics, and make-up prosthetics, which are well-described and explained in the supplementary material in the two-disc special edition of the film issued on DVD in 2002.

View Map + Bookmark Entry

Foundation of the Oxford Internet Institute 2001

In 2001 the Oxford Internet Institute (OII) was founded at the University of Oxford for the study of the social implications of the Internet. 

When I wrote this entry in November 2013 it remained the only major department in a top-ranked international university to offer multi-disciplinary social science degree programs focussing on the Internet, including a one-year MSc in Social Science of the Internet and a DPhil in Information, Communication and the Social Sciences.

View Map + Bookmark Entry

The Wikipedia Begins January 15, 2001

The Wikipedia logo

Jimmy Wales

Larry Sanger

On January 15, 2001 American entrepreneur Jimmy Wales, American philosopher Larry Sanger, and others founded the Wikipedia, the Free Encyclopedia, as an English language project.

"In its first year, Wikipedia generated 20,000 articles, and had acquired 200 regular volunteers working to add more (this compares with the 55,000 articles in the Columbia [Encyclopedia], all subject to rigorous standards of editing and fact-checking, though this in itself was a small-scale enterprise compared to the behemoths of the industry like the Encyclopaedia Britannica, whose 1989 edition covered 400,000 different topics). By the end of 2002, the number of entries on Wikipedia had more than doubled. But it was only in 2003, once it became apparent that there was nothing to stop it continuing to double in size (which is what it did), that Wikipedia started to attract attention outside the small tech-community that had noticed its launch. In early 2004, there were 188,000 articles; by 2006, 895,000. In 2007 there were signs that the pace of growth might start to level off, and only in 2008 did it begin to look like the numbers might be stabilising. The English-language version of Wikipedia currently has more than 2,870,000 entries, a number that has increased by 500,000 over the last 12 months. However, the English-language version is only one of more than 250 different versions in other languages. German, French, Italian, Polish, Dutch and Japanese Wikipedia all have more than half a million entries each, with plenty of room to add. Xhosa Wikipedia currently has 110. Meanwhile, the Encyclopaedia Britannica had managed to increase the number of its entries from 400,000 in 1989 to 700,000 by 2007" (Runciman, "Like Boiling a Frog," Review of "The Wikipedia Revolution" by Andrew Lih, London Review of Books, 28 May 2009, accessed 05-23-2009).

View Map + Bookmark Entry

An Injunction Against Napster to Prevent Trading of Copyrighted Music March 5, 2001

The Napster logo

On March 5, 2001 the Ninth Circuit Court, San Francisco, issued an injunction ordering Napster to prevent the trading of copyrighted music on its network.

View Map + Bookmark Entry

The Size of the Internet in 2002 2002

In 2002 there were 147,344,723 Internet hosts and 36,689,008 websites (Cisco). The estimated number of Internet users worldwide was about 600,000,000.

View Map + Bookmark Entry

"Minority Report": The Movie 2002

Steven Spielberg

The movie poster for Minority Report

The cover art for Minority Report by Philip Dick

Philip Dick

In 2002 Steven Spielberg directed the science fiction film Minority Report, loosely based on the short story "The Minority Report" by Philip K. Dick.

"It is set primarily in Washington, D.C. and Northern Virginia in the year 2054, where "Precrime", a specialized police department, apprehends criminals based on foreknowledge provided by three psychics called 'precogs'. The cast includes Tom Cruise as Precrime officer John Anderton, Colin Farrell as Department of Justice agent Danny Witwer, Samantha Morton as the senior precog Agatha, and Max von Sydow as Anderton's superior Lamar Burgess. The film has a distinctive look, featuring desaturated colors that make it almost resemble a black-and-white film, yet the blacks and shadows have a high contrast, resembling film noir."

"Some of the technologies depicted in the film were later developed in the real world – for example, multi-touch interfaces are similar to the glove-controlled interface used by Anderton. Conversely, while arguing against the lack of physical contact in touch screen phones, PC Magazine's Sascha Segan argued in February 2009, 'This is one of the reasons why we don't yet have the famous Minority Report information interface. In that movie, Tom Cruise donned special gloves to interact with an awesome PC interface where you literally grab windows and toss them around the screen. But that interface is impractical without the proper feedback—without actually being able to feel where the edges of the windows are' " (Wikipedia article on Minority Report [film] accessed 05-25-2009).

The two-disc special edition of the film issued on DVD in 2002 contained excellent supplementary material on the special digital effects.

View Map + Bookmark Entry

Google News, a Free News Aggregator, Begins September 2002

Krishna Bharat, a research scientist at Google, created Google News in 2002 in the aftermath of the September 11, 2001 attacks in order to keep himself abreast of new developments. Google News watches more than 4500 worldwide news sources, aggregating content from more than 25,000 publishers. For the English language it covers more than 4500 sites; for other languages, fewer sites are covered.

According to the Wikipedia, different versions of the aggregator were available for more than 60 regions in 28 languages, as of March 15, 2012, with continuing development ongoing. As of January 2013, service in the following languages was offered: Arabic, Cantonese, Chinese, Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Hungarian, Italian, Japanese, Korean, Malayalam, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, and Vietnamese.

"As a news aggregator site, Google uses its own software to determine which stories to show from the online news sources it watches. Human editorial input does come into the system, however, in choosing exactly which sources Google News will pick from. This is where some of the controversy over Google News originates, when some news sources are included when visitors feel they don't deserve it, and when other news sources are excluded when visitors feel they ought to be included. . . .

"The actual list of sources is not known outside of Google. The stated information from Google is that it watches more than 4,500 English-language news sites. In the absence of a list, many independent sites have come up with their own ways of determining Google's news sources . . . ." (Wikipedia article on Google News, accessed 10-24-2014).

View Map + Bookmark Entry

How Much Information? 2003

The University of California Berkeley logo

How Much Information? 2003, the research project from the University of California at Berkeley first published on the web in 2000, updated its findings in 2003. Strikingly, it estimated that almost 800 MB of recorded information was produced per person in the world each year, more than three times the per-capita figure the same research project had calculated for 2000. The remaining data in this entry of the database is quoted from the 2003 website:

"How much new information is created each year? Newly created information is stored in four physical media -- print, film, magnetic and optical --and seen or heard in four information flows through electronic channels -- telephone, radio and TV, and the Internet. This study of information storage and flows analyzes the year 2002 in order to estimate the annual size of the stock of new information recorded in storage media, and heard or seen each year in information flows. Where reliable data was available we have compared the 2002 findings to those of our 2000 study (which used 1999 data) in order to describe a few trends in the growth rate of information.

  1. Print, film, magnetic, and optical storage media produced about 5 exabytes of new information in 2002. Ninety-two percent of the new information was stored on magnetic media, mostly in hard disks.
    • How big is five exabytes? If digitized with full formatting, the seventeen million books in the Library of Congress contain about 136 terabytes of information; five exabytes of information is equivalent in size to the information contained in 37,000 new libraries the size of the Library of Congress book collections.
    • Hard disks store most new information. Ninety-two percent of new information is stored on magnetic media, primarily hard disks. Film represents 7% of the total, paper 0.01%, and optical media 0.002%.
    • The United States produces about 40% of the world's new stored information, including 33% of the world's new printed information, 30% of the world's new film titles, 40% of the world's information stored on optical media, and about 50% of the information stored on magnetic media.
    • How much new information per person? According to the Population Reference Bureau, the world population is 6.3 billion, thus almost 800 MB of recorded information is produced per person each year. It would take about 30 feet of books to store the equivalent of 800 MB of information on paper.
  2. We estimate that the amount of new information stored on paper, film, magnetic, and optical media has about doubled in the last three years.
    • Information explosion? We estimate that new stored information grew about 30% a year between 1999 and 2002.
    • Paperless society? The amount of information printed on paper is still increasing, but the vast majority of original information on paper is produced by individuals in office documents and postal mail, not in formally published titles such as books, newspapers and journals.
  3. Information flows through electronic channels -- telephone, radio, TV, and the Internet -- contained almost 18 exabytes of new information in 2002, three and a half times more than is recorded in storage media. Ninety eight percent of this total is the information sent and received in telephone calls - including both voice and data on both fixed lines and wireless.
    • Telephone calls worldwide, on both landlines and mobile phones, contained 17.3 exabytes of new information if stored in digital form; this represents 98% of the total of all information transmitted in electronic information flows, most of it person to person.
    • Most radio and TV broadcast content is not new information. About 70 million hours (3,500 terabytes) of the 320 million hours of radio broadcasting is original programming. TV worldwide produces about 31 million hours of original programming (70,000 terabytes) out of 123 million total hours of broadcasting.
    • The World Wide Web contains about 170 terabytes of information on its surface; in volume this is seventeen times the size of the Library of Congress print collections.
    • Instant messaging generates five billion messages a day (750GB), or 274 Terabytes a year.
    • Email generates about 400,000 terabytes of new information each year worldwide.
    • P2P file exchange on the Internet is growing rapidly. Seven percent of users provide files for sharing, while 93% of P2P users only download files. The largest files exchanged are video files larger than 100 MB, but the most frequently exchanged files contain music (MP3 files).
    • How we use information. Published studies on media use say that the average American adult uses the telephone 16.17 hours a month, listens to radio 90 hours a month, and watches TV 131 hours a month. About 53% of the U.S. population uses the Internet, averaging 25 hours and 25 minutes a month at home, and 74 hours and 26 minutes a month at work, or about 13% of the time."
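Several of the equivalences quoted above can be verified with back-of-the-envelope arithmetic. The short sketch below uses the study's own figures; decimal units (1 exabyte = 10^18 bytes, 1 terabyte = 10^12 bytes) are assumed, since the study does not state which convention it used.

# Back-of-the-envelope checks of a few equivalences quoted in the 2003 study.
# Decimal units are assumed (1 EB = 1e18 bytes, 1 TB = 1e12 bytes, etc.);
# the study itself does not state which convention it used.
EXABYTE = 10**18
TERABYTE = 10**12
GIGABYTE = 10**9
MEGABYTE = 10**6

new_info_2002 = 5 * EXABYTE           # new stored information produced in 2002
loc_books = 136 * TERABYTE            # Library of Congress book collection, digitized
world_population = 6.3e9

print(new_info_2002 / loc_books)                    # about 36,800 Library-of-Congress equivalents ("37,000")
print(new_info_2002 / world_population / MEGABYTE)  # about 794 MB per person ("almost 800 MB")

im_per_day = 750 * GIGABYTE           # instant messaging traffic per day
print(im_per_day * 365 / TERABYTE)    # about 274 TB per year, matching the quoted figure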
View Map + Bookmark Entry

"Second Life" is Launched 2003

Linden Lab logo

An image from the Second Life game by Linden Lab

In 2003 Linden Lab of San Francisco, California, made publicly available the privately owned, partly subscription-based, virtual world called Second Life.

View Map + Bookmark Entry

The First Attempt to "Establish the Genealogy of the Computer as an Expressive Medium" in a Single Volume 2003

In 2003 computer scientist Noah Wardrip-Fruin of the University of California, Santa Cruz, and Nick Montfort, professor of digital media at MIT, issued The New Media Reader, with introductions by Janet H. Murray of Georgia Institute of Technology and Lev Manovich, then professor at the University of California at San Diego.

This anthology represented the first attempt to present in a single volume significant representative documents covering the wide range of digital media. As Janet Murray wrote in her introduction, "This is a landmark volume, marking the first comprehensive effort at establishing the genealogy of the computer as an expressive medium." The 823-page physical volume, designed by Michael Crumpton, and published by MIT Press, was innovative in several ways: most notably through the use of special symbols in the text and the margins that directed the reader to cross-references throughout the book— a kind of physical hypertext. The anthology also included a CD-ROM containing programs, videos, interactive fiction, and games.

View Map + Bookmark Entry

HIPAA: Privacy of Medical Records, Goes into Effect April 14, 2003

On April 14, 2003 the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA) went into effect.

"The Health Insurance Portability and Accountability Act (HIPAA) was enacted by the U.S. Congress in 1996. According to the Centers for Medicare and Medicaid Services (CMS) website, Title I of HIPAA protects health insurance coverage for workers and their families when they change or lose their jobs. Title II of HIPAA, known as the Administrative Simplification (AS) provisions, requires the establishment of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. It helps people keep their information private.

"The Administration Simplification provisions also address the security and privacy of health data. The standards are meant to improve the efficiency and effectiveness of the nation's health care system by encouraging the widespread use of electronic data interchange in the U.S. health care system."

"The HIPAA Privacy Rule regulates the use and disclosure of certain information held by 'covered entities' (generally, health care clearinghouses, employer sponsored health plans, health insurers, and medical service providers that engage in certain transactions.)  It establishes regulations for the use and disclosure of Protected Health Information (PHI). PHI is any information held by a covered entity which concerns health status, provision of health care, or payment for health care that can be linked to an individual. This is interpreted rather broadly and includes any part of an individual's medical record or payment history.

"Covered entities must disclose PHI to the individual within 30 days upon request. They also must disclose PHI when required to do so by law, such as reporting suspected child abuse to state child welfare agencies.

"A covered entity may disclose PHI to facilitate treatment, payment, or health care operations, or if the covered entity has obtained authorization from the individual. However, when a covered entity discloses any PHI, it must make a reasonable effort to disclose only the minimum necessary information required to achieve its purpose.

"The Privacy Rule gives individuals the right to request that a covered entity correct any inaccurate PHI. It also requires covered entities to take reasonable steps to ensure the confidentiality of communications with individuals. . . .

"The Privacy Rule requires covered entities to notify individuals of uses of their PHI. Covered entities must also keep track of disclosures of PHI and document privacy policies and procedures. They must appoint a Privacy Official and a contact person responsible for receiving complaints and train all members of their workforce in procedures regarding PHI.

"An individual who believes that the Privacy Rule is not being upheld can file a complaint with the Department of Health and Human Services Office for Civil Rights (OCR). However, according to the Wall Street Journal, the OCR has a long backlog and ignores most complaints. 'Complaints of privacy violations have been piling up at the Department of Health and Human Services. Between April 2003 and Nov. 30, the agency fielded 23,896 complaints related to medical-privacy rules, but it has not yet taken any enforcement actions against hospitals, doctors, insurers or anyone else for rule violations. A spokesman for the agency says it has closed three-quarters of the complaints, typically because it found no violation or after it provided informal guidance to the parties involved' " (Wikipedia article on Health Insurance Portability and Accountability Act, accessed 08-05-2009).

View Map + Bookmark Entry

MySpace is Founded August 2003

Brad Greenspan

Tom Anderson, aka "Tom of Myspace"

In August 2003 Brad Greenspan and eUniverse founded MySpace in Santa Monica, California.

View Map + Bookmark Entry

The World Summit on the Information Society December 10 – December 12, 2003

World Summit on the Information Society logo

The World Summit on the Information Society

The World Summit on the Information Society (WSIS) convened its first meeting in Geneva, Switzerland from December 10-12, 2003.

View Map + Bookmark Entry

The First U.S. Standards for Sending Commercial E-Mail December 16, 2003

President George W. Bush, the 43rd president of the United States

The Federal Trade Commission logo

On December 16, 2003 The CAN-SPAM Act of 2003 was signed into law by President George W. Bush, establishing the United States' first national standards for the sending of commercial e-mail and requiring the Federal Trade Commission (FTC) to enforce its provisions.

"The acronym CAN-SPAM derives from the bill's full name: Controlling the Assault of Non-Solicited Pornography And Marketing Act of 2003. This is also a play on the usual term for unsolicited email of this type, spam. The bill was sponsored in Congress by Senators Conrad Burns and Ron Wyden.

"The CAN-SPAM Act is commonly referred to as the "You-Can-Spam" Act because the bill explicitly legalizes most e-mail spam. In particular, it does not require e-mailers to get permission before they send marketing messages. It also prevents states from enacting stronger anti-spam protections, and prohibits individuals who receive spam from suing spammers. The Act has been largely unenforced, despite a letter to the FTC from Senator Burns, who noted that "Enforcement is key regarding the CAN-SPAM legislation." In 2004 less than 1% of spam complied with the CAN-SPAM Act of 2003.

"The law required the FTC to report back to Congress within 24 months of the effectiveness of the act.[4] No changes were recommended. It also requires the FTC to promulgate rules to shield consumers from unwanted mobile phone spam. On December 20, 2005 the FTC reported that the volume of spam has begun to level off, and due to enhanced anti-spam technologies, less was reaching consumer inboxes. A significant decrease in sexually-explicit e-mail was also reported.

"Later modifications changed the original CAN-SPAM Act of 2003 by (1) Adding a definition of the term "person"; (2) Modifying the term "sender"; (3) Clarifying that a sender may comply with the act by including a post office box or private mailbox and (4) Clarifying that to submit a valid opt-out request, a recipient cannot be required to pay a fee, provide information other than his or her email address and opt-out preferences, or take any other steps other than sending a reply email message or visiting a single page on an Internet website" (Wikipedia article on CAN-SPAM Act of 2003, accessed 01-19-2010).

View Map + Bookmark Entry

800,000,000 People are Using the Internet 2004

In 2004 800,000,000 people in the world were using the Internet.

View Map + Bookmark Entry

2,350,000 U.S. Students in Online Learning 2004

According to Sloan-C, A Consortium of Institutions and Organizations Committed to Quality Online Education, 2.35 million students were enrolled in online learning in the United States during 2004.

View Map + Bookmark Entry

Facebook February 4, 2004

Mark Zuckerberg

The original homepage for Thefacebook

The current facebook logo

On February 4, 2004, while a student at Harvard, Mark Zuckerberg founded Thefacebook.com.

The name of the site was later simplified to Facebook. Membership was initially limited to Harvard students, but was then expanded to other colleges in the Ivy League. Facebook expanded further to include any university student, then high school students, and, finally, anyone aged 13 and over.

♦ In August 2013, after Facebook had over one billion users, a timeline entitled The Evolution of Facebook was available from The New York Times.

View Map + Bookmark Entry

There are 50,000,000 Websites on the Internet May 2004

In May 2004 there were 50,000,000 websites on the Internet.

View Map + Bookmark Entry

8,000,000 U.S. Blogs November 2004

The Pew Internet and American Life Project logo

According to the Pew Internet and American Life Project, by November 2004 8,000,000 American adults said they had created blogs.

View Map + Bookmark Entry

2005 – 2010

Use of Internet in China in 2005 2005

By Spring of 2005 it was estimated that over 100,000,000 people in China used the Internet.

View Map + Bookmark Entry

"From Gutenberg to the Internet" 2005

In 2005 the author/editor of this database, Jeremy Norman, issued From Gutenberg to the Internet: A Sourcebook on the History of Information Technology.

This printed book was the first anthology of original publications reflecting the origins of the various technologies that converged to form the Internet. Each reading is introduced by the editor.

View Map + Bookmark Entry

"Last Child in the Woods" : Exploration of Nature Versus Exposure to Media in Childhood 2005

In 2005 American journalist and non-fiction writer Richard Louv published Last Child in the Woods: Saving Our Children From Nature-Deficit Disorder. In this book Louv studied the relationship of children and the natural world in current and historical contexts, coining the term “nature-deficit disorder” to describe possible negative consequences to individual health and the social fabric as children, immersed in television, the Internet, and computer games, move indoors and away from physical contact with the natural world, particularly unstructured, solitary experience.

Louv cited research pointing to attention disorders, obesity, a dampening of creativity and depression as problems associated with a nature-deficient childhood. He amassed information on the subject from practitioners of many disciplines to make his case, and is commonly credited with helping to inspire an international movement to reintroduce children to nature.

I first learned about Louv's book in a lecture by paleontologist, educator, and television broadcaster Scott D. Sampson held at Marin Academy in San Rafael, California on October 26, 2011. Sampson's lecture was the first in a science lecture series organized by my son, Max, in his junior year in high school. An extremely engaging speaker, Sampson uses electronic media to promote disengagement from media and active exploration of nature, especially in childhood. He also promotes the use of social media to encourage scientific exploration of nature by individuals in their own localities.

View Map + Bookmark Entry

"Broadcast Yourself" : YouTube is Founded February 2005

The YouTube logo

Steve Chen

Chad Hurley

Jawed Karim

In February 2005 three former employees of PayPal — Steve Chen, Chad Hurley, and Jawed Karim — founded the video sharing website YouTube. Its first headquarters were above a pizzeria and Japanese restaurant in San Mateo, California. Most of the content on YouTube is uploaded by individuals, but media corporations including CBS, the BBC, Vevo, Hulu, and other organizations offer some of their material via YouTube as part of the YouTube partnership program.

View Map + Bookmark Entry

Development and State Control of the Chinese Internet April 14, 2005

Xiao Qiang

On April 14, 2005 the U.S.-China Economic and Security Review Commission (USCC.gov) issued the report of Xiao Qiang, University of California, Berkeley, on The Development and the State Control of the Chinese Internet.

View Map + Bookmark Entry

Wikimania!: The First International Wikimedia Conference Takes Place August 4 – August 8, 2005

The Wikimedia logo

A simulated Wikimania banner in Frankfurt

Wikimania 2005: The First International Wikimedia Conference was held in Frankfurt am Main from August 4-8, 2005.

View Map + Bookmark Entry

The Amazon Mechanical Turk November 2, 2005

Wolfgang von Kempelen

The Amazon Mechanical Turk logo

A diagram explanation of Amazon's Mechanical Turk 

Alluding to Wolfgang von Kempelen's eighteenth-century automaton, The Turk, which purported to automate chessplaying when this was impossible, on November 2, 2005 Amazon.com launched the Amazon Mechanical Turk:

"a crowdsourcing marketplace that enables computer programs to co-ordinate the use of human intelligence to perform tasks which computers are unable to do."

This was the first business application using the Collaborative Human Interpreter, a programming language "designed for collecting and making use of human intelligence in a computer program. One typical usage is implementing impossible-to-automate functions."
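
The following toy Python sketch illustrates the underlying idea of a program "calling" human intelligence for a judgment it cannot make itself and aggregating the answers. It is only a sketch of the concept described above: the worker pool is simulated, and none of the function names correspond to Amazon's actual API or to the CHI language.

    # Toy sketch: a program farms out a question it cannot answer itself to
    # several "human" workers and takes the majority answer. The worker pool
    # is simulated; nothing here uses Amazon's real Mechanical Turk API.
    from collections import Counter
    import random

    def simulated_human_worker(question):
        # Stand-in for a real person completing a Human Intelligence Task (HIT)
        return random.choice(["yes", "yes", "no"])

    def ask_humans(question, assignments=5):
        answers = [simulated_human_worker(question) for _ in range(assignments)]
        answer, votes = Counter(answers).most_common(1)[0]
        return answer, votes, assignments

    verdict, votes, total = ask_humans("Does this photograph show a storefront?")
    print(f"Majority answer: {verdict} ({votes}/{total} simulated workers)")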

View Map + Bookmark Entry

Massively Distributed Collaboration November 9, 2005

Mitchell Kapor

On November 9, 2005, at the UC Berkeley School of Information, Mitchell Kapor delivered an address entitled Content Creation by Massively Distributed Collaboration.

"The sudden and unexpected importance of the Wikipedia, a free online encyclopedia created by tens of thousands of volunteers and coordinated in a deeply decentralized fashion, represents a radical new modality of content creation by massively distributed collaboration. This talk will discuss the unique principles and values which have enabled the Wikipedia community to succeed and will examine the intriguing prospects for application of these methods to a broad spectrum of intellectual endeavors."

View Map + Bookmark Entry

The Highest Price Paid for a Domain Name January 16, 2006

Gary Kremen

Having initially registered the domain name for free, and after temporarily losing it to a con man, Gary Kremen won a lawsuit and sold Sex.com to Boston-based Escom LLC for $14,000,000, or "$15 million in cash and stock." This was the highest price obtained for a domain name at the time, and possibly the highest ever.

View Map + Bookmark Entry

File-Sharing Exceeds Sales of Digital Music Downloads January 22, 2006

In 2006 free file-sharing of digital music on the web exceeded sales of digital music downloads many times over:

"Total music sales - including online - are off some 20 percent from five years ago. Songs traded freely over unlicensed Internet sites swamp the number of legal sales by thousands to one."

View Map + Bookmark Entry

The Word Crowdsourcing is Coined June 2006

Jeff Howe

Cover art for Crowdsourcing by Jeff Howe

In an article published in Wired in June 2006 Jeff Howe coined the term crowdsourcing

"for the act of taking a job traditionally performed by an employee or contractor, and outsourcing it to an undefined, generally large group of people, in the form of an open call. For example, the public may be invited to develop a new technology, carry out a design task, refine an algorithm or help analyze large amounts of data."

View Map + Bookmark Entry

100,000,000 Users Within Three Years August 9, 2006

The Myspace login page layout from 2006

By August 2006 MySpace, founded in August 2003, had 100,000,000 users.

View Map + Bookmark Entry

"The Document in the Digital Era" by Web-Footed September 2006

Le Document à la lumière du numérique (The Document in the Digital Era) was published in print in September 2006 by a collaborating group of information researchers under the collective pseudonym of Roger T. Pédauque. The surname of the pseudonym means "web-footed."

View Map + Bookmark Entry

More than 100,000,000 Websites November 1, 2006

The Netcraft logo

In November 2006 there were more than 100 million websites on the Internet; between January and November of that year 27.4 million sites were added to the web. (According to Netcraft.com there were 101,435,253 sites on the Internet.)

View Map + Bookmark Entry

Journalistic Acknowledgment of the Significance of Social Networking on the Internet December 16, 2006

The cover of Time Magazine when the magazine named "You" as the person of the year

The Time Magazine issue of December 26, 2006 named "You" as the Person of the Year, reflecting the growing importance of social networking on the Internet:

"The "Great Man" theory of history is usually attributed to the Scottish philosopher Thomas Carlyle, who wrote that 'the history of the world is but the biography of great men.' He believed that it is the few, the powerful and the famous who shape our collective destiny as a species. That theory took a serious beating this year.

"To be sure, there are individuals we could blame for the many painful and disturbing things that happened in 2006. The conflict in Iraq only got bloodier and more entrenched. A vicious skirmish erupted between Israel and Lebanon. A war dragged on in Sudan. A tin-pot dictator in North Korea got the Bomb, and the President of Iran wants to go nuclear too. Meanwhile nobody fixed global warming, and Sony didn't make enough PlayStation3s.

"But look at 2006 through a different lens and you'll see another story, one that isn't about conflict or great men. It's a story about community and collaboration on a scale never seen before. It's about the cosmic compendium of knowledge Wikipedia and the million-channel people's network YouTube and the online metropolis MySpace. It's about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes."

View Map + Bookmark Entry

Information is Expanding 10X Faster than Any Product on this Planet February 2007

Kevin Kelly

In February 2007 Kevin Kelly wrote in Wired Magazine:

"Information is expanding 10 times faster than any product on this planet - manufactured or natural. According to Hal Varian, an economist at UC Berkeley and a consultant to Google, worldwide information is increasing at 66 percent per year - approaching the rate of Moore's Law - while the most prolific manufactured stuff - paper, let’s say, or steel - averages only as much as 7 percent annually."

View Map + Bookmark Entry

In 2007 There Were 12,000,000 U.S. Blogs February 2007

The Pew Internet and American Life Project logo

The Pew Research Center logo

According to the Pew Internet and American Life Project, a product of the Pew Research Center, Washington, D.C., in February 2007 about 12 million Americans maintained a blog.

View Map + Bookmark Entry

Steve Jobs Introduces the iPhone June 29, 2007

The iPhone 3G

On June 29, 2007 Apple introduced the iPhone, an internet-connected multimedia smartphone with a virtual keypad and a virtual keyboard.

View Map + Bookmark Entry

The CNN/ YouTube Presidential Debates: The First Internet to Television Debate Partnership July 23 – November 28, 2007

The CNN/YouTube presidential debates, the first web-to-television debate partnership, were a series of televised debates in which United States presidential hopefuls fielded questions submitted through the video sharing site YouTube. They were conceived by David Bohrman, then Washington Bureau Chief of CNN, and Steve Grove, then Head of News and Politics at YouTube. YouTube was then a new platform on the political scene, rising to prominence in the 2006 midterm elections after Senator George Allen's Macaca Controversy, in which the Senator was captured calling his opponent Jim Webb's campaign worker a "Macaca" on video, which went viral on YouTube and damaged a campaign that narrowly lost at the polls. Media companies were looking for new ways to harness the possibilities of web video and YouTube was looking for opportunities to give its users access to the national political stage, so Bohrman and Grove formed a unique partnership in the CNN/YouTube DebatesThe Democratic Party installment took place in Charleston, South Carolina and aired on July 23, 2007. The Republican Party installment took place in St. Petersburg, Florida and aired on November 28, 2007. 

View Map + Bookmark Entry

The World Wide Telecom Web for Illiterate Populations August 2007

Arun Kumar

A diagram of the World Wide Telecom Web, also known as "Spoken Web"

In August 2007 Arun Kumar and others at IBM Research - India, New Delhi,  published "WWTW: The World Wide Telecom Web", a voice-driven Internet designed for illiterate populations:

"our vision of a voice-driven ecosystem parallel to that of the WWW. WWTW is a network of interconnected voice sites that are voice driven applications created by users and hosted in the network. It has the potential to enable the underprivileged population to become a part of the next generation converged networked world. We present a whole gamut of existing technology enablers for our vision as well as present research directions and open challenges that need to be solved to not only realize a WWTW but also to enable the two Webs to cross leverage each other."

View Map + Bookmark Entry

About 200 Million People in the U.S. Have Broadband Connections May 2008

By 2008 broadband technologies had spread to more than 90% of all residential Internet connections in the United States.

"When one considers a Nielsen’s study conducted in June 2008, which estimated the number of U.S. Internet users as 220,141,969, one can calculate that there are presently about 199 million people in the United States utilizing broadband technologies to surf the Web" (Wikipedia article on Internet marketing, accessed 05-10-2009).

View Map + Bookmark Entry

"Computer Criminal Number One" August 5, 2008 – March 26, 2010

On August 6, 2008 Albert Gonzalez, a/k/a cumbajohny, a/k/a cj, a/k/a UIN 20167996, a/k/a UIN 476747, a/k/a soupnazi, a/k/a segvec, a/k/a klngchilli, a/k/a stanozololz, was indicted in the United States District Court for the District of Massachusetts in Boston for masterminding a crime ring that used malware to steal and sell more than 170,000,000 credit card and ATM numbers from retail stores between 2005 and 2007.

"On August 28, 2009, his [Gonzalez's] attorney filed papers with the United States District Court for the District of Massachusetts in Boston indicating that he would plead guilty to all 19 charges in the U.S. v. Albert Gonzalez, 08-CR-10223, case (the TJ Maxx case). According to reports this plea bargain would "resolve" issues with the New York case of U.S. v. Yastremskiy, 08-CR-00160 in United States District Court for the Eastern District of New York (the Dave and Busters case).

"Gonzalez could serve a term of 15 years to 25 years. He would forfeit more than $1.65 million, a condominium in Miami, a blue 2006 BMW 330i automobile, IBM and Toshiba laptop computers, a Glock 27 firearm, a Nokia cell phone, a Tiffany diamond ring and three Rolex watches. "

"His sentence would run concurrent with whatever comes out of the case in the United States District Court for the District of New Jersey (meaning that he would serve the longest of the sentences he receives)" (Wikipedia article on Albert Gonzalez, accessed 01-18-2010).

On March 26, 2010 U.S. District Court Judge Douglas P. Woodlock sentenced Gonzalez to twenty years in prison, with three twenty-year sentences running concurrently.

"The sentence imposed by U.S. District Court Judge Douglas P. Woodlock was for Gonzalez's role in a hacking ring that broke into computer networks of Heartland Payment Systems, which processed credit and debit card transactions for Visa and American Express, Hannaford Supermarkets and 7-Eleven. The sentence is actually 20 years and one day, owing to the need to deal with peculiarities in sentencing statutes, because Woodlock had to take into account that Gonzalez was on pretrial release for an unrelated crime when he took up with the international network of hackers responsible for the security breaches. He was at the time supposed to be serving as an informant for the U.S. Secret Service, but he double-crossed the agency, supplying a co-conspirator with information obtained as part of those investigations" (http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2010/03/26/urnidgns852573C400693880002576EF004839D0.DTL, accessed 03-27-2010).

View Map + Bookmark Entry

181,277,835 Active Websites September 2008

According to a Netcraft survey in September 2008 there were 181,277,835 active websites on the Internet.

View Map + Bookmark Entry

Craigslist Becomes the Leading Classified Advertising Service Worldwide September 2008

By September 2008 Craigslist was the leading classified advertising service worldwide. It provided free local classifieds and forums for more than 550 cities in over 50 countries, generating more than 12 billion page views per month and serving more than 50 million people each month. Craigslist users self-published more than 30 million new classified ads and more than 2 million new job listings each month. Each month Craigslist also hosted more than 100 million user postings in more than 100 topical forums. All of this it did with only 25 employees.

Because Craigslist did not charge for classified advertising, it replaced a large portion of the classified advertising that had historically been placed in print newspapers, substantially reducing the significant revenue that newspapers had long generated from classifieds and contributing to an overall reduction of profits for many of them. Similarly, Craigslist's policy of charging below-market rates for job listings cut into that traditional source of newspaper revenue, as well as into profits at physical employment agencies and the more expensive online employment agencies.

View Map + Bookmark Entry

The First Reported Case of ZZZ-Mailing December 15, 2008

"A WOMAN in a deep sleep sent emails to friends asking them over for wine and caviar in what doctors believe is the first reported case of 'zzz-mailing' - using the internet while asleep.

"The case of the 44-year-old woman is reported by researchers from the University of Toledo in the latest edition of the medical journal Sleep Medicine.

"They said the woman went to bed about 10pm but got up two hours later and walked to her computer in the next room, Britain's Daily Mail newspaper reports.

"She turned it on, connected to the internet, and logged on before composing and sending three emails.

"Each was in a random mix of upper and lower cases, not well formatted and written in strange language, the researchers said.

"One read: "Come tomorrow and sort this hell hole out. Dinner and drinks, 4pm,. Bring wine and caviar only."

"Another said simply, "What the…".

"The new variation of sleepwalking has been described as "zzz-mailing".

"We believe writing an email after turning the computer on, connecting to the internet and remembering the password displayed by our patient is novel," the researchers said.

"To our knowledge this type of complex behaviour requiring co-ordinated movements has not been reported before in sleepwalking" (http://www.news.com.au/technology/story/0,28348,24802639-5014239,00.html, accessed 12-30-2008)

View Map + Bookmark Entry

"The Future of Learning Institutions in a Digital Age" 2009

In 2009 American educator Cathy N. Davidson of Duke University and David Theo Goldberg of the University of California at Irvine, with the support of the John D. and Catherine T. MacArthur Foundation's grant-making initiative on Digital Media and Learning, published The Future of Learning Institutions in a Digital Age.

View Map + Bookmark Entry

"Readability" is Launched 2009

In 2009 Readability was launched by Arc90 in New York. This service automatically stripped websites of advertising and other distractions, providing a customized reading view and a method of saving articles for future reading.

View Map + Bookmark Entry

The First iPhone and iPad Apps for the Visually Impaired 2009 – 2010

Because of the convenience of carrying smartphones it was probably inevitable that their features would be applied to support the visually impaired. iBlink Radio, introduced in July 2010 by Serotek Corporation of Minneapolis, Minnesota, calls itself the first iOS application for the visually impaired. It provides access to radio stations, podcasts and reading services of special interest to blind and visually impaired persons, as well as their friends, family, caregivers and those wanting to know what life is like without eyesight.

SayText, also introduced in 2010 by Haave, Inc. of Vantaa, Finland, reads out loud text that is photographed by a cell phone camera.

VisionHunt, by VI Scientific of Nicosia, Cyprus, introduced in 2009, is a vision aid tool for the blind and the visually impaired that uses the phone’s camera to detect colors, paper money and light sources. VisionHunt identifies about 30 colors. It also detects 1, 5, 10, 20, 50 US Dollar bills. Finally, VisionHunt detects sources of light, such as switched-on lamps or televisions. VisionHunt is fully accessible to the blind and the visually impaired through Voice Over or Zoom.

Numerous other apps for the visually impaired were introduced after the above three.

View Map + Bookmark Entry

In 2008 China Becomes the Top User of the Internet January 14, 2009

"BEIJING, China (CNN) January 14, 2009 -- China surpassed the United States in 2008 as the world's top user of the Internet, according to a government-backed research group.

"The number of Web surfers in the country grew by nearly 42 percent to 298 million, according to the China Internet Network Information Center's January report. And there's plenty of room for growth, as only about 1 in every 4 Chinese has Internet access.  

"The rapid growth in China's Internet use can be tied to its swift economic gains and the government's push for the construction of telephone and broadband lines in the country's vast rural areas, the report says.  

"The Chinese government wants phone and broadband access in each village by 2010.

"Nearly 91 percent of China's Internet users are surfing the Web with a broadband connection -- an increase of 100 million from 2007. Mobile phone Internet users totaled 118 million by the end of 2008" (http://www.cnn.com/2009/TECH/01/14/china.internet/index.html, accessed 01-13-2010).

View Map + Bookmark Entry

The TV Show "Jeopardy" Provides a Good Model of the Semantic Analysis and Integration Problem April 22, 2009

On April 22, 2009 David Ferrucci, leader of the Semantic Analysis and Integration Department at IBM's T. J. Watson Research Center, Eric Nyberg, and several co-authors published the IBM Research Report: Towards the Open Advancement of Question Answering Systems.

Section 4.2.3. of the report included an analysis of why the television game show Jeopardy! provided a good model of the semantic analysis and integration problem.

View Map + Bookmark Entry

Kickstarter.com is Launched April 28, 2009

On April 28, 2009 Perry Chen, Yancey Strickler, and Charles Adler launched Kickstarter.com, originally under the URL KickStartr.com. The company was based in New York City.

"One of a number of fundraising platforms dubbed 'crowd funding,' Kickstarter facilitates gathering monetary resources from the general public, a model which circumvents many traditional avenues of investment. Project creators choose a deadline and a goal minimum of funds to raise. If the chosen goal is not gathered by the deadline, no funds are collected (this is known as a provision point mechanism). Money pledged by donors is collected using Amazon Payments. The platform is open to backers from anywhere in the world and to creators from the US or the UK.

"Kickstarter takes 5% of the funds raised. Amazon charges an additional 3–5%. Unlike many forums for fundraising or investment, Kickstarter claims no ownership over the projects and the work they produce. However, projects launched on the site are permanently archived and accessible to the public. After funding is completed, projects and uploaded media cannot be edited or removed from the site" (Wikipedia article on Kickstarter, accessed 02-21-2013).

View Map + Bookmark Entry

The Death of Michael Jackson Impacts the Internet June 25, 2009

The death of American entertainer Michael Jackson on June 25, 2009 had a remarkably dramatic impact on the Internet:

"The news of Jackson's death spread quickly online, causing websites to crash and slow down from user overload. Both TMZ and the Los Angeles Times, two websites that were the first to confirm the news, suffered outages. Google believed the millions of people searching 'Michael Jackson' meant it was under attack. Twitter reported a crash, as did Wikipedia at 3:15 PDT. The Wikimedia Foundation reported nearly one million visitors to the article Michael Jackson within one hour, which they said may be the most visitors in a one-hour period to any article in Wikipedia's history. AOL Instant Messenger collapsed for 40 minutes. AOL called it a seminal moment in Internet history,' adding, 'We've never seen anything like it in terms of scope or depth.' Around 15 percent of Twitter posts (or 5,000 tweets per minute) mentioned Jackson when the news broke, compared to topics such as the 2009 Iranian election and swine flu, which never rose above 5 percent of total tweets. Overall, web traffic was 11 percent higher than normal" (Wikipedia article on Death of Michael Jackson, accessed 07-04-2009).

View Map + Bookmark Entry

1.7 Billion Internet Users September 30, 2009

According to Internetworldstats.com there were about 1,733,993,000 Internet users on September 30, 2009. This compared with about 360,985,000 on December 31, 2000.

View Map + Bookmark Entry

2010 – 2012

World Texting Competition is Won by Koreans January 14, 2010

The first LG Mobile World Cup SMS texting championship took place in New York on January 14, 2010.

“ 'When others watch me texting, they think I’m not that fast and they can do better,' said Mr. Bae, 17, a high school dropout who dyes his hair a light chestnut color and is studying to be an opera singer. 'So far, I’ve never lost a match.'

"In the New York competition he typed six characters a second. 'If I can think faster I can type faster,' he said.

"The inaugural Mobile World Cup, hosted by the South Korean cellphone maker LG Electronics, brought together two-person teams from 13 countries who had clinched their national titles by beating a total of six million contestants. Marching behind their national flags, they gathered in New York on Jan. 14 for what was billed as an international clash of dexterous digits" (http://www.nytimes.com/2010/01/28/world/asia/28seoul.html, accessed 01-28-2010).

View Map + Bookmark Entry

Steve Jobs Introduces the iPad, the First Widely Sold Tablet Computer January 27, 2010

On January 27, 2010 Steve Jobs of Apple introduced the iPad, the first widely sold tablet computer. The first iPad was one-half inch thick, with a 9.7-inch (diagonal) high-resolution multi-touch color display, powered by a 1-gigahertz Apple A4 chip with 16 to 64 gigabytes of flash storage, weighing 1.5 pounds, and capable of running all iPhone applications except, presumably, the phone. The battery life was supposed to be 10 hours, and the device was supposed to hold a charge for one month in standby. The price started at $499.00.

"The new device will have to be far better than the laptop and smartphone at doing important things: browsing the Web, doing e-mail, enjoying and sharing photographs, watching videos, enjoying your music collection, playing games, reading e-books. Otherwise, 'it has no reason for being.'" (http://bits.blogs.nytimes.com/2010/01/27/live-blogging-the-apple-product-announcement/?hp, accessed 01-27-2010).

View Map + Bookmark Entry

"The Data-Driven Life" April 20, 2010

On April 20, 2010 writer Gary Wolf published "The Data-Driven Life" in The New York Times Magazine:

". . . . Another person I’m friendly with, Mark Carranza — he also makes his living with computers — has been keeping a detailed, searchable archive of all the ideas he has had since he was 21. That was in 1984. I realize that this seems impossible. But I have seen his archive, with its million plus entries, and observed him using it. He navigates smoothly between an interaction with somebody in the present moment and his digital record, bringing in associations to conversations that took place years earlier. Most thoughts are tagged with date, time and location. What for other people is an inchoate flow of mental life is broken up into elements and cross-referenced.  

"These men all know that their behavior is abnormal. They are outliers. Geeks. But why does what they are doing seem so strange? In other contexts, it is normal to seek data. A fetish for numbers is the defining trait of the modern manager. Corporate executives facing down hostile shareholders load their pockets full of numbers. So do politicians on the hustings, doctors counseling patients and fans abusing their local sports franchise on talk radio. Charles Dickens was already making fun of this obsession in 1854, with his sketch of the fact-mad schoolmaster Gradgrind, who blasted his students with memorized trivia. But Dickens’s great caricature only proved the durability of the type. For another century and a half, it got worse.

"Or, by another standard, you could say it got better. We tolerate the pathologies of quantification — a dry, abstract, mechanical type of knowledge — because the results are so powerful. Numbering things allows tests, comparisons, experiments. Numbers make problems less resonant emotionally but more tractable intellectually. In science, in business and in the more reasonable sectors of government, numbers have won fair and square. For a long time, only one area of human activity appeared to be immune. In the cozy confines of personal life, we rarely used the power of numbers. The techniques of analysis that had proved so effective were left behind at the office at the end of the day and picked up again the next morning. The imposition, on oneself or one’s family, of a regime of objective record keeping seemed ridiculous. A journal was respectable. A spreadsheet was creepy.  

"And yet, almost imperceptibly, numbers are infiltrating the last redoubts of the personal. Sleep, exercise, sex, food, mood, location, alertness, productivity, even spiritual well-being are being tracked and measured, shared and displayed. On MedHelp, one of the largest Internet forums for health information, more than 30,000 new personal tracking projects are started by users every month. Foursquare, a geo-tracking application with about one million users, keeps a running tally of how many times players “check in” at every locale, automatically building a detailed diary of movements and habits; many users publish these data widely. Nintendo’s Wii Fit, a device that allows players to stand on a platform, play physical games, measure their body weight and compare their stats, has sold more than 28 million units.  

"Two years ago, as I noticed that the daily habits of millions of people were starting to edge uncannily close to the experiments of the most extreme experimenters, I started a Web site called the Quantified Self with my colleague Kevin Kelly. We began holding regular meetings for people running interesting personal data projects. I had recently written a long article about a trend among Silicon Valley types who time their days in increments as small as two minutes, and I suspected that the self-tracking explosion was simply the logical outcome of this obsession with efficiency. We use numbers when we want to tune up a car, analyze a chemical reaction, predict the outcome of an election. We use numbers to optimize an assembly line. Why not use numbers on ourselves?  

"But I soon realized that an emphasis on efficiency missed something important. Efficiency implies rapid progress toward a known goal. For many self-trackers, the goal is unknown. Although they may take up tracking with a specific question in mind, they continue because they believe their numbers hold secrets that they can’t afford to ignore, including answers to questions they have not yet thought to ask.

"Ubiquitous self-tracking is a dream of engineers. For all their expertise at figuring out how things work, technical people are often painfully aware how much of human behavior is a mystery. People do things for unfathomable reasons. They are opaque even to themselves. A hundred years ago, a bold researcher fascinated by the riddle of human personality might have grabbed onto new psychoanalytic concepts like repression and the unconscious. These ideas were invented by people who loved language. Even as therapeutic concepts of the self spread widely in simplified, easily accessible form, they retained something of the prolix, literary humanism of their inventors. From the languor of the analyst’s couch to the chatty inquisitiveness of a self-help questionnaire, the dominant forms of self-exploration assume that the road to knowledge lies through words. Trackers are exploring an alternate route. Instead of interrogating their inner worlds through talking and writing, they are using numbers. They are constructing a quantified self.  

"UNTIL A FEW YEARS ago it would have been pointless to seek self-knowledge through numbers. Although sociologists could survey us in aggregate, and laboratory psychologists could do clever experiments with volunteer subjects, the real way we ate, played, talked and loved left only the faintest measurable trace. Our only method of tracking ourselves was to notice what we were doing and write it down. But even this written record couldn’t be analyzed objectively without laborious processing and analysis.  "Then four things changed. First, electronic sensors got smaller and better. Second, people started carrying powerful computing devices, typically disguised as mobile phones. Third, social media made it seem normal to share everything. And fourth, we began to get an inkling of the rise of a global superintelligence known as the cloud.

"Millions of us track ourselves all the time. We step on a scale and record our weight. We balance a checkbook. We count calories. But when the familiar pen-and-paper methods of self-analysis are enhanced by sensors that monitor our behavior automatically, the process of self-tracking becomes both more alluring and more meaningful. Automated sensors do more than give us facts; they also remind us that our ordinary behavior contains obscure quantitative signals that can be used to inform our behavior, once we learn to read them."

". . . . Adler’s idea that we can — and should — defend ourselves against the imposed generalities of official knowledge is typical of pioneering self-trackers, and it shows how closely the dream of a quantified self resembles therapeutic ideas of self-actualization, even as its methods are startlingly different. Trackers focused on their health want to ensure that their medical practitioners don’t miss the particulars of their condition; trackers who record their mental states are often trying to find their own way to personal fulfillment amid the seductions of marketing and the errors of common opinion; fitness trackers are trying to tune their training regimes to their own body types and competitive goals, but they are also looking to understand their strengths and weaknesses, to uncover potential they didn’t know they had. Self-tracking, in this way, is not really a tool of optimization but of discovery, and if tracking regimes that we would once have thought bizarre are becoming normal, one of the most interesting effects may be to make us re-evaluate what “normal” means" (http://www.nytimes.com/2010/05/02/magazine/02self-measurement-t.html?pagewanted=7&ref=magazine, accessed 05-07-2010).

View Map + Bookmark Entry

Cell Phones Are Now Used More for Data than Speech May 13, 2010

According to The New York Times, in May 2010 people were using their cell phones more for text messaging and data-processing than for speech. This should not come as a surprise to anyone with teen-age children.

". . . although almost 90 percent of households in the United States now have a cellphone, the growth in voice minutes used by consumers has stagnated, according to government and industry data.  

"This is true even though more households each year are disconnecting their landlines in favor of cellphones.  

"Instead of talking on their cellphones, people are making use of all the extras that iPhones, BlackBerrys and other smartphones were also designed to do — browse the Web, listen to music, watch television, play games and send e-mail and text messages.  

"The number of text messages sent per user increased by nearly 50 percent nationwide last year, according to the CTIA, the wireless industry association. And for the first time in the United States, the amount of data in text, e-mail messages, streaming video, music and other services on mobile devices in 2009 surpassed the amount of voice data in cellphone calls, industry executives and analysts say. 'Originally, talking was the only cellphone application,' said Dan Hesse, chief executive of Sprint Nextel. 'But now it’s less than half of the traffic on mobile networks.'  

"Of course, talking on the cellphone isn’t disappearing entirely. 'Anytime something is sensitive or is something I don’t want to be forwarded, I pick up the phone rather than put it into a tweet or a text,' said Kristen Kulinowski, a 41-year-old chemistry teacher in Houston. And calling is cheaper than ever because of fierce competition among rival wireless networks.  

"But figures from the CTIA show that over the last two years, the average number of voice minutes per user in the United States has fallen (http://www.nytimes.com/2010/05/14/technology/personaltech/14talk.html?hp, accessed 05-14-2010).

View Map + Bookmark Entry

Data on Mobile Networks is Doubling Each Year August 1, 2010

"The volume of data on the world’s mobile networks is doubling each year, according to Cisco Systems, the U.S. maker of routers and networking equipment. By 2014, it estimates, the monthly data flow will increase about sixteenfold, to 3.6 billion gigabytes from 220.1 million" (http://www.nytimes.com/2010/08/02/technology/02iht-NETPIPE02.html?src=un&feedurl=http://json8.nytimes.com/pages/business/global/index.jsonp, accessed 08-01-2010)

View Map + Bookmark Entry

"Every Two Days We Create as Much Information as We Did up to 2003" August 4, 2010

"Today at the Techonomy conference in Lake Tahoe, CA, the first panel featured Google CEO Eric Schmidt. As moderator David Kirkpatrick was introducing him, he rattled off a massive stat.

"Every two days now we create as much information as we did from the dawn of civilization up until 2003, according to Schmidt. That’s something like five exabytes of data, he says.  

Let me repeat that: we create as much information in two days now as we did from the dawn of man through 2003.  

“ 'The real issue is user-generated content,' Schmidt said. He noted that pictures, instant messages, and tweets all add to this.  

"Naturally, all of this information helps Google. But he cautioned that just because companies like his can do all sorts of things with this information, the more pressing question now is if they should. Schmidt noted that while technology is neutral, he doesn’t believe people are ready for what’s coming.  

“ 'I spend most of my time assuming the world is not ready for the technology revolution that will be happening to them soon,' Schmidt said" (http://techcrunch.com/2010/08/04/schmidt-data/, accessed 12-19-2012).
View Map + Bookmark Entry

"The Social Network": The Origins of Facebook October 1, 2010

In October 2010 the drama film The Social Network, based on the book The Accidental Billionaires: The Founding of Facebook, a Tale of Sex, Money, Genius, and Betrayal by Ben Mezrich, was released by Columbia Pictures.

The book was adapted for the screen by Aaron Sorkin and directed by David Fincher. Jesse Eisenberg portrayed the founder of Facebook, Mark Zuckerberg, to considerable critical acclaim.

Zuckerberg has been widely acknowledged as a programming prodigy. The film portrays him not only in that way, but as so focused on programming, and so insensitive to other people's feelings, as to be almost autistic. One can hardly imagine that anyone as narrowly focused on programming as Zuckerberg is portrayed in the film could have understood social nuances well enough to build Facebook into the world's top social media site. The Wikipedia article on Zuckerberg indicates that he is more well-rounded than the film's characterization, with a strong background in classics and a fondness for quoting Greek and Latin literature, especially epic poetry.

♦ On January 29, 2011 Jesse Eisenberg and Mark Zuckerberg briefly appeared together on Saturday Night Live.

View Map + Bookmark Entry

Columbia University Opens the Tow Center for Digital Journalism October 19, 2010

On October 19, 2010 the Tow Center for Digital Journalism officially opened at Columbia Journalism School, reflecting the development of the most significant new journalistic medium since television and the resultant changes in the news industry. The first director of the Tow Center was Emily Bell, who had previously led the development of digital content at TheGuardian.com.

The Tow Center also helps oversee the dual-degree Master of Science Program in Computer Science and Journalism offered in conjunction with Columbia’s Fu Foundation School of Engineering and Applied Science. These students receive highly specialized training in the digital environment, enabling them to develop technical and editorial skills in all aspects of computer-supported news gathering and digital media production.

View Map + Bookmark Entry

Towards a New Digital Legal Information Environment November 9, 2010

On November 9, 2010 John G. Palfrey, Henry N. Ess III Professor of Law, Vice Dean, Library and Information Resources, Faculty Co-Director, Berkman Center for Internet and Society at Harvard Law School, proposed a new digital legal information environment for the future.

In a lecture summary published in his blog Palfrey wrote: 

"I propose a path toward a new legal information environment that is predominantly digital in nature. This new era grows out of a long history of growth and change in the publishing of legal information over more than nine hundred years, from the early manuscripts at the roots of English common law in the reign of the Angevin King Henry II; through the early printed treatises of Littleton and Coke in the fifteenth, sixteenth, and seventeenth centuries, (including those in the extraordinary collection of Henry N. Ess III); to the systemic improvements introduced by Blackstone in the late eighteenth century; to the modern period, ushered in by Langdell and West at the end of the nineteenth century. Now, we are embarking upon an equally ambitious venture to remake the legal information environment for the twenty-first century, in the digital era.  

"We should learn from advances in cloud computing, the digital naming systems, and youth media practices, as well as classical modes of librarianship, as we envision – and, together, build – a new system for recording, indexing, writing about, and teaching what we mean by the law. A new legal information environment, drawing comprehensively from contemporary technology, can improve access to justice by the traditionally disadvantaged, including persons with disabilities; enhance democracy; promote innovation and creativity in scholarship and teaching; and promote economic development. This new legal information architecture must be grounded in a reconceptualization of the public sector’s role and draw in private parties, such as Google, Amazon, Westlaw, and LexisNexis, as key intermediaries to legal information.  

"This new information environment will have unintended – and sometimes negative – consequences, too. This trajectory toward openness is likely to change the way that both professionals and the public view the law and the process of lawmaking. Hierarchies between those with specialized knowledge and power and those without will continue its erosion. Lawyers will have to rely upon an increasingly broad range of skills, rather than serving as gatekeepers to information, to command high wages, just as new gatekeepers emerge to play increasingly important roles in the legal process. The widespread availability of well-indexed digital copies of legal work-products will also affect the ways in which lawmakers of all types think and speak in ways that are hard to anticipate. One indirect effect of these changes, for instance, may be a greater receptivity on the part of lawmakers to calls for substantive information privacy rules for individuals in a digital age.  

"An effective new system will not emerge on its own; the digital environment, like the physical, is a built environment. As lawyers, teachers, researchers, and librarians, we share an interest in the way in which legal information is created, stored, accessed, manipulated, and preserved over the long term. We will have to work together to overcome several stumbling blocks, such as state-level assertions of copyright. As collaborators, we could design and develop it together over the next decade or so. The net result — if we get it right — will be improvements in the way we teach and learn about the law and how the system of justice functions" (http://blogs.law.harvard.edu/palfrey/2010/11/09/henry-n-ess-iii-chair-lecture-notes/, accessed 12-10-2010).

View Map + Bookmark Entry

Apple 1 Computers Sell for $210,000 in 2010, for $671,400 in 2013, for $905,000 and $365,000 in 2014 November 23, 2010 – October 22, 2014

On November 23, 2010 an original Apple 1 personal computer, in excellent condition but with a few later modifications, sold for a hammer price of 110,000 pounds ($174,000) at a Christie's book and manuscript auction in London (Christie's sale 7882, lot 65).

Associated Press reported that the purchaser was businessman and collector Marco Boglione of Torino, Italy, who bid by phone. His total cost came to 133,250 pounds or about $210,000 after the buyer's premium. Prior to the auction, Christie's estimated the computer would sell for between $160,000-$240,000. When it was released in 1976, the Apple I sold for $666.66.

Only about 200 Apple 1's were built, of which perhaps "30 to 50" remain in existence. The auctioned example came in its original box with a signed letter from Apple cofounder Steve Jobs.

Apple cofounder Steve Wozniak, who hand-built each of the Apple 1's, attended the auction, and offered to autograph the computer.  

See also: http://www.mercurynews.com/news/ci_16695428?source=rss&nclick_check=1, accessed 11-23-2010.

At Sotheby's in 2012 another Apple 1 sold for $374,500. In November 2012 still another Apple 1 sold for $640,000 at Auction Team Breker in Cologne, Germany.

On May 25, 2013 Uwe Breker auctioned another Apple 1 for $671,400.

On October 22, 2014 Bonhams in New York sold another Apple 1 for $905,000. The buyer was the Henry Ford Museum in Dearborn, Michigan. "In addition to the beautifully intact motherboard, this Apple-1 comes with a vintage keyboard with pre-7400 series military spec chips, a vintage Sanyo monitor, a custom vintage power supply in wooden box, as well as two vintage tape-decks. The lot additionally includes ephemera from the Cincinnati AppleSiders such as their first newsletter "Poke-Apple" from February of 1979 and a video recording of Steve Wozniak's keynote speech at the 1980 'Applevention.' "

On December 11, 2014 Christie's in New York offered The Ricketts’ Apple-1 Personal Computer in an online auction. Named after its first owner, Charles Ricketts, this example was the only known surviving Apple-1 documented to have been sold directly by Steve Jobs to an individual from his parents’ garage.

"23 years after Ricketts bought the Apple-1 from Jobs in Los Altos, it was acquired by Bruce Waldack, a freshly minted entrepreneur who’d just sold his company DigitalNation.  The Ricketts Apple-1 was auctioned at a sheriff’s sale of Waldack’s property at a self-storage facility in Virginia in 2004, and won by the present consigner, the American collector, Bob Luther.

  • The Ricketts Apple-1 is fully operational, having been serviced and started by Apple-1 expert Corey Cohen in October 2014. Mr. Cohen ran the standard original software program, Microsoft BASIC, and also an original Apple-1 Star Trek game in order to test the machine.
  • The computer will be sold with the cancelled check from the original garage purchase on July 27, 1976 made out to Apple Computer by Charles Ricketts for $600, which Ricketts later labeled as “Purchased July 1976 from Steve Jobs in his parents’ garage in Los Altos”. 
  • A second cancelled check for $193 from August 5, 1976 is labeled “Software NA Programmed by Steve Jobs August 1976”. Although Jobs is not usually thought of as undertaking much of the programming himself, many accounts of the period place him in the middle of the action, soldering circuits and clearly making crucial adjustments for close customers, as in this case.
  • These checks were later used as part of the evidence for the City of Los Altos to designate the Jobs family home at 2066 Crist Drive as a Historic Resource, eligible for listing on the National Register of Historic Places, and copies can be found in the Apple Computer archives at Stanford University Libraries."

The price realized was $365,000, which was, of course, disappointing compared to the much higher price realized at Bonhams only two months earlier.

View Map + Bookmark Entry

The Wikileaks U. S. Diplomatic Cables Leak November 28 – December 8, 2010

"The United States diplomatic cables leak began on 28 November 2010 when the website WikiLeaks and five major newspapers published confidential documents of detailed correspondences between the U.S. State Department and its diplomatic missions around the world. The publication of the U.S. embassy cables is the third in a series of U.S. classified document 'mega-leaks' distributed by WikiLeaks in 2010, following the Afghan War documents leak in July, and the Iraq War documents leak in October. The contents of the cables describe international affairs from 274 embassies dated from 1966–2010, containing diplomatic analysis of world leaders, an assessment of host countries, and a discussion about international and domestic issues.

"The first 291 of the 251,287 documents were published on 28 November, with simultaneous press coverage from El País (Spain), Le Monde (France), Der Spiegel (Germany), The Guardian (United Kingdom), and The New York Times (United States). Over 130,000 of the documents are unclassified; none are classified as 'top secret' on the classification scale; some 100,000 are labeled 'confidential'; and about 15,000 documents have the higher classification 'secret'. As of December 8, 2010 1060 individual cables had been released. WikiLeaks plans to release the entirety of the cables in phases over several months" (Wikipedia article on United States diplomatic cables leak, accessed 12-08-2010).

View Map + Bookmark Entry

The Website of MasterCard is Hacked by Wikileaks Supporters December 8, 2010

"The website of MasterCard has been hacked and partially paralysed in apparent revenge for the international credit card's decision to cease taking donations to WikiLeaks. A group of online activists calling themselves Anonymous appear to have orchestrated a DDOS ('distributed denial of service') attack on the site, bringing its service at www.mastercard.com to a halt for many users. " 'Operation: Payback' is the latest salvo in the increasingly febrile technological war over WikiLeaks. MasterCard announced on Monday that it would no longer process donations to the whistleblowing site, claiming it was engaged in illegal activity.  

"The group, which has been linked to the influential internet messageboard 4Chan, has been targeting commercial sites which have cut their ties with WikiLeaks. The Swiss bank PostFinance has already been targeted by Anonymous after it froze payments to WikiLeaks, and the group has vowed to target Paypal, which has also ceased processing payments to the site. Other possible targets are EveryDNS.net, which suspended dealings on 3 December, Amazon, which removed WikiLeaks content from its EC2 cloud on 1 December, and Visa, which suspended its own dealings yesterday.  

"The action was confirmed on Twitter at 9.39am by user @Anon_Operation, who later tweeted: 'WE ARE GLAD TO TELL YOU THAT http://www.mastercard.com/ is DOWN AND IT'S CONFIRMED! #ddos #wikileaks Operation:Payback(is a bitch!) #PAYBACK'

"No one from MasterCard could be reached for immediate comment, but a spokesman, Chris Monteiro, has said the site suspended dealings with WikiLeaks because 'MasterCard rules prohibit customers from directly or indirectly engaging in or facilitating any action that is illegal'.  

"DDOS attacks, which often involve flooding the target with requests so that it cannot cope with legitimate communication, are illegal" (http://www.guardian.co.uk/media/2010/dec/08/mastercard-hackers-wikileaks-revenge, accessed 12-08-2010).

View Map + Bookmark Entry

The Digital Public Library of America December 13, 2010

On December 13, 2010 John Palfrey and the Berkman Center for Internet & Society at Harvard announced that the Center would begin coordinating plans for a Digital Public Library of America. This initiative was stimulated by an article published by Robert Darnton in the New York Review of Books on October 4, 2010 entitled "A Library Without Walls."

Related to the Berkman Center's announcement, an article appeared in Libraryjournal.com by Michael Kelly on December 15, 2010: "New Plan Seeks a 'Big Tent' for a National Digital Library." 

View Map + Bookmark Entry

The Cultural Observatory at Harvard Introduces Culturomics December 16, 2010

On December 16, 2010 a highly interdisciplinary group of scientists, primarily from Harvard University (Jean-Baptiste Michel, Yuan Kui Shen, Aviva P. Aiden, Adrian Veres, Matthew K. Gray, The Google Books Team, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak and Erez Lieberman Aiden), published "Quantitative Analysis of Culture Using Millions of Digitized Books," Science 331, no. 6014 (14 January 2011): 176-182, published online December 16, 2010, DOI: 10.1126/science.1199644.

The authors were associated with the following organizations: Program for Evolutionary Dynamics; Institute for Quantitative Social Sciences; Department of Psychology; Department of Systems Biology; Computer Science and Artificial Intelligence Laboratory; Harvard Medical School; Harvard College; Google, Inc.; Houghton Mifflin Harcourt; Encyclopaedia Britannica, Inc.; Department of Organismic and Evolutionary Biology; Department of Mathematics; Broad Institute of Harvard and MIT, Cambridge; School of Engineering and Applied Sciences; Harvard Society of Fellows; and Laboratory-at-Large.

This paper from the Cultural Observatory at Harvard and collaborators represented the first major publication resulting from The Google Labs N-gram (Ngram) Viewer,

"the first tool of its kind, capable of precisely and rapidly quantifying cultural trends based on massive quantities of data. It is a gateway to culturomics! The browser is designed to enable you to examine the frequency of words (banana) or phrases ('United States of America') in books over time. You'll be searching through over 5.2 million books: ~4% of all books ever published" (http://www.culturomics.org/Resources/A-users-guide-to-culturomics, accessed 12-19-2010).

"We constructed a corpus of digitized texts containing about 4% of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of "culturomics", focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. "Culturomics" extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities" (http://www.sciencemag.org/content/early/2010/12/15/science.1199644, accessed 12-19-2010).  

"The Cultural Observatory at Harvard is working to enable the quantitative study of human culture across societies and across centuries. We do this in three ways: Creating massive datasets relevant to human culture Using these datasets to power wholly new types of analysis Developing tools that enable researchers and the general public to query the data" (http://www.culturomics.org/cultural-observatory-at-harvard, accessed 12-19-2010). 

View Map + Bookmark Entry

Founder of Wikileaks to Publish his Autobiography December 27, 2010

To pay for ongoing defense costs, Australian journalist, publisher, and Internet activist Julian Assange, the founder of WikiLeaks, stated in December 2010 that he would release an autobiography in 2011, having signed publishing deals that he told a British newspaper might be worth $1.7 million. Apart from the censorship and political elements of the case, the book contract underlined the distinction between commercial book publishing and websites that generate little or no revenue, such as WikiLeaks, which is intentionally non-profit.

"Mr. Assange told The Sunday Times of London that he had signed an $800,000 deal with Alfred A. Knopf, an imprint of Random House, in the United States, and a $500,000 deal with Canongate books in Britain. With further rights and serialization, he told the newspaper, he expected his earnings to rise to $1.7 million.  

"Paul Bogaards, a spokesman for Random House, said Monday that the book would be 'a complete account of his life through the present day, including the founding of WikiLeaks and the work he has done there.' The deal, Mr. Bogaards said, was initiated by one of Mr. Assange’s lawyers in mid-December and was signed in a matter of days. He would not discuss the financial terms. Canongate has not yet made a public comment but has spoken of its own deal in messages on Twitter.

“ 'I don’t want to write this book, but I have to,' Mr. Assange told the newspaper, explaining that his legal costs in fighting extradition to Sweden, where he is wanted for questioning about allegations of sexual misconduct, have reached more than $300,000. 'I need to defend myself and to keep WikiLeaks afloat,' he said.  

"Mr. Assange is under what he has called 'high-tech house arrest' in an English mansion while he awaits hearings, beginning Jan. 11, regarding those allegations. Two women in Stockholm have accused him of rape, unlawful coercion and two counts of sexual molestation over a four-day period last August. He has repeatedly denied any wrongdoing in the matter, and has called the case 'a smear campaign' led by those who seek to stop him from leaking classified government and corporate documents" (http://www.nytimes.com/2010/12/28/world/europe/28wiki.html?_r=1&hpw, accessed 12-28-2010).

View Map + Bookmark Entry

Facebook is the Most Searched for and Most Visited Website in America December 29, 2010

"Facebook was not only the most searched item of the year, but it passed Google as America’s most-visited website in 2010, according to a new report from Experian Hitwise.  

"For the second year in a row, 'facebook' was the top search term among U.S. Internet users. The search term accounted for 2.11% of all searches, according to Hitwise. Even more impressive is the fact that three other variations of Facebook made it into the top 10: “facebook login” at #2, 'facebook.com' at #6 and “www.facebook.com” at #9. Combined, they accounted for 3.48% of all searches, a 207% increase from Facebook’s position last year.  

"Rounding out the list of top search terms were YouTube, Craigslist, MySpace, eBay, Yahoo and Mapquest. Other companies that made big moves in terms of searches include Hulu, Netflix, Verizon and ESPN. The search term “games” also made its first appearance in the list of Hitwise’s top 50 search terms.  

"More interesting though is Facebook’s ascension to number one on Hitwise’s list of most-visited websites. The social network accounted for 8.93% of all U.S. visits in 2010 (January-November), beating Google (7.19%), Yahoo Mail (3.52%), Yahoo (3.30%) and YouTube (2.65%). However, Facebook didn’t beat the traffic garnered by all of Google’s properties combined (9.85%).  

"It’s only a matter of time until Facebook topples the entire Google empire, though. We’ve seen the trend develop for months: Facebook is getting bigger than Google. According to comScore, Facebook’s U.S. traffic grew by 55% in the last year and has shown no sign of slowing down" (http://mashable.com/2010/12/29/2010-the-year-facebook-dethroned-google-as-king-of-the-web-stats/, accessed 12-31-2010).

View Map + Bookmark Entry

The Smartphone Becomes the CPU of the Laptop January 2011

In January 2011 Motorola Mobility, headquartered in Libertyville, Illinois, introduced the Atrix 4G smartphone, powered by Nvidia's Tegra 2 dual-core processor and Android 2.2, with a 4-inch display, 1 GB of RAM, 16 GB of on-board storage, front- and rear-facing cameras, a 1930 mAh battery, and a fingerprint reader. Motorola announced that it would also sell laptop and desktop docks running a full version of Firefox, powered entirely by the phone.

What was significant about this smartphone was that the phone itself performed the information processing for the laptop or desktop interface.

View Map + Bookmark Entry

More than Ten Billion Apps are Downloaded from the Apple App Store January 22, 2011

On January 22, 2011 Apple's App Store completed its countdown to the ten billionth app downloaded.

View Map + Bookmark Entry

The New York Times Begins its "Recommendations Service" January 31, 2011

On January 31, 2011 The New York Times rolled out its interactive Recommendations service. When I first looked at it on February 2, 2011, the service reported that I had read 120 articles in the previous month, broken down into ten categories. Based on my previous reading history, it recommended twenty articles in that day's edition.

View Map + Bookmark Entry

Confession: A Roman Catholic iPhone App February 2011

Confession: A Roman Catholic App by Little i Apps, LLC, South Bend, Indiana:

"Designed to be used in the confessional, this app is the perfect aid for every penitent. With a personalized examination of conscience for each user, password protected profiles, and a step-by-step guide to the sacrament, this app invites Catholics to prayerfully prepare for and participate in the Rite of Penance. Individuals who have been away from the sacrament for some time will find Confession: A Roman Catholic App to be a useful and inviting tool.  

"The text of this app was developed in collaboration with Rev. Thomas G. Weinandy, OFM, Executive Director of the Secretariat for Doctrine and Pastoral Practices of the United States Conference of Catholic Bishops, and Rev. Dan Scheidt, pastor of Queen of Peace Catholic Church in Mishawaka, IN. The app received an imprimatur from Bishop Kevin C. Rhodes of the Diocese of Fort Wayne – South Bend. It is the first known imprimatur to be given for an iPhone/iPad app.

"From one of our users which we stand by:

=============================

"it does not and can not take the place of confessing before a validly ordained Roman Catholic priest in a Confessional, in person, either face to face, or behind the screen. Why? Because the Congregation on Divine Worship and the Sacraments has long ruled that Confessions by electronic media are invalid and that ABSOLUTION BY THE PRIEST must be given in person because the Seal of the Confessional must be Protected and for the Sacrament to be valid there has to be both the matter and the form which means THE PRIEST.

============================ -

"Custom examination of Conscience based upon age, sex, and vocation (single, married, priest, or religious)

"- Multiple user support with password protected accounts

"- Ability to add sins not listed in standard examination of conscience - Confession walkthrough including time of last confession in days, weeks, months, and years

"- Choose from 7 different acts of contrition

"- Custom interface for iPad

"- Full retina display support" (http://itunes.apple.com/us/app/confession-a-roman-catholic/id416019676?mt=8#, accessed 02-11-2011)

Cost: $1.99

View Map + Bookmark Entry

42.3% of the U.S. Population Uses Facebook February 2011

"A new report from eMarketer finds that most adult Americans with Internet access use Facebook at least once a month, and a full 42.3% of the entire American population was using the site as of this month.  

"By contrast, Twitter‘s penetration rate was much lower, sitting at around 7% of the total population and 9% of the Internet-using population, according to the report" (http://mashable.com/2011/02/24/facebook-twitter-number/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+Mashable+%28Mashable%29

View Map + Bookmark Entry

4.3 Billion IP Addresses Have Been Allocated February 3, 2011

The Internet Corporation for Assigned Names and Numbers (icann.org) announced that the last remaining IPv4 (Internet Protocol version 4) Internet addresses from the central pool of about 4.3 billion were allocated.

The next Internet protocol, IPv6, with 128-bit rather than 32-bit addresses, will open up a pool of Internet addresses roughly 8 × 10^28 times larger than the total pool of IPv4 addresses--a supply that should be sufficient for the foreseeable future.
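
A quick back-of-the-envelope check of that ratio, sketched in Python, comparing the 32-bit IPv4 and 128-bit IPv6 address spaces:

    # Comparing the IPv4 and IPv6 address pools (32-bit vs. 128-bit addresses).

    ipv4_addresses = 2 ** 32                    # about 4.3 billion
    ipv6_addresses = 2 ** 128                   # about 3.4 x 10^38
    ratio = ipv6_addresses // ipv4_addresses    # 2^96, about 7.9 x 10^28

    print(f"IPv4 pool:   {ipv4_addresses:.3e} addresses")
    print(f"IPv6 pool:   {ipv6_addresses:.3e} addresses")
    print(f"IPv6 / IPv4: {ratio:.3e}")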

View Map + Bookmark Entry

Worldwide Technological Capacity to Store, Communicate, and Compute Information February 10, 2011

On February 10, 2011 social scientist Martin Hilbert of the University of Southern California (USC) and information scientist Priscilla López of the Open University of Catalonia published "The World's Technological Capacity to Store, Communicate, and Compute Information." The report appeared first in Science Express; on April 1, 2011 it was published in Science, 332, 60-64. This was "the first time-series study to quantify humankind's ability to handle information." Notably, the authors did not attempt to address the information processing done by human brains—possibly impossible to quantify at the present time, if ever. 

"We estimated the world’s technological capacity to store, communicate, and compute information, tracking 60 analog and digital technologies during the period from 1986 to 2007. In 2007, humankind was able to store 2.9 × 10 20 optimally compressed bytes, communicate almost 2 × 10 21 bytes, and carry out 6.4 × 10 18 instructions per second on general-purpose computers. General-purpose computing capacity grew at an annual rate of 58%. The world’s capacity for bidirectional telecommunication grew at 28% per year, closely followed by the increase in globally stored information (23%). Humankind’s capacity for unidirectional information diffusion through broadcasting channels has experienced comparatively modest annual growth (6%). Telecommunication has been dominated by digital technologies since 1990 (99.9% in digital format in 2007), and the majority of our technological memory has been in digital format since the early 2000s (94% digital in 2007)" (The authors' summary).

"To put our findings in perspective, the 6.4 × 10 18 instructions per second that humankind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second (10 17 ). The 2.4 × 10 21 bits stored by humanity in all of its technological devices in 2007 is approaching an order of magnitude of the roughly 10 23 bits stored in the DNA of a human adult, but it is still minuscule as compared with the 10 90 bits stored in the observable universe. However, in contrast to natural information processing, the world’s technological information processing capacities are quickly growing at clearly exponential rates" (Conclusion of the paper).

"Looking at both digital memory and analog devices, the researchers calculate that humankind is able to store at least 295 exabytes of information. (Yes, that's a number with 20 zeroes in it.)

"Put another way, if a single star is a bit of information, that's a galaxy of information for every person in the world. That's 315 times the number of grains of sand in the world. But it's still less than one percent of the information that is stored in all the DNA molecules of a human being. 2002 could be considered the beginning of the digital age, the first year worldwide digital storage capacity overtook total analog capacity. As of 2007, almost 94 percent of our memory is in digital form.

"In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS. That's equivalent to every person in the world reading 174 newspapers every day. On two-way communications technology, such as cell phones, humankind shared 65 exabytes of information through telecommunications in 2007, the equivalent of every person in the world communicating the contents of six newspapers every day.

"In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, in the same general order of magnitude as the number of nerve impulses executed by a single human brain. Doing these instructions by hand would take 2,200 times the period since the Big Bang.

"From 1986 to 2007, the period of time examined in the study, worldwide computing capacity grew 58 percent a year, ten times faster than the United States' GDP. Telecommunications grew 28 percent annually, and storage capacity grew 23 percent a year" (http://www.sciencedaily.com/releases/2011/02/110210141219.htm)

View Map + Bookmark Entry

IBM's Watson Question Answering System Defeats Humans at Jeopardy! February 14 – February 16, 2011

On February 14, 2011 IBM's Watson question answering system supercomputer, developed at IBM's T J Watson Research Center, Yorktown Heights, New York, running DeepQA software, defeated the two best human Jeopardy! players, Ken Jennings and Brad Rutter. Watson's hardware consisted of 90 IBM Power 750 Express servers. Each server utilized a 3.5 GHz POWER7 eight-core processor, with four threads per core. The system operated with 16 terabytes of RAM.

The success of the machine underlined very significant advances in deep analytics and in the ability of a machine to process unstructured data, and especially to interpret and speak natural language.

"Watson is an effort by I.B.M. researchers to advance a set of techniques used to process human language. It provides striking evidence that computing systems will no longer be limited to responding to simple commands. Machines will increasingly be able to pick apart jargon, nuance and even riddles. In attacking the problem of the ambiguity of human language, computer science is now closing in on what researchers refer to as the “Paris Hilton problem” — the ability, for example, to determine whether a query is being made by someone who is trying to reserve a hotel in France, or simply to pass time surfing the Internet.  

"If, as many predict, Watson defeats its human opponents on Wednesday, much will be made of the philosophical consequences of the machine’s achievement. Moreover, the I.B.M. demonstration also foretells profound sociological and economic changes.  

"Traditionally, economists have argued that while new forms of automation may displace jobs in the short run, over longer periods of time economic growth and job creation have continued to outpace any job-killing technologies. For example, over the past century and a half the shift from being a largely agrarian society to one in which less than 1 percent of the United States labor force is in agriculture is frequently cited as evidence of the economy’s ability to reinvent itself.  

"That, however, was before machines began to 'understand' human language. Rapid progress in natural language processing is beginning to lead to a new wave of automation that promises to transform areas of the economy that have until now been untouched by technological change.  

" 'As designers of tools and products and technologies we should think more about these issues,' said Pattie Maes, a computer scientist at the M.I.T. Media Lab. Not only do designers face ethical issues, she argues, but increasingly as skills that were once exclusively human are simulated by machines, their designers are faced with the challenge of rethinking what it means to be human.  

"I.B.M.’s executives have said they intend to commercialize Watson to provide a new class of question-answering systems in business, education and medicine. The repercussions of such technology are unknown, but it is possible, for example, to envision systems that replace not only human experts, but hundreds of thousands of well-paying jobs throughout the economy and around the globe. Virtually any job that now involves answering questions and conducting commercial transactions by telephone will soon be at risk. It is only necessary to consider how quickly A.T.M.’s displaced human bank tellers to have an idea of what could happen" (John Markoff,"A Fight to Win the Future: Computers vs. Humans," http://www.nytimes.com/2011/02/15/science/15essay.html?hp, accessed 02-17-2011).

♦ As a result of this technological triumph, IBM took the unusual step of building a colorful website concerning all aspects of Watson, including numerous embedded videos.

♦ A few of many articles on the match published during or immediately after it included:

John Markoff, "Computer Wins on 'Jeopardy!': Trivial, It's Not," http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?hpw

Samara Lynn, "Dissecting IBM Watson's Jeopardy! Game," PC Magazinehttp://www.pcmag.com/article2/0,2817,2380351,00.asp

John C. Dvorak, "Watson is Creaming the Humans. I Cry Foul," PC Magazinehttp://www.pcmag.com/article2/0,2817,2380451,00.asp

Henry Lieberman published a three-part article in MIT Technology Review, "A Worthwhile Contest for Artificial Intelligence" http://www.technologyreview.com/blog/guest/26391/?nlid=4132

♦ An article which discussed the weaknesses of Watson versus a human in Jeopardy! was Greg Lindsay, "How I Beat IBM's Watson at Jeopardy! (3 Times)" http://www.fastcompany.com/1726969/how-i-beat-ibms-watson-at-jeopardy-3-times

♦ An opinion column emphasizing the limitations of Watson compared to the human brain was Stanley Fish, "What Did Watson the Computer Do?" http://opinionator.blogs.nytimes.com/2011/02/21/what-did-watson-the-computer-do/

♦ A critical response to Stanley Fish's column by Sean Dorrance Kelly and Hubert Dreyfus, author of What Computers Can't Do, was published in The New York Times at: http://opinionator.blogs.nytimes.com/2011/02/28/watson-still-cant-think/?nl=opinion&emc=tya1

View Map + Bookmark Entry

Two Billion People Now Use the Internet Regularly February 17, 2011

According to an article in The New York Times, two billion people in the world used the Internet regularly.

In rural America only 60% had broadband connections. 

"Over all, 28 percent of Americans do not use the Internet at all."

View Map + Bookmark Entry

The U. S. National Broadband Map February 17, 2011

The National Broadband Map (NBM), a searchable and interactive website that allows users to view broadband availability across every neighborhood in the United States, was first published.

The NBM was created by the U. S. National Telecommunications and Information Administration (NTIA), in collaboration with the Federal Communications Commission (FCC), and in partnership with 50 states, five territories and the District of Columbia. The NBM is a project of NTIA's State Broadband Initiative. The NBM will be updated approximately every six months. 

View Map + Bookmark Entry

Four Phases of Government Internet Surveillance and Censorship to Date February 25, 2011

Harvard Law professor, and Vice Dean, Library and Information Services, John Palfrey of the OpenNet Initiative wrote in "Middle East Conflict and an Internet Tipping Point" that the OpenNet Initiative had divided the ways in which states filter and practice surveillance over the Internet into four phases: "open Internet," "access denied," "access controlled," and "access contested."

"The first is the 'open Internet' period, from the network's birth through about 2000. In this period, there were few restrictions on the network globally. There was even an argument about whether the network could itself be regulated. This sense of unfettered freedom is a distant memory today.

"In the 'access denied' period that followed, through about 2005, states like China, Saudi Arabia, Tunisia, and dozens of others began to block access to certain information online. They developed technical Internet filtering modes to stop people from reaching certain websites, commonly including material deemed sensitive for political, cultural, or religious reasons.

"The most recent period, 'access controlled,' through 2010 or so, was characterized by the growth in the sophistication with which states began to control the flow of information online. Internet filtering grew in scope and scale, especially throughout Asia, the former Soviet states, and the Middle East and North Africa. Techniques to use the network for surveillance grew dramatically, as did "just-in-time" blocking approaches such as the use of distributed denial-of-service attacks against undesirable content. Overall, states got much more effective at pushing back on the use of the Internet by those who wished to share information broadly and for prodemocratic purposes.

"Today, we are entering a period that we should call 'access contested.' Activists around the world are pushing back on the denial of access and controls put in place by states that wish to restrict the free flow of information. This round of the contest, at least in the Middle East and North Africa, is being won by those who are using the network to organize against autocratic regimes. Online communities such as Herdict.org and peer-to-peer technologies like mesh networking provide specific ways for people to get involved directly in shaping how these technologies develop around the world" (http://www.technologyreview.com/web/32437/?p1=A1, accessed 02-28-2011).

View Map + Bookmark Entry

The Environmental Impacts of eBooks and eBook Readers March 2011

The Green Press Initiative issued a synthesis of various reports on The Environmental Impacts of eBooks and eBook Readers:

"Since the data suggests that sales of E-books are likely to increase while sales of printed books are likely to decrease, it is logical to question the environmental implications of this transition. In 2008 Green Press Initiative and the Book Industry Study Group commissioned a report on the environmental impacts of the U.S. book industry which included a lifecycle analysis of printed books. That report concluded that in 2006 the U.S. book industry consumed approximately 30 million trees and had a carbon footprint equivalent to 12.4 million metric tons of carbon dioxide, or 8.85 pounds per book sold.

"Determining the environmental impacts of an E-book presents a challenge that does not exist in estimating the impacts of a paper book. That challenge is the fact that user behavior will significantly influence the impact of an e-book. This is due to the fact that the manufacturing of the E-reader device accounts for the vast majority of an E-books environmental impact. Because of this, on a per book basis, a reader who reads 100 books on an e-reader will have almost 1/100th of the impact of someone who reads only one book on the same device. Additionally, two readers who each read the same number of books per year, can have a very different per-book environmental impact if one buys a new E-reader every year while the other keeps his for four years before replacing it. Because of the impact that user behavior can have on the environmental impact of E-books, any analysis will either have to make assumptions about the behavior of a “typical” reader of E-books, or else identify a break-even point in terms of the number of books that must be read on an E-reader to offset the environmental impacts of a corresponding number of paper books. However even this can be confused by the fact that it is not clear that reading one E-book offsets one paper book. For example, the ability to instantly download any book at any time may encourage E-reader owners to read more books in which case each e-book read would not necessarily correspond to a printed book that would have been read. Additionally someone who buys a printed book and later lends it to a friend to read would in effect halve the environmental impact of reading that book. As such any analysis should strive to account for this and determine a break-even point in terms of “printed books offset” rather than E-books read. Additional complexity is added by the fact that most E-readers can be used for a variety of tasks other than reading books. For example, most can read newspapers and magazines in addition to books and some E-readers can also read blogs and surf the internet. Tablet computers can allow a user to check e-mail, play games, view photos and videos, listen to music and surf the internet in addition to other things. Thus for the owner of a Tablet computer, who only spends 10% of his time using the tablet to read books, it would seem reasonable to assume that only 10% of the manufacturing impact of the tablet should be counted towards the impact of that users E-books. . . .

"As the number of printed books that the E-reader offsets increases, so do the benefits of that E-reader. At some point these gains offset the impact of manufacturing and using the E-reader. This “breakeven point” will be different for different metrics of environmental performance but for most it is likely somewhere between 30 and 70 printed books that are offset over the lifetime of the E-reader. For greenhouse gas emissions this number is probably between 20 and 35 books while for measures of human health impacts the number is probably closer to 70 books. In assessing the impact of an E-reader the idea of printed books offset must be carefully considered. As mentioned above, if the owner of an E-reader reads more books because of the ease and convenience of downloading a new title, then every book read on the device does not necessarily correspond to a printed book that is offset. Additionally the numbers in the figure above are based on a very simple comparison that is not likely to be replicated in the real world. The assumption is that the reader would either purchase a new printed book once and not share it with anyone else or that the reader would read the books on an E-reader and only use the E-reader for reading books. If a person would normally share a printed book with others, buy some used printed books, or borrow many of the printed books from the library then the numbers would need to be adjusted to account for that. Additionally, if the E-reader is used for other activities such as watching video, browsing the internet, checking email, or reading magazines and newspapers, it is unfair to assign the full impacts of producing the E-reader to E-books. More research is needed on typical user behavior in terms of time spent reading E-books verses other activities on E-readers and Tablet computers in order to make a more accurate comparison. If the trend of the iPad stealing market share from the Kindle continues, it seems likely that users will spend more time on the other activates that tablets like the iPad are optimized for. Additionally, if someone already owns a tablet computer or an E-reader, the marginal impact of downloading and reading an additional book is quite small. Thus for someone who already owns a device capable of reading E-books, the best choice from an environmental perspective would likely be to read a new book on that device."

View Map + Bookmark Entry

Koomey’s Law of Electrical Efficiency in Computing March 2011

Energy and environmental scientist Jonathan Koomey of Stanford University, with Stephen Berard, Maria Sanchez, and Henry Wong, published "Implications of Historical Trends in the Electrical Efficiency of Computing," IEEE Annals of the History of Computing 33, no. 3 (2011): 46-54. This historical paper was highly unusual for enunciating a predictive trend in computing technology, which the press labeled "Koomey's Law."

“Koomey’s law describes a long-term trend in the history of computing hardware. The number of computations per joule of energy dissipated has been doubling approximately every 1.57 years. This trend has been remarkably stable since the 1950s (R2 of over 98%) and has actually been somewhat faster than Moore’s law. Jon Koomey articulated the trend as follows: ‘at a fixed computing load, the amount of battery you need will fall by a factor of two every year and a half.’

"Because of Koomey’s law, the amount of battery needed for a fixed computing load will fall by a factor of 100 every decade. As computing devices become smaller and more mobile, this trend may be even more important than improvements in raw processing power for many applications. Furthermore, energy costs are becoming an increasingly important determinant of the economics of data centers, further increasing the importance of Koomey’s law" (Wikipedia article on Koomey's Law, accessed 11-19-2011).
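
A quick check of the arithmetic: a fixed doubling time for computations per joule translates into an improvement factor per decade of 2^(10/doubling time). The "factor of 100 per decade" figure corresponds to the rounder 1.5-year doubling time.

    # Sanity check on Koomey's law: improvement per decade implied by a fixed
    # doubling time for computations per joule.

    def improvement_per_decade(doubling_time_years):
        return 2 ** (10 / doubling_time_years)

    print(improvement_per_decade(1.57))  # ~83x per decade for the 1.57-year doubling time
    print(improvement_per_decade(1.5))   # ~101x per decade, the "factor of 100" rounding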

View Map + Bookmark Entry

The Impact of Automation on Legal Research March 4, 2011

"Armies of Expensive Lawyers Replaced by Cheaper Software," an article by John Markoff published in The New York Times, discussed the use of "e-discovery" (ediscovery) software which uses artificial intelligence to analyze millions of electronic documents from the linguistic, conceptual and sociological standpoint in a fraction of the time and at a fraction of the cost of the hundreds of lawyers previously required to do the task.

"These new forms of automation have renewed the debate over the economic consequences of technological progress.  

"David H. Autor, an economics professor at the Massachusetts Institute of Technology, says the United States economy is being 'hollowed out.' New jobs, he says, are coming at the bottom of the economic pyramid, jobs in the middle are being lost to automation and outsourcing, and now job growth at the top is slowing because of automation.  

" 'There is no reason to think that technology creates unemployment,' Professor Autor said. 'Over the long run we find things for people to do. The harder question is, does changing technology always lead to better jobs? The answer is no.'

"Automation of higher-level jobs is accelerating because of progress in computer science and linguistics. Only recently have researchers been able to test and refine algorithms on vast data samples, including a huge trove of e-mail from the Enron Corporation. 

“ 'The economic impact will be huge,' said Tom Mitchell, chairman of the machine learning department at Carnegie Mellon University in Pittsburgh. 'We’re at the beginning of a 10-year period where we’re going to transition from computers that can’t understand language to a point where computers can understand quite a bit about language.' "

View Map + Bookmark Entry

The Impact of Artificial Intelligence and Automation on Jobs March 6, 2011

In an op-ed column entitled "Degrees and Dollars," published in The New York Times, Nobel Prize-winning economist Paul Krugman of Princeton wrote concerning the impact of artificial intelligence and automation on jobs:

"The fact is that since 1990 or so the U.S. job market has been characterized not by a general rise in the demand for skill, but by “hollowing out”: both high-wage and low-wage employment have grown rapidly, but medium-wage jobs — the kinds of jobs we count on to support a strong middle class — have lagged behind. And the hole in the middle has been getting wider: many of the high-wage occupations that grew rapidly in the 1990s have seen much slower growth recently, even as growth in low-wage employment has accelerated."

"Some years ago, however, the economists David Autor, Frank Levy and Richard Murnane argued that this was the wrong way to think about it. Computers, they pointed out, excel at routine tasks, “cognitive and manual tasks that can be accomplished by following explicit rules.” Therefore, any routine task — a category that includes many white-collar, nonmanual jobs — is in the firing line. Conversely, jobs that can’t be carried out by following explicit rules — a category that includes many kinds of manual labor, from truck drivers to janitors — will tend to grow even in the face of technological progress.  

"And here’s the thing: Most of the manual labor still being done in our economy seems to be of the kind that’s hard to automate. Notably, with production workers in manufacturing down to about 6 percent of U.S. employment, there aren’t many assembly-line jobs left to lose. Meanwhile, quite a lot of white-collar work currently carried out by well-educated, relatively well-paid workers may soon be computerized. Roombas are cute, but robot janitors are a long way off; computerized legal research and computer-aided medical diagnosis are already here.

"And then there’s globalization. Once, only manufacturing workers needed to worry about competition from overseas, but the combination of computers and telecommunications has made it possible to provide many services at long range. And research by my Princeton colleagues Alan Blinder and Alan Krueger suggests that high-wage jobs performed by highly educated workers are, if anything, more “offshorable” than jobs done by low-paid, less-educated workers. If they’re right, growing international trade in services will further hollow out the U.S. job market."

View Map + Bookmark Entry

Walmart Buys Kosmix.com, Forming @WalmartLabs April 18, 2011

On April 18, 2011 Wal-Mart, the world’s largest retailer, agreed to buy Kosmix.com, a social media start-up focused on ecommerce, creating @WalmartLabs.

"Eric Schmidt famously observed that every two days now, we create as much data as we did from the dawn of civilization until 2003. A lot of the new data is not locked away in enterprise databases, but is freely available to the world in the form of social media: status updates, tweets, blogs, and videos.

"At Kosmix, we’ve been building a platform, called the Social Genome, to organize this data deluge by adding a layer of semantic understanding. Conversations in social media revolve around 'social elements' such as people, places, topics, products, and events. For example, when I tweet 'Loved Angelina Jolie in Salt,' the tweet connects me (a user) to Angelia Jolie (an actress) and SALT (a movie). By analyzing the huge volume of data produced every day on social media, the Social Genome builds rich profiles of users, topics, products, places, and events. The Social Genome platform powers the sites Kosmix operates today: TweetBeat, a real-time social media filter for live events; Kosmix.com, a site to discover content by topic; and RightHealth, one of the top three health and medical information sites by global reach. In March, these properties together served over 17.5 million unique visitors worldwide, who spent over 5.5 billion seconds on our services.

"Quite a few of us at Kosmix have backgrounds in ecommerce, having worked at companies such as Amazon.com and eBay. As we worked on the Social Genome platform, it became apparent to us that this platform could transform ecommerce by providing an unprecedented level of understanding about customers and products, going well beyond purchase data. The Social Genome enables us to take search, personalization and recommendations to the next level.

"That’s why we were so excited when Walmart invited us to share with them our vision for the future of retailing. Walmart is the world’s largest retailer, with 10.5 billion customer visits every year to their stores and 1.5 billion online – 1 in 10 customers around the world shop Walmart online, and that proportion is growing. More and more visitors to the retail stores are armed with powerful mobile phones, which they use both to discover products and to connect with their friends and with the world. It was very soon apparent that the Walmart leadership shared our vision and our enthusiasm. And so @WalmartLabs was born. . . .

"We are at an inflection point in the development of ecommerce. The first generation of ecommerce was about bringing the store to the web. The next generation will be about building integrated experiences that leverage the store, the web, and mobile, with social identity being the glue that binds the experience. Walmart’s enormous global reach and incredible scale of operations -- from the United States and Europe to growing markets like China and India -- is unprecedented. @WalmartLabs, which combines Walmart’s scale with Kosmix’s social genome platform, is in a unique position to invent and build this future" (http://walmartlabs.blogspot.com/search?updated-max=2011-11-30T21:01:00-08:00&max-results=7, accessed 01-20-2012).

View Map + Bookmark Entry

Microsoft Acquires Skype for $8.5 Billion May 2011

In its acquisition of Skype for $8.5 billion Microsoft acquired a company founded in 2003, which never made money, changed hands many times, and came with substantial debt. 

The purchase price was roughly ten times the $860 million revenue of the company in 2010. Skype's debt was $686 million — not a problem for Microsoft.

Microsoft paid such a premium for the company because at the time of purchase Skype was growing at the rate of 500,000 new registered users per day, had 170 million connected users, with 30 million users communicating on the Skype platform concurrently. Volume of communications over the platform totaled 209 billion voice and video minutes in 2010.

"Services like Skype can cut into the carriers’ revenues because they offer easy ways to make phone calls, videoconference and send messages free over the Internet, encroaching on the ways that phone companies have traditionally made money" (http://www.nytimes.com/2011/05/16/technology/16phone.html?hpw, accessed 05-16-2011).

View Map + Bookmark Entry

In May 2011 Netflix was the Largest Source of Internet Traffic in North America May 2011

In May 2011 video streaming company Netflix, headquartered in Los Gatos, California, was the largest source of Internet traffic in North America, accounting for 29.7 percent of peak downstream traffic. The company was also the largest overall source of Internet traffic.

"Currently, real-time entertainment applications consume 49.2 percent of peak aggregate traffic - up from 29.5 percent in 2009. And the company forecasts that the category will account for as much as 60 percent of peak aggregate traffic by the end of this year.

"And in Europe, the figure's even higher. Overall, individual subscribers in Europe consume twice the amount of data as North Americans" (http://www.tgdaily.com/games-and-entertainment-features/56015-netflix-becomes-biggest-source-of-internet-traffic, accessed 05-18-2011). 

View Map + Bookmark Entry

McKinsey Report on the Impact of the Internet on Growth, Jobs, and Prosperity May 2011

 McKinsey research into the Internet economies of the G-8 nations as well as Brazil, China, India, South Korea, and Sweden found that the web accounted for a significant and growing portion of global GDP. If measured as a sector, Internet-related consumption and expenditure were bigger than agriculture or energy. On average, the Internet contributed 3.4 percent to GDP in the 13 countries covered by the research—an amount the size of Spain or Canada in terms of GDP, and growing at a faster rate than that of Brazil.

"Research prepared by the McKinsey Global Institute and McKinsey's Technology, Media and Telecommunications Practices as part of a knowledge partnership with the e-G8 Forum, offers the first quantitative assessment of the impact of the Internet on GDP and growth, while also considering the most relevant tools governments and businesses can use to get the most benefit from the digital transformation. To assess the Internet's contribution to the global economy, the report analyzes two primary sources of value: consumption and supply. The report draws on a macroeconomic approach used in national accounts to calculate the contribution of GDP; a statistical econometric approach; and a microeconomic approach, analyzing the results of a survey of 4,800 small and medium-size enterprises in a number of different countries.  

"The Internet's impact on global growth is rising rapidly. The Internet accounted for 21 percent of GDP growth over the last five years among the developed countries MGI studied, a sharp acceleration from the 10 percent contribution over 15 years. Most of the economic value created by the Internet falls outside of the technology sector, with 75 percent of the benefits captured by companies in more traditional industries. The Internet is also a catalyst for job creation. Among 4,800 small and medium-size enterprises surveyed, the Internet created 2.6 jobs for each lost to technology-related efficiencies.

"The United States is the largest player in the global Internet supply ecosystem, capturing more than 30 percent of global Internet revenues and more than 40 percent of net income. It is also the country with the most balanced structure within the global ecosystem among the 13 countries studied, garnering relatively equal contributions from hardware, software and services, and telecommunications. The United Kingdom and Sweden are changing the game, in part driven by the importance and the performance of their telecom operators. India and China are strengthening their position in the global Internet ecosystem rapidly with growth rates of more than 20 percent. France, Canada, and Germany have an opportunity to leverage their strong Internet usage to increase their presence in the supply ecosystem. Other Asian countries are rapidly accelerating their influence on the Internet economy at faster rates than Japan. Brazil, Russia and Italy are in the early stages of Internet supply. They have strong potential for growth" (http://www.mckinsey.com/Insights/MGI/Research/Technology_and_Innovation/Internet_matters, accessed 01-19-2012).

View Map + Bookmark Entry

The Expanding Digital Universe: Surpassing 1.8 Zettabytes June 2011

John F. Gantz and David Reinsel of International Data Corporation (IDC) published a summary of their annual study of the digital universe on the fifth anniversary of the series:

"We always knew it was big – in 2010 cracking the zettabyte barrier. In 2011, the amount of information created and replicated will surpass 1.8 zettabytes (1.8 trillion gigabytes) - growing by a factor of 9 in just five years.

"But, as digital universe cosmologists, we have also uncovered a number of other things — some predictable, some astounding, and some just plain disturbing.

"While 75% of the information in the digital universe is generated by individuals, enterprises have some liability for 80% of information in the digital universe at some point in its digital life. The number of "files," or containers that encapsulate the information in the digital universe, is growing even faster than the information itself as more and more embedded systems pump their bits into the digital cosmos. In the next five years, these files will grow by a factor of 8, while the pool of IT staff available to manage them will grow only slightly. Less than a third of the information in the digital universe can be said to have at least minimal security or protection; only about half the information that should be protected is protected.

"The amount of information individuals create themselves — writing documents, taking pictures, downloading music, etc. — is far less than the amount of information being created about them in the digital universe.

"The growth of the digital universe continues to outpace the growth of storage capacity. But keep in mind that a gigabyte of stored content can generate a petabyte or more of transient data that we typically don't store (e.g., digital TV signals we watch but don't record, voice calls that are made digital in the network backbone for the duration of a call).  

"So, like our physical universe, the digital universe is something to behold — 1.8 trillion gigabytes in 500 quadrillion "files" — and more than doubling every two years. That's nearly as many bits of information in the digital universe as stars in our physical universe" (http://idcdocserv.com/1142, accessed 08-09-2011).

♦ In August 2011 a video presentation of John Gantz delivering his summary speech was available at this link: http://www.emc.com/collateral/demos/microsites/emc-digital-universe-2011/index.htm
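
The arithmetic behind IDC's headline growth claims can be checked with a short sketch: a factor of 9 over five years implies roughly 55% annual growth, consistent with the report's "more than doubling every two years."

    # Relating IDC's "factor of 9 in five years" to "more than doubling every
    # two years" via the implied compound annual growth rate.

    growth_factor = 9
    years = 5
    cagr = growth_factor ** (1 / years) - 1   # ~55% per year
    two_year_factor = (1 + cagr) ** 2         # ~2.4x every two years

    print(f"Implied annual growth: {cagr:.0%}")
    print(f"Factor over two years: {two_year_factor:.1f}x")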

View Map + Bookmark Entry

FaceBook Serves a Trillion Page Views in June 2011 June 2011

According to Google's DoubleClick Ad Planner list of the 1,000 most-visited sites on the web, Facebook, the most visited website in the world, served 1 trillion page views to 860,000,000 unique visitors in June 2011.

View Map + Bookmark Entry

Digital Democracy is Not So Democratic June 10, 2011

"Anyone with Internet access can generate online content and influence public opinion, according to popular belief. But a new study from the University of California, Berkeley, suggests that the social Web is becoming more of a playground for the affluent than a digital democracy.

"Despite the proliferation of social media – with Twitter and Facebook touted as playing pivotal roles in such pro-democracy movements as the Arab Spring – the bulk of today’s blogs, websites and video-sharing sites represent the perspectives of college-educated, Web 2.0-savvy users, the study says.

“ 'Having Internet access is not enough. Even among people online, those who are digital producers are much more likely to have higher incomes and educational levels,' said Jen Schradie, a doctoral candidate in sociology at UC Berkeley and author of the study published in the May online issue of Poetics, a Journal of Empirical Research on Culture, the Media and the Arts. 

"Schradie, a researcher at the campus’s Berkeley Center for New Media, analyzed data from more than 41,000 American adults surveyed between 2000 and 2008 in the Pew Internet and American Life Project. She found that college graduates are 1.5 times more likely to be bloggers than are high school graduates; twice as likely to post photos and videos and three times more likely to post an online rating or comment.  

"Overall, the study found, less than 10 percent of the U.S. population is participating in most online production activities, and having a college degree is a greater predictor of who will generate publicly available online content than being young and white" (http://newscenter.berkeley.edu/2011/06/07/digital-democracy/, accessed 0612-2011).

♦ You can watch a video presentation by Jen Schradie on The Digital Production Gap on YouTube at this link: http://www.youtube.com/watch?v=-029CXbeOjY

 

View Map + Bookmark Entry

News Corporation Sells MySpace for $545 Million Loss June 29, 2011

News Corporation sold the social media website MySpace to the advertising network Specific Media for "roughly $35 million." News Corporation had purchased MySpace in 2005 for $580 million.

"The News Corporation, which is controlled by Rupert Murdoch, had been trying since last winter to rid itself of the unprofitable unit, which was a casualty of changing tastes and may be a cautionary tale for social companies like Zynga and LinkedIn that are currently enjoying sky-high valuations. . . .

"Terms of the deal were not disclosed, but the News Corporation said that it would retain a minority stake. Specific Media said it had brought on board the artist Justin Timberlake as a part owner and an active player in MySpace’s future, but said little else about how the site would change.  

"The sale closes a complex chapter in the history of the Internet and of the News Corporation, which was widely envied by other media companies when it acquired MySpace in 2005. At that time, MySpace was the world’s fastest-growing social network, with 20 million unique visitors each month in the United States. That figure soon soared to 70 million, but the network could not keep pace with Facebook, which overtook MySpace two years ago" (http://mediadecoder.blogs.nytimes.com/2011/06/29/news-corp-sells-myspace-to-specific-media-for-35-million/?hp, accessed 06-30-2011).

View Map + Bookmark Entry

How Search Engines Have Become a Primary Form of External or Transactive Memory July 14, 2011

Betsy Sparrow of Columbia University, Jenny Liu, and Daniel M. Wegner of Harvard University published "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips," Science 333, no. 6043 (August 5, 2011): 776-778, published online July 14, 2011, DOI: 10.1126/science.1207745.

Abstract: 

"The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves."

First two paragraphs (footnotes removed):

"In a development that would have seemed extraordinary just over a decade ago, many of us have constant access to information. If we need to find out the score of a ball game, learn how to perform a complicated statistical test, or simply remember the name of the actress in the classic movie we are viewing, we need only turn to our laptops, tablets, or smartphones and we can find the answers immediately. It has become so commonplace to look up the answer to any question the moment it occurs that it can feel like going through withdrawal when we can’t find out something immediately. We are seldom offline unless by choice, and it is hard to remember how we found information before the Internet became a ubiquitous presence in our lives. The Internet, with its search engines such as Google and databases such as IMDB and the information stored there, has become an external memory source that we can access at any time.

"Storing information externally is nothing particularly novel, even before the advent of computers. In any long-term relationship, a team work environment, or other ongoing group, people typically develop a group or transactive memory (1), a combination of memory stores held directly by individuals and the memory stores they can access because they know someone who knows that information. Like linked computers that can address each other’s memories, people in dyads or groups form transactive memory systems (2, 3). The present research explores whether having online access to search engines, databases, and the like, has become a primary transactive memory source in itself. We investigate whether the Internet has become an external memory system that is primed by the need to acquire information. If asked the question whether there are any countries with only one color in their flag, for example, do we think about flags or immediately think to go online to find out? Our research then tested whether, once information has been accessed, our internal encoding is increased for where the information is to be found rather than for the information itself."

An article by Alexander Bloom published in Harvard Magazine in November 2011 had this to say regarding the research:

"Wegner, the senior author of the study, believes the new findings show that the Internet has become part of a transactive memory source, a method by which our brains compartmentalize information. First hypothesized by Wegner in 1985, transactive memory exists in many forms, as when a husband relies on his wife to remember a relative’s birthday. '[It is] this whole network of memory where you don’t have to remember everything in the world yourself,' he says. 'You just have to remember who knows it.' Now computers and technology as well are becoming virtual extensions of our memory. The idea validates habits already forming in our daily lives. Cell phones have become the primary location for phone numbers. GPS devices in cars remove the need to memorize directions. Wegner points out that we never have to stretch our memories too far to remember the name of an obscure movie actor or the capital of Kyrgyzstan—we just type our questions into Google. 'We become part of the Internet in a way,' he says. 'We become part of the system and we end up trusting it.' "(http://harvardmagazine.com/2011/11/how-the-web-affects-memory, accessed 12-11-2011).

View Map + Bookmark Entry

Google Acquires Smart-Phone Maker Motorola Mobility; Sells its Hardware Division in January 2014 August 15, 2011 – January 2014

On August 15, 2011 Google announced that it had agreed to acquire the smart-phone manufacturer Motorola Mobility, headquartered in Libertyville, Illinois, for $12.5 billion. This was Google's largest acquisition to date.

"In a statement, Google said the deal was largely driven by the need to acquire Motorola's patent portfolio, which it said would help it defend Android against legal threats from competitors armed with their own patents. This issue has come to the fore since a consortium of technology companies led by Apple and Microsoft purchased more than 6,000 mobile-device-related patents from Nortel Networks for about $4.5 billion, in early July. Battle lines are being drawn around patents, as companies seek to protect their interests in the competitive mobile industry through litigation as well as innovation.  

"However, as people increasingly access the Web via mobile devices, the acquisition could also help Google remain central to their Web experience in the years to come. As Apple has demonstrated with its wildly popular iPhone, this is far easier to achieve if a company can control the hardware, as well as the software, people carry in their pockets. Comments made by Google executives hint that Motorola could also play a role in shaping the future of the Web in other areas—for instance, in set-top boxes. Motorola is by far Google's largest acquisition, and it takes the company into uncertain new territory. The deal is also likely to draw antitrust scrutiny because of the reach Google already has with Android, which runs on around half of all smart phones in the United States.  

"Motorola, which makes the Droid smart phone, went all-in with Google's Android platform in 2008, declaring that all of its devices would use the open-source mobile operating system.  

"Before his departure as Google CEO, Eric Schmidt had begun pressing Google employees to shift their attention to mobile. Cofounder and new CEO Larry Page seems determined to maintain this change of focus. In a conference call this morning, he told investors, 'It's no secret that Web usage is increasingly shifting to mobile devices, a trend I expect to continue. With mobility continuing to take center stage in the computing revolution, the combination with Motorola is an extremely important event in Google's continuing evolution that will drive a lot of improvements in our ability to deliver great user experiences.' " (http://www.technologyreview.com/web/38320/?nlid=nldly&nld=2011-08-16, accessed 08-17-2011).

On January 29, 2014 Larry Page, CEO of Google, announced in the Google Official Blog that the company was selling Motorola's handset division at a multi-billion dollar loss:

"We’ve just signed an agreement to sell Motorola to Lenovo for $2.91 billion. As this is an important move for Android users everywhere, I wanted to explain why in detail. 

"We acquired Motorola in 2012 to help supercharge the Android ecosystem by creating a stronger patent portfolio for Google and great smartphones for users. Over the past 19 months, Dennis Woodside and the Motorola team have done a tremendous job reinventing the company. They’ve focused on building a smaller number of great (and great value) smartphones that consumers love. Both the Moto G and the Moto X are doing really well, and I’m very excited about the smartphone lineup for 2014. And on the intellectual property side, Motorola’s patents have helped create a level playing field, which is good news for all Android’s users and partners.

"But the smartphone market is super competitive, and to thrive it helps to be all-in when it comes to making mobile devices. It’s why we believe that Motorola will be better served by Lenovo—which has a rapidly growing smartphone business and is the largest (and fastest-growing) PC manufacturer in the world. This move will enable Google to devote our energy to driving innovation across the Android ecosystem, for the benefit of smartphone users everywhere. As a side note, this does not signal a larger shift for our other hardware efforts. The dynamics and maturity of the wearable and home markets, for example, are very different from that of the mobile industry. We’re excited by the opportunities to build amazing new products for users within these emerging ecosystems.

"Lenovo has the expertise and track record to scale Motorola into a major player within the Android ecosystem. They have a lot of experience in hardware, and they have global reach. In addition, Lenovo intends to keep Motorola’s distinct brand identity—just as they did when they acquired ThinkPad from IBM in 2005. Google will retain the vast majority of Motorola’s patents, which we will continue to use to defend the entire Android ecosystem."

View Map + Bookmark Entry

Free Online Artificial Intelligence Course Attracts 58,000 Students August 15, 2011

Sebastian Thrun, Research Professor of Computer Science at Stanford and a leading roboticist, and Peter Norvig, Director of Research at Google, Inc., in partnership with the Stanford University School of Engineering, offered a free online course entitled An Introduction to Artificial Intelligence.

According to an article by John Markoff in The New York Times, by August 15, 2011 more than 58,000 students from around the world had registered for this free course—nearly four times Stanford's entire student body.

"The online students will not get Stanford grades or credit, but they will be ranked in comparison to the work of other online students and will receive a 'statement of accomplishment.'

"For the artificial intelligence course, students may need some higher math, like linear algebra and probability theory, but there are no restrictions to online participation. So far, the age range is from high school to retirees, and the course has attracted interest from more than 175 countries" (http://www.nytimes.com/2011/08/16/science/16stanford.html?hpw, accessed 08-16-2011).

One fairly obvious reason why so many students signed up is that Norvig is famous in the field as the co-author, with Stuart Russell, of the standard textbook on AI, Artificial Intelligence: A Modern Approach (first edition: 1995), which has been translated into many languages and has sold over 200,000 copies.

View Map + Bookmark Entry

Toward Cognitive Computing Systems August 18, 2011

On August 18, 2011 "IBM researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition. The technology could yield many orders of magnitude less power consumption and space than used in today’s computers. 

"In a sharp departure from traditional concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. Its first two prototype chips have already been fabricated and are currently undergoing testing.  

"Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brains structural and synaptic plasticity.  

"To do this, IBM is combining principles from nanoscience, neuroscience and supercomputing as part of a multi-year cognitive computing initiative. The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

"The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment – all while rivaling the brain’s compact size and low power usage. The IBM team has already successfully completed Phases 0 and 1.  

" 'This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century,' said Dharmendra Modha, project leader for IBM Research. 'Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture. These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government.' " (http://www-03.ibm.com/press/us/en/pressrelease/35251.wss, accessed 08-21-2011).

View Map + Bookmark Entry

Snapchat: Communication and Automatic Destruction of Information September 2011

In September 2011 Stanford University students Evan Spiegel and Robert Murphy produced the initial release of the photo messaging application Snapchat, famously launching the program "from Spiegel's father's living room." Users of the app take photos, record videos, add text and drawings, and send them to a controlled list of recipients. Photographs and videos sent through the app are known as "Snaps". Users set a time limit for how long recipients can view their Snaps, after which the photos or videos are hidden from the recipient's device and deleted from Snapchat's servers. In December 2013 the range was from 1 to 10 seconds. 
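Snapchat has not published its implementation, but the mechanism described above, a sender-chosen viewing window after which content is discarded, can be sketched in a few lines of Python. Everything below, including the class name and the default window, is a hypothetical illustration rather than Snapchat's actual code.

import time

class EphemeralSnap:
    # Hypothetical sketch of a time-limited message: the sender picks a
    # viewing window (Snapchat allowed 1 to 10 seconds in 2013), and the
    # content is discarded once that window has elapsed after first viewing.
    def __init__(self, payload, view_seconds=10):
        self.payload = payload
        self.view_seconds = view_seconds
        self.first_viewed_at = None

    def view(self):
        now = time.time()
        if self.first_viewed_at is None:
            self.first_viewed_at = now
        if now - self.first_viewed_at <= self.view_seconds:
            return self.payload      # still within the viewing window
        self.payload = None          # expired: drop the content
        return None

snap = EphemeralSnap(b"<photo bytes>", view_seconds=5)
print(snap.view() is not None)       # True when viewed within the window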

In November 2013 it was reported that Snapchat was sharing 400 million photos per day—more than Facebook.

"Founder Evan Spiegel explained that Snapchat is intended to counteract the trend of users being compelled to manage an idealized online identity of themselves, which he says has "taken all of the fun out of communicating". Snapchat can locate a user's friends through the user's smartphone contact list. Research conducted in the UK has shown that, as of June 2013, half of all 18 to 30-year-old respondents (47 percent) have received nude pictures, while 67 percent had received images of "inappropriate poses or gestures".

"Snapchat launched the "Snapchat Stories" feature in early October 2013 and released corresponding video advertisements with the tagline "It's about time." The feature allows users to create links of shared content that can be viewed an unlimited number of times over a 24-hour period. The "stories" are simultaneously shared with the user's friends and content remains for 24 hours before disappearing.

"Another controversy surrounding the rising popularity of Snapchat in the United States relates to a phenomenon known as sexting. This involves the sending and receiving of explicit images that often involve some degree of nudity. Because the application is commonly used by younger generations, often below the age of eighteen, the question has been raised whether or not certain users are technically distributing child pornography. For this reason, many adults disapprove of their children's use of the application. Snapchat's developers continue to insist that the application is not sexting-friendly and that they do not condone any kind of pornographic use.

"On November 14, 2013, police in LavalQuebec, Canada arrested 10 boys aged 13 to 15 on child pornography charges after the boys allegedly captured and shared explicit photos of teenage girls sent through Snapchat as screenshots.

"In February 2013, a study by market research firm Survata found that mobile phone users are more likely to "sext over SMS than over Snapchat" (Wikipedia article on Snapchat, accessed 12-12-2013).

View Map + Bookmark Entry

Michael Hart, Father of eBooks & Founder of Project Gutenberg, Dies September 6, 2011

"AMONG the episodes in his life that didn’t last, that were over almost before they began, including a spell in the army and a try at marriage, Michael Hart was a street musician in San Francisco. He made no money at it, but then he never bought into the money system much—garage-sale T-shirts, canned beans for supper, were his sort of thing. He gave the music away for nothing because he believed it should be as freely available as the air you breathed, or as the wild blackberries and raspberries he used to gorge on, growing up, in the woods near Tacoma in Washington state. All good things should be abundant, and they should be free.  

"He came to apply that principle to books, too. Everyone should have access to the great works of the world, whether heavy (Shakespeare, 'Moby-Dick', pi to 1m places), or light (Peter Pan, Sherlock Holmes, the 'Kama Sutra'). Everyone should have a free library of their own, the whole Library of Congress if they wanted, or some esoteric little subset; he liked Romanian poetry himself, and Herman Hesse’s 'Siddhartha'. The joy of e-books, which he invented, was that anyone could read those books anywhere, free, on any device, and every text could be replicated millions of times over. He dreamed that by 2021 he would have provided a million e-books each, a petabyte of information that could probably be held in one hand, to a billion people all over the globe—a quadrillion books, just given away. As powerful as the Bomb, but beneficial.

"That dream had grown from small beginnings: from him, a student at the University of Illinois in Urbana, hanging round a huge old mainframe computer on the night of the Fourth of July in 1971, with the sound of fireworks still in his ears. The engineers had given him by his reckoning $100m-worth of computer time, in those infant days of the internet. Wondering what to do, ferreting in his bag, he found a copy of the Declaration of Independence he had been given at the grocery store, and a light-bulb pinged on in his head. Slowly, on a 50-year-old Teletype machine with punched-paper tape, he began to bang out 'When in the Course of human events…'  

"This was the first free e-text, and none better as a declaration of freedom from the old-boy network of publishing. What he typed could not even be sent as an e-mail, in case it crashed the ancient Arpanet system; he had to send a message to say that it could be downloaded. Six people did, of perhaps 100 on the network. It was followed over years by the Gettysburg Address, the Constitution and the King James Bible, all arduously hand-typed, full of errors, by Mr Hart. No one particularly noticed. He mended people’s hi-fis to get by. Then from 1981, with a growing band of volunteer helpers scanning, rather than typing, a flood of e-texts gathered. By 2011 there were 33,000, accumulating at a rate of 200 a month, with translations into 60 languages, all given away free. No wonder money-oriented rivals such as Google and Yahoo! sprang up all round as the new century dawned, claiming to have invented e-books before him. He called his enterprise Project Gutenberg. This was partly because Gutenberg with his printing press had put wagonloads of books within the reach of people who had never read before; and also because printing had torn down the wall between haves and have-nots, literate and illiterate, rich and poor, until whole power-structures toppled. Mr Hart, for all his burly, hippy affability, was a cyber-revolutionary, with a snappy list of the effects he expected e-books to have:

Books prices plummet.

Literacy rates soar.

Education rates soar.

Old structures crumble, as did the Church.

Scientific Revolution.

Industrial Revolution.

Humanitarian Revolution.

"If all these upheavals were tardier than he hoped, it was because of the Mickey Mouse copyright laws. Every time men found a speedier way to spread information to each other, government made it illegal. During the lifetime of Project Gutenberg alone, the average time a book stayed in copyright in America rose from 30 to almost 100 years. Mr Hart tried to keep out of trouble, posting works that were safely in the public domain, but chafed at being unable to give away books that were new, and fought all copyright extensions like a tiger. “Unlimited distribution” was his mantra. Give everyone everything! Break the bars of ignorance down!

"The power of plain words

"He lived without a mobile phone, in a chaos of books and wiring. The computer hardware in his basement, from where he kept an unbossy watch over the whole project, often not bothering to pick up his monthly salary, was ten years old, and the software 20. Simple crowdsourcing was his management style, where people scanned or keyed in works they loved and sent them to him. Project Gutenberg books had a frugal look, with their Plain Vanilla ASCII format, which might have been produced on an old typewriter; but then it was content, not form, that mattered to Mr Hart. These were great thoughts, and he was sending them to people everywhere, available to read at the speed of light, and free as the air they breathed." (http://www.economist.com/node/21530075, accessed 09-27-2011).

♦ For another obituary of Michael Hart, of Urbana, Illinois, I recommend the one in Brewster Kahle's Blog, posted September 7, 2011.

View Map + Bookmark Entry

Amazon Introduces the Kindle Fire September 28 – November 14, 2011

On September 28, 2011 Amazon announced the Kindle Fire, a tablet computer version of Amazon.com's Kindle e-book reader, with a  7" color multi-touch display with IPS technology, running a forked version of Google's Android operating system. The device, which included access to the Amazon Appstore, streaming movies and TV shows, and Kindle's e-books, was released on November 14, 2011 for $199.

In January 2012 Amazon advertised that there were 19 million movies, TV shows, songs, magazines, and books available for the Kindle Fire.

View Map + Bookmark Entry

Steve Jobs Dies October 5, 2011

Steve Jobs, one of the most influential and daring innovators in the history of media, and arguably the most innovative and influential figure in the computer industry since the development of the personal computer, died at the age of 56 after a well-publicized battle with pancreatic cancer. As an inspirational leader he was responsible for building the first commercially successful personal computer (the Apple II), for developing and popularizing the graphical user interface (the Macintosh) that made personal computers user friendly, for developing desktop publishing, for making music truly portable (iPod, iTunes), for bringing the elements of the personal computer to cell phones (iPhone), and for driving the widespread acceptance of tablet computers (iPad). Jobs not only rescued Apple Computer from near failure and made it, for a time, the most valuable company in the S&P 500, but also achieved great success through his ownership of Pixar Animation Studios, which he eventually sold to The Walt Disney Company. Characteristic of Jobs' style were exceptional boldness in the conception of products, high quality and ease of use, and elegance of industrial design.

"Mr. Jobs even failed well. NeXT, a computer company he founded during his years in exile from Apple, was never a commercial success. But it was a technology pioneer. The World Wide Web was created on a NeXT computer, and NeXT software is the core of Apple’s operating systems today" (http://www.nytimes.com/2011/10/09/business/steve-jobs-and-the-power-of-taking-the-big-chance.html?hp).

An article published in The New York Times on October 8, 2011 compared and contrasted the lives and achievements of Steve Jobs with that earlier great American inventor and innovator, Thomas Alva Edison.

View Map + Bookmark Entry

"Zero to Eight: Children's Media Use in America" October 25, 2011

On October 25, 2011 Common Sense Media of San Francisco issued Zero to Eight: Children's Media Use in America by Vicky Rideout. Some of the key findings of their report were:

"Even very young children are frequent digital media users.

"MOBILE MEDIA. Half (52%) of all children now have access to one of the newer mobile devices at home: either a smartphone (41%) a video iPod (21%), or an iPad or ther tablet device (8%). More than a quarter (29%) of all parents have downloaded 'apps'. . . for their children to use. And more than a third (36%) of children have ever used one of these new mobile devices, including 10% of 0-to 1-year-olds, 39% of 2-to 4-year-olds, and 52% of 5- to 8-year-olds. In a typical day 11% of all 0-to 8 year-year olds use a cell phone, iPod, iPad, or similar device for media consumption and those who do spend an average of :43 doing so.  

"COMPUTERS. Computer use is pervasive among very young children, with half (53%) of all 2- to 4-year-olds having ever used a computer, and nine out of ten (90%) 5- to 8-year-olds having done so. For many of these children, computer use is a regular occurrence: 22% of 5 to 8-year olds use a computer at least once a day and another 46% use it at least once a week. Even among 2- to 4-year-olds, 12% use a computer every day, with another 24% doing so at least once a week. Among all children who have used a computer, the average age of first use was just 3 1/2 years old.

"VIDEO GAMES. Playing console video games is also popular among these young children: Half (51%) of all 0- to 8-year-olds have ever played a console video game, including 44% of 2- to 4-year-olds and
81% of 5- to 8-year-olds. Among those who have played console video games, the average age at first use was just under 4 years old (3 years and 11 months). Among 5- to 8-year-olds, 17% play console
video games at least once a day, and another 36% play them at least once a week. . . .

"Children under 2 spend twice as much time watching
TV and videos as they do reading books.

"In a typical day, 47% of babies and toddlers ages 0 through 1 watch TV or DVDs, and those who do watch spend an average of nearly two hours (1:54) doing so. This is an average of :53 among all children
in this age group, compared to an average of :23 a day reading or being read to. Nearly one in three (30%) has a TV in their bedroom. In 2005, among children ages 6-23 months, 19% had a TV in their
bedroom. Looking just at 6- to 23-month-olds in the current study, 29% have a TV in their bedroom. . . .

"Media use varies significantly by race and socio-economic status, but not much by gender.

"RACE AND SOCIO-ECONOMIC STATUS. African- American children spend an average of 4:27 a day with media (including music, reading, and screen media), compared to 2:51 among white children and 3:28 among Hispanics. Children from higher- income families or with more highly educated parents spend less time with media than other children do (for example, 2:47 a day among higher-income children vs. 3:34 among lower-income youth). Twenty percent of children in upper income homes have a TV in their bedroom, compared to 64% of those from lower- income homes. 

"GENDER. The only substantial difference between boys’ and girls’ media use is in console video games. Boys are more likely to have ever played a console video game than girls are (56% vs. 46%), to have a video game player in their bedroom (14% vs. 7%), and to play console video games every day (14% vs. 5%). Boys average :16 a day playing console games, compared to an average of :04 a day for girls."

View Map + Bookmark Entry

Digital Books Represent 25% of Sales of Some Categories of Books but Less than 5% of Children's Books November 20, 2011

An article entitled "For Their Children, Many E-Book Fans Insist on Paper," published in The New York Times, suggested that many parents, including those who are highly sophisticated users of computers and the Internet, believe that children learn to read most efficiently from physical books because interacting with the physical object continues to have value beyond content alone, especially, it is believed, in the developmental stages of reading and in learning the "reading habit." Electronic books and e-book readers, they believe, represent distractions for young children.

"As the adult book world turns digital at a faster rate than publishers expected, sales of e-books for titles aimed at children under 8 have barely budged. They represent less than 5 percent of total annual sales of children’s books, several publishers estimated, compared with more than 25 percent in some categories of adult books" (http://www.nytimes.com/2011/11/21/business/for-their-children-many-e-book-readers-insist-on-paper.html?hp, accessed 11-20-2011)

View Map + Bookmark Entry

Rapid Growth of the Digital Textbook Market in the U.S. November 23, 2011

"According to the Student Monitor, a private student market research company based in [Ridgewood] New Jersey, about 5 percent of all textbooks acquired in the autumn in the United States were digital textbooks. That is more than double the 2.1 percent of the spring semester.  

"Simba Information, a research company specializing in publishing, estimates that electronic textbooks will generate $267.3 million this year in sales in the United States. That is a rise of 44.3 percent over last year. The American Association of Publishers estimates that the college textbooks industry generated a total of $4.58 billion in sales last year.

"Kathy Micky, a senior analyst at Simba, said digital textbooks were expected 'to be the growth driver for the industry in the future.' Her company estimates that by 2013, digital textbooks will make up 11 percent of the textbook market revenue" (http://www.nytimes.com/2011/11/24/world/americas/schoolwork-gets-swept-up-in-rush-to-go-digital.html?hpw, accessed 11-25-2011).

View Map + Bookmark Entry

Google Maps 6.0 for Android Introduces Indoor Maps and a "My Location" Feature November 29, 2011

“ 'Where am I?' and 'What's around me?' are two questions that cartographers, and Google Maps, strive to answer. With Google Maps’ 'My Location' feature, which shows your location as a blue dot, you can see where you are on the map to avoid walking the wrong direction on city streets, or to get your bearings if you’re hiking an unfamiliar trail. Google Maps also displays additional details, such as places, landmarks and geographical features, to give you context about what’s nearby. And now, Google Maps for Android enables you to figure out where you are and see where you might want to go when you’re indoors.

"When you’re inside an airport, shopping mall or retail store, a common way to figure out where you are is to look for a freestanding map directory or ask an employee for help. Starting today, with the release of Google Maps 6.0 for Android, that directory is brought to the palm of your hands, helping you determine where you are, what floor you're on, and where to go indoors.

"Detailed floor plans automatically appear when you’re viewing the map and zoomed in on a building where indoor map data is available. The familiar 'blue dot' icon indicates your location within several meters, and when you move up or down a level in a building with multiple floors, the interface will automatically update to display which floor you’re on. All this is achieved by using an approach similar to that of ‘My Location’ for outdoor spaces, but fine tuned for indoors." (http://googleblog.blogspot.com/2011/11/new-frontier-for-google-maps-mapping.html, accessed. 12-1-2011)

View Map + Bookmark Entry

More than 10 Billion Android Apps Downloaded December 6, 2011

According to the Official Google Blog, app downloads from the Android Market at the beginning of December 2011 exceeded 10 billion downloads, with a growth rate of one billion app downloads per month.

View Map + Bookmark Entry

100 Million Words Translated per Week by Google Translate December 8, 2011

According to an infographic released by Google, in December 2011 100 million words in 200 different languages were translated weekly by Google Translate. 

View Map + Bookmark Entry

More than One Trillion Videos Were Played Back on YouTube in 2011 December 20, 2011

"In total, there were more than 1,000,000,000,000 (one trillion) playbacks on YouTube this year (yep, count ‘em, 12 zeroes). That’s about 140 views for every person on the earth. More than twice as many stars as in the Milky Way. And if I had a penny for every … OK, you get my drift, it’s a big number" (http://googleblog.blogspot.com/2011/12/what-were-we-watching-this-year-lets.html, accessed 12-20-2011).

View Map + Bookmark Entry

2012 – 2016

Sales of eBook Readers in 2011 January 5, 2012

"In 2011, manufacturers shipped about 30 million e-book readers over all, up 108 percent from 2010. . . .  

"Then in 2015, the reader market will shrink to 38 million, presumably because consumers will be attracted to tablets.  

"Sales of touch-screen tablets have continued to be strong. Apple’s iPad shipped upward of 40 million units in 2011 alone, according to estimates by Forrester, a research firm.  

"Amazon and Barnes & Noble are blurring the lines between e-readers and tablets with the Kindle Fire and the Nook Tablet. Forrester Research estimates that in the fourth quarter of 2011, Amazon shipped about five million units of the Kindle Fire and Barnes & Noble shipped about two million Tablets" (http://www.nytimes.com/2012/01/06/technology/nook-from-barnes-noble-gains-more-e-book-readers.html?src=rechp, accessed 01-05-2012).

View Map + Bookmark Entry

Transforming Google into a Search Engine that Understands Not Only Content but People and Relationships January 10, 2012

"We’re transforming Google into a search engine that understands not only content, but also people and relationships. We began this transformation with Social Search, and today we’re taking another big step in this direction by introducing three new features:  

"1. Personal Results, which enable you to find information just for you, such as Google+ photos and posts—both your own and those shared specifically with you, that only you will be able to see on your results page;  

"2. Profiles in Search, both in autocomplete and results, which enable you to immediately find people you’re close to or might be interested in following; and, 

"3. People and Pages, which help you find people profiles and Google+ pages related to a specific topic or area of interest, and enable you to follow them with just a few clicks. Because behind most every query is a community. 

"Together, these features combine to create Search plus Your World. Search is simply better with your world in it, and we’re just getting started" (http://googleblog.blogspot.com/2012/01/search-plus-your-world.html, accessed 01-11-2010).

View Map + Bookmark Entry

Technological Unemployment: Are Robots Replacing Workers? January 23, 2012 – January 13, 2013

On January 23, 2012 Erik Brynjolfsson and Andrew McAfee of MIT issued Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy.

Drawing on research by their team at the Center for Digital Business at MIT, the authors show that digital technologies are rapidly encroaching on skills that used to belong to humans alone.

"This phenomenon is both broad and deep, and has profound economic implications. Many of these implications are positive; digital innovation increases productivity, reduces prices (sometimes to zero), and grows the overall economic pie.

"But digital innovation has also changed how the economic pie is distributed, and here the news is not good for the median worker. As technology races ahead, it can leave many people behind. Workers whose skills have been mastered by computers have less to offer the job market, and see their wages and prospects shrink. Entrepreneurial business models, new organizational structures and different institutions are needed to ensure that the average worker is not left behind by cutting-edge machines.

"In Race Against the Machine Brynjolfsson and McAfee bring together a range of statistics, examples, and arguments to show that technological progress is accelerating, and that this trend has deep consequences for skills, wages, and jobs. The book makes the case that employment prospects are grim for many today not because there's been technology has stagnated, but instead because we humans and our organizations aren't keeping up."

About a year later, on January 13, 2013, CBS television's 60 Minutes broadcast a report on automation in the workplace taking the viewpoint expressed in Brynjolfsson and McAfee's book entitled "Are robots hurting job growth?" (accessed 01-27-2013).

Following up on the issue, on January 23, 2013 John Markoff published an article in The New York Times entitled "Robot Makers Spread Global Gospel of Automation." Markoff reported that Henrik I. Christensen, the Kuka Chair of Robotics at the Georgia Institute of Technology's College of Computing, was highly critical of the 60 Minutes report.

"During his talk, Dr. Christensen said that the evidence indicated that the opposite was true. While automation may transform the work force and eliminate certain jobs, it also creates new kinds of jobs that are generally better paying and that require higher-skilled workers.

" 'We see today that the U.S. is still the biggest manufacturing country in terms of dollar value,' Dr. Christensen said. 'It’s also important to remember that manufacturing produces more jobs in associated areas than anything else.'

"An official of the International Federation of Robotics acknowledged that the automation debate had sprung back to life in the United States, but he said that America was alone in its anxiety over robots and automation.

" 'This is not happening in either Europe or Japan,' said Andreas Bauer, chairman of the federation’s industrial robot suppliers group and an executive at Kuka Robotics, a German robot maker.

"To buttress its claim that automation is not a job killer but instead a way for the United States to compete against increasingly advanced foreign competitors, the industry group reported findings on Tuesday that it said it would publish in February. The federation said the industry would directly and indirectly create from 1.9 million to 3.5 million jobs globally by 2020.

"The federation held a news media event at which two chief executives of small American manufacturers described how they had been able to both increase employment and compete against foreign companies by relying heavily on automation and robots.

“ 'Automation has allowed us to compete on a global basis. It has absolutely created jobs in southwest Michigan,' said Matt Tyler, chief executive of Vickers Engineering, an auto parts supplier. 'Had it not been for automation, we would not have beat our Japanese competitor; we would not have beat our Chinese competitor; we would not have beat our Mexican competitor. It’s a fact.'

"Also making the case was Drew Greenblatt, the widely quoted president and owner of Marlin Steel, a Baltimore manufacturer of steel products that has managed to expand and add jobs by deploying robots and other machines to increase worker productivity.

“ 'In December, we won a job from a Chicago company that for over a decade has bought from China,' he said. 'It’s a sheet-metal bracket; 160,000 sheet-metal brackets, year in, year out. They were made in China, now they’re made in Baltimore, using steel from a plant in Indiana and the robot was made in Connecticut.'

"A German robotics engineer argued that automation was essential to preserve jobs and also vital to make it possible for national economies to support social programs.

“ 'Countries that have high productivity can afford to have a good social system and a good health system,' said Alexander Verl, head of the Fraunhofer Institute for Manufacturing Engineering in Germany. “You see that to some extent in Germany or in Sweden. These are countries that are highly automated but at the same time they spend money on elderly care and the health system.'

"In the report presented Tuesday by the federation, the United States lags Germany, South Korea and Japan in the density of manufacturing robots employed (measured as the number of robots per 10,000 human workers). South Korea, in particular, sharply increased its robot-to-worker ratio in the last three years and Germany has twice the robot density as the United States, according to a presentation made by John Dulchinos, a board member of the Robot Industries Association and the chief executive of Adept Technology, a Pleasanton, Calif., maker of robots. 

"The report indicates that although China and Brazil are increasing the number of robots in their factories, they still trail the advanced manufacturing countries.  

"Mr. Dulchinos said that the United States had only itself to blame for the decline of its manufacturing sector in the last decade.

“ 'I can tell you that in the late 1990s my company’s biggest segment was the cellular phone market,' he said. 'Almost overnight that industry went away, in part because we didn’t do as good a job as was required to make that industry competitive.'

"He said that if American robots had been more advanced it would have been possible for those companies to maintain the lowest cost of production in the United States.  

“ 'They got all packed up and shipped to China,' Mr. Dulchinos said. 'And so you fast-forward to today and there are over a billion cellphones produced a year and not a single one is produced in the United States.'

"Yet, in the face of growing anxiety about the effects of automation on the economy, there were a number of bright spots. The industry is now generating $25 billion in annual revenue. The federation expects 1.6 million robots to be produced each year by 2015."

View Map + Bookmark Entry

Nearly 50% of U.S. Mobile Subscribers Own Smartphones March 29, 2012

According to a Nielsen report accessed on March 29, 2012, 49.7 percent of U.S. mobile subscribers owned smartphones as of February 2012, an increase from 36 percent a year earlier. Two-thirds of those who got a new phone in the previous three months chose a smartphone over a feature phone. Android-based phones led the U.S. smartphone market with a 48 percent share, Apple's iPhone had 32 percent, and BlackBerry had 11.6 percent.

Source:

http://www.technolog.msnbc.msn.com/technology/technolog/half-us-cellular-subscribers-own-smartphones-nielsen-586757, accessed 03-29-2012.

View Map + Bookmark Entry

Harvard & M.I.T. to Offer Free Online Courses May 2, 2012

On May 2, 2012 Harvard and the Massachusetts Institute of Technology announced a new nonprofit partnership, known as edX, to offer free online courses from both universities.

"Harvard’s involvement follows M.I.T.’s announcement in December that it was starting an open online learning project, MITx. Its first course, Circuits and Electronics, began in March, enrolling about 120,000 students, some 10,000 of whom made it through the recent midterm exam. Those who complete the course will get a certificate of mastery and a grade, but no official credit. Similarly, edX courses will offer a certificate but not credit.

"But Harvard and M.I.T. have a rival — they are not the only elite universities planning to offer free massively open online courses, or MOOCs, as they are known. This month, Stanford, Princeton, the University of Pennsylvania and the University of Michigan announced their partnership with a new commercial company, Coursera, with $16 million in venture capital.

"Meanwhile, Sebastian Thrun, the Stanford professor who made headlines last fall when 160,000 students signed up for his Artificial Intelligence course, has attracted more than 200,000 students to the six courses offered at his new company, Udacity.

"The technology for online education, with video lesson segments, embedded quizzes, immediate feedback and student-paced learning, is evolving so quickly that those in the new ventures say the offerings are still experimental.

“ 'My guess is that what we end up doing five years from now will look very different from what we do now,' said Provost Alan M. Garber of Harvard, who will be in charge of the university’s involvement" (http://www.nytimes.com/2012/05/03/education/harvard-and-mit-team-up-to-offer-free-online-courses.html?_r=1, accessed 05-04-2012).

View Map + Bookmark Entry

Online Advertising is Expected to Surpass Print Advertising, But TV Advertising Dwarfs Both October 2012

According to the October 2012 IAB Internet advertising revenue report by the Interactive Advertising Bureau, a New York-based international organization founded in 1996:

"In the first half of the year, U.S. Internet sites collected $17 billion in ad revenue, a 14 percent increase over the same period of 2011. . . . In the second half of last year, websites had $16.8 billion in ad revenue. So even if growth were to slow in the second half, digital media this year could exceed the $35.8 billion that U.S. print magazines and newspapers garnered in ad revenue in 2011.

"In fact, the digital marketing research firm eMarketer projects 2012 Internet ad spending in excess of $37 billion, while print advertising spending is projected to fall to $34.3 billion.

"Meanwhile, television ad spending—which Nielsen reports was nearly $75 billion in 2011—continues to dwarf both" (http://www.technologyreview.com/news/429638/online-advertising-poised-to-finally-surpass/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20121017, accessed 10-22-2012).

View Map + Bookmark Entry

2.5 Quintillion Bytes of Data Each Day October 23, 2012

"Today the data we have available to make predictions has grown almost unimaginably large: it represents 2.5 quintillion bytes of data each day, Mr. Silver tells us, enough zeros and ones to fill a billion books of 10 million pages each. Our ability to tease the signal from the noise has not grown nearly as fast. As a result, we have plenty of data but lack the ability to extract truth from it and to build models that accurately predict the future that data portends" ("Mining Truth From Data Babel. Nate Silver’s ‘Signal and the Noise’ Examines Predictions"  By Leonard Mlodinow, NYTimes.com 10-23-2012).

View Map + Bookmark Entry

Windows 8, With Touch Screen Features, is Released October 26, 2012

On October 26, 2012 Microsoft released the Windows 8 operating system to the general public. Development of Windows 8 started in 2009, before the release of its predecessor, Windows 7, the last iteration of Windows designed primarily for desktop computers. Windows 8 introduced very significant changes, primarily focused on mobile devices such as tablets and cell phones with touch screens, and was designed:

"to rival other mobile operating systems like Android and iOS, taking advantage of new or emerging technologies like USB 3.0, UEFI firmware, near field communications, cloud computing and the low-power ARM architecture, new security features such as malware filtering, built-in antivirus capabilities, a new installation process optimized for digital distribution, and support for secure boot (a UEFI feature which allows operating systems to be digitally signed to prevent malware from altering the boot process), the ability to synchronize certain apps and settings between multiple devices, along with other changes and performance improvements. Windows 8 also introduces a new shell and user interface based on Microsoft's "Metro" design language, featuring a new Start screen with a grid of dynamically updating tiles to represent applications, a new app platform with an emphasis on touchscreen input, and the new Windows Store to obtain and/or purchase applications to run on the operating system" (Wikipedia article on Windows 8, accessed 12-14-2012).

On December 13, 2012 MIT's technologyreview.com published an interview with Julie Larson-Green, head of product development at Microsoft, in which Larson-Green explained why Microsoft decided to rethink and redesign, in a relatively radical manner, the operating system used by 1.2 billion people:

Why was it necessary to make such broad changes in Windows 8?

"When Windows was first created 25 years ago, the assumptions about the world and what computing could do and how people were going to use it were completely different. It was at a desk, with a monitor. Before Windows 8 the goal was to launch into a window, and then you put that window away and you got another one. But with Windows 8, all the different things that you might want to do are there at a glance with the Live Tiles. Instead of having to find many little rocks to look underneath, you see a kind of dashboard of everything that’s going on and everything you care about all at once. It puts you closer to what you’re trying to get done. 

Windows 8 is clearly designed with touch in mind, and many new Windows 8 PCs have touch screens. Why is touch so important? 

"It’s a very natural way to interact. If you get a laptop with a touch screen, your brain clicks in and you just start touching what makes it faster for you. You’ll use the mouse and keyboard, but even on the regular desktop you’ll find yourself reaching up doing the things that are faster than moving the mouse and moving the mouse around. It’s not like using the mouse, which is more like puppeteering than direct manipulation. 

In the future, are all PCs going to have touch screens? 

"For cost considerations there might always be some computers without touch, but I believe that the vast majority will. We’re seeing that the computers with touch are the fastest-selling right now. I can’t imagine a computer without touch anymore. Once you’ve experienced it, it’s really hard to go back.

Did you take that approach in Windows 8 as a response to the popularity of mobile devices running iOS and Android? 

"We started planning Windows 8 in June of 2009, before we shipped Windows 7, and the iPad was only a rumor at that point. I only saw the iPad after we had this design ready to go. We were excited. A lot of things they were doing about mobile and touch were similar to what we’d been thinking. We [also] had differences. We wanted not just static icons on the desktop but Live Tiles to be a dashboard for your life; we wanted you to be able to do things in context and share across apps; we believed that multitasking is important and that people can do two things at one time. 

Can touch coexist with a keyboard and mouse interface? Some people have said it doesn’t feel right to have both the newer, touch-centric elements and the old-style desktop in Windows 8.

"It was a very definite choice to have both environments. A finger’s never going to replace the precision of a mouse. It’s always going to be easier to type on a keyboard than it is on glass. We didn’t want you to have to make a choice. Some people have said that it’s jarring, but over time we don’t hear that. It’s just getting used to something that’s different. Nothing was homogenous to start with, when you were in the browser it looked different than when you were in Excel."

View Map + Bookmark Entry

$2.6 Billion Spent on Ads on Phones and Tablets in 2012 October 29, 2012

In a New York Times article published on October 29, 2012 Claire Cain Miller estimated that advertisers would spend $2.6 billion on ads on phones and tablets in 2012— less than 2 percent of the amount they would spend over all, but more than triple what they spent in 2010.

"Google earns 56 percent of all mobile ad dollars and 96 percent of mobile search ad dollars, according to eMarketer. The company said it is on track to earn $8 billion in the coming year from mobile sales, which includes ads as well as apps, music and movies it sells in its Google Play store. But the vast majority of that money comes from ads, it said."

View Map + Bookmark Entry

Coursera Enrolls Nearly Two Million Students from 196 Countries in Online Courses within its First Year November 20, 2012

On November 20, 2012 the online educational technology company Coursera, founded in Mountain View, California, by computer science professors Andrew Ng and Daphne Koller of Stanford University in April 2012, had enrolled about 1,900,000 students from at least 196 countries in at least one course. At this time Coursera was partnering with 33 universities in the United States and around the world to distribute courses over the Internet.

View Map + Bookmark Entry

eBook Reading Jumps; Print Book Reading Declines December 17, 2012

"The population of e-book readers is growing. In the past year, the number of those who read e-books increased from 16% of all Americans ages 16 and older to 23%. At the same time, the number of those who read printed books in the previous 12 months fell from 72% of the population ages 16 and older to 67%.  

"Overall, the number of book readers in late 2012 was 75% of the population ages 16 and older, a small and statistically insignificant decline from 78% in late 2011.  

"The move toward e-book reading coincides with an increase in ownership of electronic book reading devices. In all, the number of owners of either a tablet computer or e-book reading device such as a Kindle or Nook grew from 18% in late 2011 to 33% in late 2012. As of November 2012, some 25% of Americans ages 16 and older own tablet computers such as iPads or Kindle Fires, up from 10% who owned tablets in late 2011. And in late 2012 19% of Americans ages 16 and older own e-book reading devices such as Kindles and Nooks, compared with 10% who owned such devices at the same time last year" (Pew Internet and American Life Project, 12-27-2012).

View Map + Bookmark Entry

"Libraries Have Shifted from Warehouses of Books & Materials to Become Participatory Sites of Culture and Learning" December 28, 2012

"Contemporary libraries have shifted from warehouses of books and materials to become participatory sites of culture and learning that invite, ignite and sustain conversations.

"The media scholar Henry Jenkins has identified that such participatory sites of culture share five traits:  

"· Creating learning spaces through multiple participatory media;

"· Providing opportunities for creating and sharing original works and ideas;  

"· Crafting an environment in which novices’ and experts’ roles are fluid as people learn together;  

"· Positing the library as a place where members feel a sense of belonging, value and connectedness; and  

"· Helping people believe their contributions matter by incorporating their ideas and feedback.  

"Modern libraries of all kinds – public, school, academic and special – are using this lens of participatory culture to help their communities rethink the idea of a “library.” By putting relationships with people first, libraries can recast and expand the possibilities of what we can do for communities by embodying what Guy Kawasaki calls enchantment: trustworthiness, likability, and exceptional services and products.

"Libraries in various communities provide enchantment through traditional services, like story time, bookmobiles, classes and rich collections of books. However, libraries are also incorporating innovative new roles: librarians as instructional partners, libraries as “makerspaces,” libraries as centers of community publishing and digital learning labs.  

"While libraries face many challenges – budget cuts, an ever-shifting information landscape, stereotypes that sometimes hamper how people see libraries, and rapidly evolving technologies – our greatest resource is community participation. Relationships with the community build an organic library, that is of the people, by the people and for the people (Buffy J. Hamilton, http://www.nytimes.com/roomfordebate/2012/12/27/do-we-still-need-libraries/its-not-just-story-time-and-bookmobiles, accessed 12-29-2012). 

View Map + Bookmark Entry

"Information Technology Dividends Outpace All Others" January 11, 2013

"For what appears to be the first time ever, information technology companies in the Standard & Poor’s index of 500 stocks are paying more in dividends than companies in any other sector, S.&P. reported this week. Multimedia

"Off the Charts: High Tech, High Dividends S.&P. Dow Jones Indices reported that in 2012 the technology sector accounted for 14.7 percent of all dividends paid to investors in the 500 companies, up from 10.3 percent in 2011 and from a little over 5 percent back in 2004. It replaced the consumer staples sector, which had been the largest payer of dividends for the previous three years.  

"The change was largely because of the decision by Apple, now the most valuable company in the world, to begin paying dividends last year. The company had been public for more than three decades before it announced plans in March to begin making payouts. Four other technology companies in the index — all but one of which had been public for more than two decades without paying a dividend — later joined in making payments to shareholders.  

"With those changes, 60 percent — 42 — of the 70 technology stocks in the index are now dividend payers. The dividends from many technology companies are relatively small, however, and of the other sectors, only health care comes close to having as large a share of companies that do not pay dividends" (http://www.nytimes.com/2013/01/12/business/information-technology-dividends-surge-past-consumer-staples-sector.html, accessed 01-12-2013).

View Map + Bookmark Entry

The Pew Internet Report on Library Services in the Digital Age January 22, 2013

"Released: Janaury 22, 2013

"Patrons embrace new technologies – and would welcome more. But many still want printed books to hold their central place

"Summary of findings

"The internet has already had a major impact on how people find and access information, and now the rising popularity of e-books is helping transform Americans’ reading habits. In this changing landscape, public libraries are trying to adjust their services to these new realities while still serving the needs of patrons who rely on more traditional resources. In a new survey of Americans’ attitudes and expectations for public libraries, the Pew Research Center’s Internet & American Life Project finds that many library patrons are eager to see libraries’ digital services expand, yet also feel that print books remain important in the digital age.  

"The availability of free computers and internet access now rivals book lending and reference expertise as a vital service of libraries. In a national survey of Americans ages 16 and older:  

" • 80% of Americans say borrowing books is a “very important” service libraries provide.

" • 80% say reference librarians are a “very important” service of libraries.

" • 77% say free access to computers and the internet is a “very important” service of libraries.

"Moreover, a notable share of Americans say they would embrace even wider uses of technology at libraries such as:  

" • Online research services allowing patrons to pose questions and get answers from librarians: 37% of Americans ages 16 and older would “very likely” use an “ask a librarian” type of service, and another 36% say they would be “somewhat likely” to do so.

"• Apps-based access to library materials and programs: 35% of Americans ages 16 and older would “very likely” use that service and another 28% say they would be “somewhat likely” to do so.

" • Access to technology “petting zoos” to try out new devices: 35% of Americans ages 16 and older would “very likely” use that service and another 34% say they would be “somewhat likely” to do so.

" • GPS-navigation apps to help patrons locate material inside library buildings: 34% of Americans ages 16 and older would “very likely” use that service and another 28% say they would be “somewhat likely” to do so.

" • “Redbox”-style lending machines or kiosks located throughout the community where people can check out books, movies or music without having to go to the library itself: 33% of Americans ages 16 and older would “very likely” use that service and another 30% say they would be “somewhat likely” to do so.

" • “Amazon”-style customized book/audio/video recommendation schemes that are based on patrons’ prior library behavior: 29% of Americans ages 16 and older would “very likely” use that service and another 35% say they would be “somewhat likely” to do so." (http://libraries.pewinternet.org/2013/01/22/library-services/, accessed 03-04-2013).

View Map + Bookmark Entry

Google Introduces "Google Glass" Explorer Edition April 15, 2013

On April 15, 2013 Google introduced Google Glass, an optical head-mounted display (OHMD) wearable computer. The augmented reality device displays information in a smartphone-like hands-free format. Wearers communicate with the Internet via natural language voice commands. Google started selling Google Glass to qualified "Glass Explorers" in the US on April 15, 2013 for a limited period for $1,500, before it became available to the public on May 15, 2014 for the same price.

View Map + Bookmark Entry

On the Twentieth Anniversary CERN Restores the First Website April 30, 2013

On April 30, 1993 CERN, Geneva, Switzerland, published documents which released the World Wide Web software into the public domain.

"To mark the [twentieth] anniversary of the publication of the document that made web technology free for everyone to use, CERN is starting a project to restore the first website and to preserve the digital assets that are associated with the birth of the web. To learn more about the project and the first website, visit http://first-website.web.cern.ch"

"This project aims to preserve some of the digital assets that are associated with the birth of the web. For a start we would like to restore the first URL - put back the files that were there at their earliest possible iterations. Then we will look at the first web servers at CERN and see what assets from them we can preserve and share. We will also sift through documentation and try to restore machine names and IP addresses to their original state. Beyond this we want to make http://info.cern.ch - the first web address - a destination that reflects the story of the beginnings of the web for the benefit of future generations."

View Map + Bookmark Entry

The Size of the Digital Universe in 2013 and Prediction of its Growth Rate June 2013

"Because of smartphones, tablets, social media sites, e-mail and other forms of digital communications, the world creates 2.5 quintillion bytes of new data daily, according to I.B.M.

"The company estimates that 90 percent of the data that now exists in the world has been created in just the last two years. From now until 2020, the digital universe is expected to double every two years, according to a study by the International Data Corporation" (http://www.nytimes.com/2013/06/09/us/revelations-give-look-at-spy-agencys-wider-reach.html?hpw, accessed 06-08-2013).

View Map + Bookmark Entry

The First Master's Degree Offered through Massive Open Online Courses by a Major University August 17, 2013

On August 17, 2013 The New York Times reported that Georgia Tech, which operates one of the country’s top computer science programs, planned to offer, beginning in January 2014, a master’s degree in computer science delivered through massive open online courses (MOOCs) for $6,600 — far less than the $45,000 on-campus price.

"Zvi Galil, the dean of the university’s College of Computing, expects that in the coming years, the program could attract up to 10,000 students annually, many from outside the United States and some who would not complete the full master’s degree. 'Online, there’s no visa problem,' he said.  

"The program rests on an unusual partnership forged by Dr. Galil and Sebastian Thrun, a founder of Udacity, a Silicon Valley provider of the open online courses.  

"Although it is just one degree at one university, the prospect of a prestigious low-cost degree program has generated great interest. Some educators think the leap from individual noncredit courses to full degree programs could signal the next phase in the evolution of MOOCs — and bring real change to higher education."

"From their start two years ago, when a free artificial intelligence course from Stanford enrolled 170,000 students, free massive open online courses, or MOOCs, have drawn millions and yielded results like the perfect scores of Battushig, a 15-year-old Mongolian boy, in a tough electronics course offered by the Massachusetts Institute of Technology" (http://www.nytimes.com/2013/08/18/education/masters-degree-is-new-frontier-of-study-online.html?hp, accessed 08-18-2013).

View Map + Bookmark Entry

Teaching Keyboard Skills in Kindergarten October 2013

By October 2013, in the forty-five states, the District of Columbia, and four territories of the United States that had adopted the Common Core State Standards Initiative, children as early as kindergarten were learning to use a keyboard—a radical change from the traditional order, in which handwriting was taught long before typing.

"A skill that has been taught for generations in middle or high school — first on manual typewriters, then electric word processors and finally on computer keyboards — is now becoming a staple of elementary schools. Educators around the country are rushing to teach typing to children who have barely mastered printing by hand.

"The Common Core standards make frequent references to technology skills, stating that students in every grade should be able to use the Internet for research and use digital tools in their schoolwork to incorporate video, sound and images with writing.

"But the standardized tests linked to the Common Core make those expectations crystal clear because the exams — which will be given in 2014-2015 — require students to be able to manipulate a mouse; click, drag and type answers on a keyboard; and, starting in third grade, write online. Fourteen states have agreed to field-test the exams in the spring to help those creating the tests iron out the wrinkles and make improvements" (http://wapo.st/1ci9YSR, accessed 10-14-2013).

View Map + Bookmark Entry

Zero to Eight: Children's Media Use in America 2013 October 28, 2013

On October 28, 2013 Common Sense Media of San Francisco issued Zero to Eight: Children's Media Use in America 2013 by Vicky Rideout, a two-year follow-up to their study of October 2011. Key findings in the 2013 report were:

"Children’s access to mobile media devices is dramatically higher than it was two years ago.

"Among families with children age 8 and under, there has been a five-fold increase in ownership of tablet devices such as iPads, from 8% of all families in 2011 to 40% in 2013. The percent of children with access to some type of 'smart' mobile device at home (e.g., smartphone, tablet) has jumped from half (52%) to three-quarters (75%) of all children in just two years.

"Almost twice as many children have used mobile media compared to two years ago, and the average amount of time children spend using mobile devices has tripled.

"Seventy-two percent of children age 8 and under have used a mobile device for some type of media activity such as playing games, watching videos, or using apps, up from 38% in 2011. In fact, today, 38% of children under 2 have used a mobile device for media (compared to 10% two years ago). The percent of children who use mobile devices on a daily basis – at least once a day or more – has more than doubled, from 8% to 17%. The amount of time spent using these devices in a typical day has tripled, from an average of :05 a day among all children in 2011 up to :15 a day in 2013. [Throughout the report, times are presented in hours:minutes format. For example, “1:46” indicates one hour and 46 minutes.] The difference in the average time spent with mobile devices is due to two factors: expanded access, and the fact that those who use them do so for longer periods of time. Among those who use a mobile device in a typical day, the average went from :43 in 2011 to 1:07 in 2013."

View Map + Bookmark Entry

The Growing Economic and Social Impact of Artificial Intelligence December 29, 2013

On December 29, 2013 The New York Times published an article by Michael Fitzpatrick on Japan's Todai Robot Project entitled "Computers Jump to the Head of the Class." This was the first article that I ever read that spelled out the potential dystopian impact of advances in artificial intelligence on traditional employment and also on education. Because the article was relatively brief I decided to quote it in full:

"TOKYO — If a computer could ace the entrance exam for a top university, what would that mean for mere mortals with average intellects? This is a question that has bothered Noriko Arai, a mathematics professor, ever since the notion entered her head three years ago.

“I wanted to get a clear image of how many of our intellectual activities will be replaced by machines. That is why I started the project: Can a Computer Enter Tokyo University? — the Todai Robot Project,” she said in a recent interview.

Tokyo University, known as Todai, is Japan’s best. Its exacting entry test requires years of cramming to pass and can defeat even the most erudite. Most current computers, trained in data crunching, fail to understand its natural language tasks altogether.

Ms. Arai has set researchers at Japan’s National Institute of Informatics, where she works, the task of developing a machine that can jump the lofty Todai bar by 2021.

If they succeed, she said, such a machine should be capable, with appropriate programming, of doing many — perhaps most — jobs now done by university graduates.

With the development of artificial intelligence, computers are starting to crack human skills like information summarization and language processing.

Given the exponential growth of computing power and advances in artificial intelligence, or A.I., programs, the Todai robot’s task, though daunting, is feasible, Ms. Arai says. So far her protégé, a desktop computer named Todai-kun, is excelling in math and history but needs more effort in reading comprehension.

There is a significant danger, Ms. Arai says, that the widespread adoption of artificial intelligence, if not well managed, could lead to a radical restructuring of economic activity and the job market, outpacing the ability of social and education systems to adjust.

Intelligent machines could be used to replace expensive human resources, potentially undermining the economic value of much vocational education, Ms. Arai said.

“Educational investment will not be attractive to those without unique skills,” she said. Graduates, she noted, need to earn a return on their investment in training: “But instead they will lose jobs, replaced by information simulation. They will stay uneducated.”

In such a scenario, high-salary jobs would remain for those equipped with problem-solving skills, she predicted. But many common tasks now done by college graduates might vanish.

“We do not know in which areas human beings outperform machines. That means we cannot prepare for the changes,” she said. “Even during the industrial revolution change was a lot slower.”

Over the next 10 to 20 years, “10 percent to 20 percent pushed out of work by A.I. will be a catastrophe,” she says. “I can’t begin to think what 50 percent would mean — way beyond a catastrophe and such numbers can’t be ruled out if A.I. performs well in the future.”

She is not alone in such an assessment. A recent study published by the Program on the Impacts of Future Technology, at Oxford University’s Oxford Martin School, predicted that nearly half of all jobs in the United States could be replaced by computers over the next two decades.

Some researchers disagree. Kazumasa Oguro, professor of economics at Hosei University in Tokyo, argues that smart machines should increase employment. “Most economists believe in the principle of comparative advantage,” he said. “Smart machines would help create 20 percent new white-collar jobs because they expand the economy. That’s comparative advantage.”

Others are less sanguine. Noriyuki Yanagawa, professor of economics at Tokyo University, says that Japan, with its large service sector, is particularly vulnerable.

“A.I. will change the labor demand drastically and quickly,” he said. “For many workers, adjusting to the drastic change will be extremely difficult.”

Smart machines will give companies “the opportunity to automate many tasks, redesign jobs, and do things never before possible even with the best human work forces,” according to a report this year by the business consulting firm McKinsey.

Advances in speech recognition, translation and pattern recognition threaten employment in the service sectors — call centers, marketing and sales — precisely the sectors that provide most jobs in developed economies. As if to confirm this shift from manpower to silicon power, corporate investment in the United States in equipment and software has never been higher, according to Andrew McAfee, the co-author of “Race Against the Machine” — a cautionary tale for the digitized economy.

Yet according to the technology market research firm Gartner, top business executives worldwide have not grasped the speed of digital change or its potential impact on the workplace. Gartner’s 2013 chief executive survey, published in April, found that 60 percent of executives surveyed dismissed as “futurist fantasy” the possibility that smart machines could displace many white-collar employees within 15 years.

“Most business and thought leaders underestimate the potential of smart machines to take over millions of middle-class jobs in the coming decades,” Kenneth Brant, research director at Gartner, told a conference in October: “Job destruction will happen at a faster pace, with machine-driven job elimination overwhelming the market’s ability to create valuable new ones.”

Optimists say this could lead to the ultimate elimination of work — an “Athens without the slaves” — and a possible boom for less vocational-style education. Mr. Brant’s hope is that such disruption might lead to a system where individuals are paid a citizen stipend and be free for education and self-realization.

“This optimistic scenario I call Homo Ludens, or ‘Man, the Player,’ because maybe we will not be the smartest thing on the planet after all,” he said. “Maybe our destiny is to create the smartest thing on the planet and use it to follow a course of self-actualization.”

View Map + Bookmark Entry

"The Web at 25 in the U.S." by the Pew Research Internet Project February 27, 2014

Coinciding with the 25th anniversary of Tim Berners-Lee's initial conception of the World Wide Web in March 1989, on February 27, 2014 the Pew Research Internet Project of Washington, D.C. released its report The Web at 25 in the U.S.

Here are some of the general conclusions drawn in the report:

90 percent of Americans think that the Internet has been a good thing for them personally.

$75,000 is the income level at which Internet usage becomes almost ubiquitous. A full 99 percent of Americans who report this level of household income are on the Web.

28 percent of landline telephone owners would find it “very hard” to give up their phones. That is a big drop from 2006, when 48 percent of landline owners struggled with the idea of giving up their landline phones.

11 percent represents the gap between those who would find it “very hard” to give up the Internet (46 percent) and television (35 percent).

58 percent of Americans own a smartphone.

3-to-1: The ratio of Internet users who think that social media strengthens their relationships versus those who think it weakens them.

76 percent of Internet users say the people they witness or encounter online are “mostly kind” to each other.

View Map + Bookmark Entry

The GDELT Project: The Largest Open-Access Database on Worldwide News Media May 29, 2014

On May 29, 2014 Kalev H. Leetaru announced in the Google Cloud Platform Blog that the entire quarter-billion-record GDELT Event Database (Global Data on Events, Location and Tone) was available as a public dataset in Google BigQuery. The database contained records beginning in 1979. It monitored worldwide news media in over 100 languages.

He wrote:

"BigQuery is Google’s powerful cloud-based analytical database service, designed for the largest datasets on the planet. It allows users to run fast, SQL-like queries against multi-terabyte datasets in seconds. Scalable and easy to use, BigQuery gives you real-time insights about your data. With the availability of GDELT in BigQuery, you can now access realtime insights about global human society and the planet itself!

"You can take it for a spin here. (If it's your first time, you'll have to sign-up to create a Google project, but no credit card or commitment is needed).

"The GDELT Project pushes the boundaries of “big data,” weighing in at over a quarter-billion rows with 59 fields for each record, spanning the geography of the entire planet, and covering a time horizon of more than 35 years. The GDELT Project is the largest open-access database on human society in existence. Its archives contain nearly 400M latitude/longitude geographic coordinates spanning over 12,900 days, making it one of the largest open-access spatio-temporal datasets as well.

"From the very beginning, one of the greatest challenges in working with GDELT has been in how to interact with a dataset of this magnitude. Few traditional relational database servers offer realtime querying or analytics on data of this complexity, and even simple queries would normally require enormous attention to data access patterns and intricate multi-column indexing to make them possible. Traditional database servers require the creation of indexes over the most-accessed columns to speed queries, meaning one has to anticipate apriori how users are going to interact with a dataset. 

"One of the things we’ve learned from working with GDELT users is just how differently each of you needs to query and analyze GDELT. The sheer variety of access patterns and the number of permutations of fields that are collected together into queries makes the traditional model of creating a small set of indexes impossible. One of the most exciting aspects of having GDELT available in BigQuery is that it doesn’t have the concept of creating explicit indexes over specific columns – instead you can bring together any ad-hoc combination of columns and query complexity and it still returns in just a few seconds. This means that no matter how you access GDELT, what columns you look across, what kinds of operators you use, or the complexity of your query, you will still see results pretty much in near-realtime. 

"For us, the most groundbreaking part of having GDELT in BigQuery is that it opens the door not only to fast complex querying and extracting of data, but also allows for the first time real-world analyses to be run entirely in the database. Imagine computing the most significant conflict interaction in the world by month over the past 35 years, or performing cross-tabbed correlation over different classes of relationships between a set of countries. Such queries can be run entirely inside of BigQuery and return in just a handful of seconds. This enables you to try out “what if” hypotheses on global-scale trends in near-real time.

"On the technical side, BigQuery is completely turnkey: you just hand it your data and start querying that data – that’s all there is to it. While you could spin up a whole cluster of virtual machines somewhere in the cloud to run your own distributed clustered database service, you would end up spending a good deal of your time being a systems administrator to keep the cluster working and it wouldn’t support BigQuery’s unique capabilities. BigQuery eliminates all of this so all you have to do is focus on using your data, not spending your days running computer servers. 

"We automatically update the public dataset copy of GDELT in BigQuery every morning by 5AM ET, so you don’t even have to worry about updates – the BigQuery copy always has the latest global events. In a few weeks when GDELT unveils its move from daily updates to updating every 15 minutes, we’ll be taking advantage of BigQuery’s new stream updating capability to ensure the data reflects the state of the world moment-by-moment.

"Check out the GDELT blog for future posts where we will showcase how to harness some of BigQuery’s power to perform some pretty incredible analyses, all of them running entirely in the database system itself. For example, we’re particularly excited about the ability to use features like BigQuery’s new Pearson correlation support to be able to search for patterns across the entire quarter-billion-record dataset in just seconds. And we can’t wait to see what you do with it. . . ." 

Regarding GDELT, in April 2013 Leetaru and the project's co-developer, Philip A. Schrodt, presented an illustrated paper at the International Studies Association meetings held in San Francisco: GDELT: Global Data on Events, Location and Tone, 1979-2012.
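
For readers who would like to try the public dataset themselves, here is a minimal sketch of running an ad-hoc aggregate over the GDELT event table from Python with Google's google-cloud-bigquery client library. The table name `gdelt-bq.full.events` and the column names Year, Actor1CountryCode and AvgTone are assumptions based on the published GDELT schema, and the query itself is illustrative rather than drawn from Leetaru's announcement; verify the current schema, and expect to need a Google Cloud project with the BigQuery API enabled.

```python
# Minimal sketch (assumptions noted below) of querying the public GDELT event
# table in BigQuery from Python. Requires the google-cloud-bigquery package and
# default Google Cloud credentials with BigQuery access. The table
# `gdelt-bq.full.events` and the columns Year, Actor1CountryCode and AvgTone
# are assumed from the published GDELT schema; check them before relying on results.
from google.cloud import bigquery

QUERY = """
SELECT
  Year,
  Actor1CountryCode,
  COUNT(*) AS events,
  AVG(AvgTone) AS mean_tone
FROM `gdelt-bq.full.events`
WHERE Actor1CountryCode IS NOT NULL
GROUP BY Year, Actor1CountryCode
ORDER BY Year, events DESC
LIMIT 50
"""

def main() -> None:
    client = bigquery.Client()                    # uses your default project and credentials
    for row in client.query(QUERY).result():      # runs the ad-hoc aggregate server-side
        print(row.Year, row.Actor1CountryCode, row.events, round(row.mean_tone, 2))

if __name__ == "__main__":
    main()
```

Because BigQuery keeps no user-defined indexes, the same pattern works for any combination of columns, which is exactly the property Leetaru highlights above for exploring a quarter-billion-row dataset interactively.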

View Map + Bookmark Entry

Sotheby's Officially Teams with eBay for Online Auctions July 14, 2014

On July 14, 2014 The New York Times published an article entitled "A Warhol With Your Moosehead? Sotheby's Teams with eBay" by Carol Vogel and Mike Isaac, from which I quote:

"Convinced that consumers are finally ready to shop online for Picassos and choice Persian rugs in addition to car parts and Pez dispensers, Sotheby’s, the blue-chip auction house, and eBay, the Internet shopping giant, plan to announce Monday that they have formed a partnership to stream Sotheby’s sales worldwide.

"Starting this fall, most of Sotheby’s New York auctions will be broadcast live on a new section of eBay’s website. Eventually the auction house expects to extend the partnership, adding online-only sales and streamed auctions taking place anywhere from Hong Kong to Paris to London. The pairing would upend the rarefied world of art and antiques, giving eBay’s 145 million customers instant bidding access to a vast array of what Sotheby’s sells, from fine wines to watercolors by Cézanne.

"This isn’t the first time the two companies have teamed up; a 2002 collaboration fizzled after only a year. But officials say the market has matured in recent years, making the moment right for a new collaboration.

"The announcement comes just months after the activist shareholder Daniel S. Loeb criticized Sotheby’s for its antiquated business practices, likening the company to “an old painting in desperate need of restoration” and calling for directors there to beef up its online sales strategy. It also signals a new phase in Sotheby’s age-old rivalry with Christie’s. After years of running neck and neck, Sotheby’s has recently been losing business to its main competitor — and Christie’s is planning its own bold move to capture more online business, a $50 million investment that will include more Internet-only auctions and a redesigning of its website scheduled for October.

"Online auctions are not new to either auction house. Registered bidders can compete in certain sales in real time with the click of a mouse. What is new is the way Sotheby’s is trying to reach beyond its traditional customers to an enormous affluent global audience for whom online buying has become second nature. Luxury shopping websites like Gilt and 1st Dibs, with their broad mix of décor, designer fashion and antiques, have shown that shoppers are willing to spend many thousands of dollars on everything from handbags to sconces without inspecting them in person. And while the auction houses are seeing their online bidding grow — Sotheby’s, for example, says its sales on its website increased 36 percent in 2013 over the previous year — they believe the full potential of online sales has yet to be tapped."

View Map + Bookmark Entry

Facebook's "News Feed" Drives Up to 20% of Traffic to News Sites October 26, 2014

On October 26, 2014 Ravi Somaiya published an article in The New York Times entitled "How Facebook is Changing the Way its Users Consume Journalism."  A caption to an image in the article stated that "30 percent of adults in America get news from the social network." From the article I quote the first third:

"MENLO PARK, Calif. — Many of the people who read this article will do so because Greg Marra, 26, a Facebook engineer, calculated that it was the kind of thing they might enjoy.

"Mr. Marra’s team designs the code that drives Facebook’s News Feed — the stream of updates, photographs, videos and stories that users see. He is also fast becoming one of the most influential people in the news business.

"Facebook now has a fifth of the world — about 1.3 billion people — logging on at least monthly. It drives up to 20 percent of traffic to news sites, according to figures from the analytics company SimpleReach. On mobile devices, the fastest-growing source of readers, the percentage is even higher, SimpleReach says, and continues to increase.

View Map + Bookmark Entry

Three Breakthroughs that Finally Unleashed AI on the World October 27, 2014

In "The Three Breakthroughs That Have Finally Unleased AI on the World", Wired Magazine, October 27, 2014, writer Kevin Kelly of Pacifica, California explained how breakthroughs in cheap parallel computation, big data, and better algorithms were enabling new AI-based services that were previously the domain of sci-fi and academic white papers. Within the near future AI would play greater and greater roles in aspects of everyday life, in products like Watson developed by IBM, and products from Google, Facebook and other companies. More significant than these observations were Kelly's views about the impact that these developments would have on our lives and how we may understand the difference between machine and human intelligence:

"If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers. Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists.

"In fact, this won't really be intelligence, at least not as we've come to think of it. Indeed, intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.

"What we want instead of intelligence is artificial smartness. Unlike general intelligence, smartness is focused, measurable, specific. It also can think in ways completely different from human cognition. A cute example of this nonhuman thinking is a cool stunt that was performed at the South by Southwest festival in Austin, Texas, in March of this year. IBM researchers overlaid Watson with a culinary database comprising online recipes, USDA nutritional facts, and flavor research on what makes compounds taste pleasant. From this pile of data, Watson dreamed up novel dishes based on flavor profiles and patterns from existing dishes, and willing human chefs cooked them. One crowd favorite generated from Watson's mind was a tasty version of fish and chips using ceviche and fried plantains. For lunch at the IBM labs in Yorktown Heights I slurped down that one and another tasty Watson invention: Swiss/Thai asparagus quiche. Not bad! It's unlikely that either one would ever have occurred to humans.

"Nonhuman intelligence is not a bug, it's a feature. The chief virtue of AIs will be their alien intelligence. An AI will think about food differently than any chef, allowing us to think about food differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science and art. The alienness of artificial intelligence will become more valuable to us than its speed or power. . . .

View Map + Bookmark Entry

The First "Professional" Film Festival Film Shot on an iPhone January 2015

"So how do you make a Sundance movie for iPhone? You need four things. First, of course, the iPhone (Baker and his team used three). Second, an $8 app called Filmic Pro that allowed the filmmakers fine-grained control over the focus, aperture, and color temperature. Third, a Steadicam. 'These phones, because they’re so light, and they’re so small, a human hand — no matter how stable you are — it will shake. And it won’t look good,' says Baker. 'So you needed the Steadicam rig to stabilize it.'

"The final ingredient was a set of anamorphic adapter lenses that attach to the iPhone. The lenses were prototypes from Moondog Labs, and Baker said they were essential to making Tangerine look like it belonged on a big screen. 'To tell you the truth, I wouldn’t have even made the movie without it,' Baker says. 'It truly elevated it to a cinematic level.'

"Like any conventional film,Tangerine underwent post-production. 'With a lot of these social realist films, the first thing you do is drain the color,' Baker says." 

View Map + Bookmark Entry

The Role of Technology in the Increased Violence Against Journalists January 9, 2015

On January 9, 2015 The Los Angeles Times published an op-ed piece by Joel Simon, executive director of the Committee to Protect Journalists, entitled "Technology's role in the increased violence against journalists." I quote it in full; as is often the case, the links are my additions:

"The murderous attack on the office of the satirical weekly Charlie Hebdo in Paris last week can be seen in the context of modern French society: its challenges assimilating immigrants, its ongoing efforts to preserve its liberal and secular political culture, and even its national affinity for a kind of scathing and irreverent cartooning rooted in a deep distrust of institutions.

"But the attack has a global dimension as it also can be seen as the latest skirmish in a war over freedom of expression. This war has led to a record number of journalists being killed and imprisoned around the world. The last three years have been the most deadly and dangerous ever documented by the Committee to Protect Journalists, which has been keeping detailed data since 1992.

"The advent of the Internet has completely transformed the way news is gathered and disseminated to the global audience. This new system has tremendous positive advantages, allowing news to flow more easily across borders and making it more difficult for repressive governments to censor and control it. But there are also profound implications for the safety of journalists on the front lines of these information battles.

"One way to think about the change is to consider that not that long ago journalists venturing into conflict zones often chose to identify themselves, painting the word “press” on their cars or flak jackets. Journalists were safer because they collectively exercised an information monopoly and this made them useful to the warring parties who needed the media to communicate with the world.

"But today many violent groups, such as the Islamic State militants in Syria and the drug cartels in Mexico, rely on the Internet and social media to achieve the same ends and, of course, they are better able to control the message. Journalists are seen as dispensable — more useful as hostages or props in elaborately staged execution videos. In this context, identifying yourself as a journalist makes you a target.

"Moreover, because of new enabling technology and cutbacks in the news industry, a growing portion of frontline news gathering today is accomplished by local journalists and freelancers, who inform their own countries and the world. These journalists are more vulnerable because they often work without institutional support.

"In fact, the vast majority of journalists killed around the world do not die covering combat. They are deliberately targeted in their own countries because of the stories they cover or the ideas they express. In this context, the attack on Charlie Hebdo is typical of the risk that journalists face everywhere. What made it shocking was that it took place not in Mexico or Pakistan, but in France.

"Technology has also changed the global media environment by opening every corner of the world to myriad ideas and information. This too has its consequences.

"In 1948, when the Universal Declaration of Human Rights was adopted by the United Nations, it guaranteed the right to seek and receive information “regardless of frontiers.” That phrase — regardless of frontiers — is unique in international human rights law because it makes freedom of expression explicitly transnational. When the language was ratified, the concept was purely notional. Today, the Internet has made it real.

"In other words, the Internet has brought liberal, Western ideas of freedom of expression into direct conflict with 19th century notions of sovereignty and more traditional societies that place enormous value on personal honor and the sanctity of religious symbols.

"For example, the Chinese leadership, while embracing connectivity for its citizens, views the Internet as a Trojan horse that can be used to channel dangerous ideas from outside the country, ideas that erode the power of the Communist Party. Turkey's President Recep Tayyip Erdogan recently told a delegation from the Committee to Protect Journalists that he “is increasingly against the Internet every day.” Russia also has been cracking down on online speech. All these governments distrust the Internet and are seeking to exercise greater control over electronic communication within their borders.

"Leaders of some Muslim countries have a different but related argument. They are deeply concerned by images they deem to be blasphemous or shocking to religious sensibilities, and which are being imposed on them by a global information system that serves the interests of Western governments and international technology companies.

"The attack on Charlie Hebdo responds to this dynamic. While the magazine has sought to shock and offend in a French context, its cartoons traveled around the world, angering religious Muslims in many more conservative societies and providing a rallying cry for Al Qaeda, which put the paper's editors at the top of its hit list.

"One can acknowledge the anger and upset of those who see their fundamental religious beliefs mocked while also affirming that we must redouble our efforts to defend freedom of expression around the world. Freedom of expression is not only a fundamental human right; in the Internet era, information is a shared global resource that must be available equally to all.

"A global battle for freedom of expression is upon us, and the casualties are mounting. The attack on the journalists of Charlie Hebdo shows us there is no safe haven."

View Map + Bookmark Entry