4406 entries. 94 themes. Last updated December 26, 2016.

# Imaging / Photography / Computer Vision Timeline

## Foundation of Experimental Physics, Optics, and the Science of Vision 1011 – 1021

Under house arrest in Cairo, Egypt, between 1011 and 1021, the Iraqi Muslim scientist Ibn al-Haytham (Latinized as Alhacen or Alhazen) wrote The Book of Optics (Arabic: Kitab al-Manazir‎; Latin: De aspectibus or Opticae Thesaurus: Alhazeni Arabis), a seven-volume treatise on optics, physics, mathematics, anatomy and psychology.

"The book had an important influence on the development of optics, as it laid the foundations for modern physical optics after drastically transforming the way in which light and vision had been understood, and on science in general with its introduction of the experimental scientific method. Ibn al-Haytham has been called the "father of modern optics", the 'pioneer of the modern scientific method,' and the founder of experimental physics, and for these reasons he has been described as the 'first scientist.'

"The Book of Optics has been ranked alongside Isaac Newton's Philosophiae Naturalis Principia Mathematica as one of the most influential books in the history of physics, as it is widely considered to have initiated a revolution in the fields of optics and visual perception. It established experimentation as the norm of proof in optics, and gave optics a physico-mathematical conception at a much earlier date than the other mathematical disciplines of astronomy and mechanics.

"The Book of Optics also contains the earliest discussions and descriptions of the psychology of visual perception and optical illusions, as well as experimental psychology, and the first accurate descriptions of the camera obscura, a precursor to the modern camera. In medicine and ophthalmology, the book also made important advances in eye surgery, as it correctly explained the process of sight for the first time" (Wikipedia article on Book of Optics, accessed 04-23-2009).

Translated into Latin by an unknown scholar at the end of the 12th century or the beginning of the 13th, Alhazen's Book of Optics enjoyed a great reputation and circulated in manuscript copies among the few who could understand it during the Middle Ages. It was first edited for print publication by the German mathematician Friedrich Risner and issued as Opticae thesaurus . . . libri septem, nunc primum editi . . . item Vitellonis Thuringopoloni libri X in Basel by Episcopus in 1572.

Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 1027.

## Construction of the First Camera Obscura 1012 – 1021


Abū ʿAlī al-Ḥasan ibn al-Ḥasan ibn al-Haytham (أبو علي، الحسن بن الحسن بن الهيثم‎), frequently referred to as Ibn al-Haytham (Arabic: ابن الهيثم) and known in the West as Alhazen, built the first camera obscura or pinhole camera—an invention significant in the history of optics, photography, and art.

In his Book of Optics, written in Cairo between 1012 and 1021, Ibn al-Haytham used the term "Al-Bayt al-Muthlim," translated into English as "dark room."

"In the experiment he undertook, in order to establish that light travels in time and with speed, he says: 'If the hole was covered with a curtain and the curtain was taken off, the light traveling from the hole to the opposite wall will consume time.' He reiterated the same experience when he established that light travels in straight lines. A revealing experiment introduced the camera obscura in studies of the half-moon shape of the sun's image during eclipses which he observed on the wall opposite a small hole made in the window shutters. In his famous essay 'On the form of the Eclipse' (Maqalah-fi-Surat-al-Kosuf) he commented on his observation 'The image of the sun at the time of the eclipse, unless it is total, demonstrates that when its light passes through a narrow, round hole and is cast on a plane opposite to the hole it takes on the form of a moon-sickle'.

"In his experiment of the sun light he extended his observation of the penetration of light through the pinhole to conclude that when the sun light reaches and penetrates the hole it makes a conic shape at the points meeting at the pinhole, forming later another conic shape reverse to the first one on the opposite wall in the dark room. This happens when sun light diverges from point “ﺍ” until it reaches an aperture and is projected through it onto a screen at the luminous spot. Since the distance between the aperture and the screen is insignificant in comparison to the distance between the aperture and the sun, the divergence of sunlight after going through the aperture should be insignificant. In other words, should be about equal to. However, it is observed to be much greater when the paths of the rays which form the extremities of are retraced in the reverse direction, it is found that they meet at a point outside the aperture and then diverge again toward the sun as illustrated in figure 1. This an early accurate description of the Camera Obscura phenomenon."

"In 13th-century England Roger Bacon described the use of a camera obscura for the safe observation of solar eclipses. Its potential as a drawing aid may have been familiar to artists by as early as the 15th century; Leonardo da Vinci (1452-1519 AD) described camera obscura in Codex Atlanticus. . . .

"The Dutch Masters, such as Johannes Vermeer, who were hired as painters in the 17th century, were known for their magnificent attention to detail. It has been widely speculated that they made use of such a camera, but the extent of their use by artists at this period remains a matter of considerable controversy, recently revived by the Hockney-Falco thesis. The term "camera obscura" was first used by the German astronomer Johannes Kepler in 1604.

"Early models were large; comprising either a whole darkened room or a tent (as employed by Johannes Kepler). By the 18th century, following developments by Robert Boyle and Robert Hooke, more easily portable models became available. These were extensively used by amateur artists while on their travels, but they were also employed by professionals, including Paul Sandby, Canaletto and Joshua Reynolds, whose camera (disguised as a book) is now in the Science Museum (London). Such cameras were later adapted by Louis Daguerre and William Fox Talbot for creating the first photographs" (Wikipedia article on Camera obscura, accessed 04-24-2009).

## The Codex Selden/Codex Añute, a Precolonial Mexican Palimpsest Circa 1560

The Codex Selden, also called the Codex Añute, a Mixtec screenfold manuscript preserved in the Bodleian Library, Oxford, was acquired by the Bodleian in the 17th century from the estate of jurist, legal antiquary and orientalist John Selden. It is one of fewer than twenty precolonial Mesoamerican codices that survived the conquest of the Americas, containing information on the history of ancient cities, prescriptions on rituals and calendrical divination. Of those codices, the Codex Selden/Añute is the only palimpsest, as its currently viewable content was written on a white paint layer that covers an earlier pictographic document.

In 2013-2014 the Bodleian's Ancient Mexican Manuscripts project undertook the recovery of these hidden pictorial texts. Results were expected to be published in the summer of 2016:

"The use of exclusively organic paints to create these images presented a unique set of challenges necessitating the development of a new imaging technique. During the present intervention this new technique called Photothermal Tomography is combined with a number of other techniques such as high-resolution photography, infrared photography, and RTI imaging to gain a better insight into this important palimpsest"( http://www.bodleian.ox.ac.uk/whats-on/upcoming-events/2015/mar/precolonial-mexican-manuscript, accessed 03-18-2015).

In August 2016 the Oxford Mail reported the following:

" "After four or five years of trying different techniques, we’ve been able to reveal an abundance of images without damaging this extremely vulnerable item,' said Ludo Snijders from Leiden University, who conducted the research with David Howell from the Bodleian Libraries and Tim Zaman from the University of Delft.,,,

"Mr Snijders said: 'What’s interesting is that the text we’ve found doesn’t match that of other early Mixtec manuscripts. The genealogy we see appears to be unique, which means it may prove invaluable for the interpretation of archaeological remains from southern Mexico.'

"Some pages feature more than 20 characters sitting or standing in the same direction. Similar scenes have been found on other Mixtec manuscripts, representing a King and his council.

"The researchers analysed seven pages of the codex for this study and revealed other images including people walking with sticks and spears, women with red hair or headdresses and place signs containing the glyphs for rivers.

"The paints used to crate the vibrant images are organic and do not absorb X-rays, meaning traditional methods could not be used in trying to get a glimpse of the codex's fascinating stories.

"Working with the humanities division in the University of Oxford, the Bodleian acquired a hyperspectral scanner in 2014 with the support of the university’s Fell Fund – and the equipment was able to unmask the past.

"David Howell, head of heritage science at the Bodleian Libraries, said: 'This is very much a new technique, and we’ve learned valuable lessons about how to use hyperspectral imaging in the future both for this very fragile manuscript and for countless others like it.' " (http://www.oxfordmail.co.uk/news/14701472.Bodleian_boffins_uncover_images_of_rare_Mexican_manuscript_hidden_for_almost_500_years/, accessed 09-03-2016).

Researchers are continuing to analyse the remainder of the document with the aim of reconstructing the entire hidden imagery, allowing the text to be interpreted more fully.

The Codex Selden/Añute was first published by Edward King, Viscount Kingsborough, in his ten-volume series Antiquities of Mexico (1831-1848).

Regarding the history of the codex I quote from John Pohl's Mesoamerica:

"John Selden died in 1654 but the last date associated with the genealogy in the manuscript is the Mixtec year 11 Flint which corresponds to A.D. 1556. A date on the cover of the manuscript (2 Flint) may correspond to 1560 (M.E. Smith 1994:122-123). How the codex got from the Mixteca-Alta, Oaxaca, into the hands of Selden remains a mystery. Smith thinks that Codex Selden was composed by the community of Jaltepec, located in the southern Nochixtlán Valley for presentation to Spanish and Indian authorities with regard to a dispute over a subject town.

"The town in question was called Zahuatlán and it is represented in the codex as a hill sign qualified by a man dancing - to signify Zahuatlán’s Mixtec name "yucu nicata" or "Hill that Danced". Both Jaltepec and Yanhuitlán, a principal rival in the northern Nochixtlán Valley, claimed the town. Lords and Ladies of Zahuatlán appear in the codex either paying homage, intermarrying, or being subjugated by Jaltepec. Since the painting of the codex was assuredly commissioned by Jaltepec, a better name for the manuscript is Codex Añute, Jaltepec’s Mixtec name."

(This entry was last revised on 09-02-2016).

## Hans Lippershey Invents the Telescope 1608

In 1608 Hans Lippershey, a German-Dutch lensmaker of Middelburg, Netherlands, created and disseminated designs for the first practical telescope.

"Crude telescopes and spyglasses may have been created much earlier, but Lippershey is believed to be the first to apply for a patent for his design (beating Jacob Metius by a few weeks), and making it available for general use in 1608. He failed to receive a patent but was handsomely rewarded by the Dutch government for copies of his design. The 'Dutch perspective glass', the telescope that Lippershey invented, could only magnify thrice.

"The first known mention of Lippershey's application for a patent for his invention appeared at the end of a diplomatic report on an embassy to Holland from the Kingdom of Siam sent by the Siamese king Ekathotsarot: Ambassades du Roy de Siam envoyé à l'Excellence du Prince Maurice, arrive a La Haye, le 10. septembr. 1608 ('Embassy of the King of Siam sent to his Excellence Prince Maurice, September 10, 1608'). The diplomatic report was soon distributed across Europe, leading to the experiments by other scientists such as the Italian Paolo Sarpi, who received the report in November, or the English Thomas Harriot in 1609, and Galileo Galilei who soon improved the device.

"One story behind the creation of the telescope states that two children were playing with lenses in his shop. The children discovered that images were clearer when seen through two lenses, one in front of the other. Lippershey was inspired by this and created a device very similar to today's telescope" (Wikipedia article on Hans Lippershey, accessed 03-27-2009).

While Sarpi and Harriot experimented with Lippershey's telescope prior to, or contemporaneously with, Galileo, neither wrote nor published on the subject.

(This entry was last revised on April 14, 2014.)

## Galileo Issues Images of Revolutionary Discoveries Concerning the Universe; and the Story of a Remarkable Forgery November 1609 – March 13, 1610

After learning in 1609 that a Dutchman, Hans Lippershey, had invented an instrument that made faraway objects appear closer, Italian scientist Galileo Galilei, a resident of Padua, applied himself to discovering the principle behind this instrument. By late 1609 he had built a telescope of about thirty power. This he probably first turned to the heavens in November or December 1609, with astonishing and revolutionary results. In contradiction to the doctrines of Aristotle and Ptolemy, which taught that the celestial sphere and its planets and stars were perfect and unchanging, Galileo's telescope showed that the surface of the moon was rough and mountainous, and the Milky Way was composed of thickly clustered stars. In November or December 1609 Galileo painted six watercolors on a notebook page showing the phases of the moon, as he observed them through the telescope. These images, on a sheet preserved in Florence, at the Biblioteca Nazionale Centrale (Ms. Gal. 48, f. 28r), were the first realistic images of the moon, and the first recorded images of bodies beyond the earth seen by man.

On the night of January 7, 1610 Galileo set up a telescope on his balcony in Padua. He spotted three stars near Jupiter, and noted their positions in a notebook. Six days later Galileo returned to his telescope and found the same stars, but by then their position had changed. At that point he realized that the three stars were moons orbiting Jupiter— proof that the universe of stars was not fixed, as postulated by Ptolemy's geocentric theory, and evidence for Copernicanism. Three months later Galileo's Sidereus Nuncius, or Starry Messenger, was published in Venice in an edition of 550 copies. The Sidereus Nuncius described and illustrated with copperplate engravings the first astronomical observations made through a telescope. Its images provided revolutionary new information about the universe. Though it contained only the bare facts of Galileo's observations without any overt reference to the Copernican theory, Sidereus Nuncius aroused a sensation among the European learned community, for it provided the first hard evidence that the Aristotelian-Ptolemaic view of the universe contained inaccuracies.

"He sent a copy of the book, along with the telescope he had been using, to the Grand Duke of Tuscany Cosimo II de’ Medici. Dr. [Owen] Gingerich said the pamphlet amounted to 'a job application' to the Medici family for whom, in one of history’s first examples of branding, Galileo named the four satellites of Jupiter. 'Other planets were gods or goddesses,' said Paolo Galluzzi, director of the Florence institute. 'The only humans with position in sky were Medicis.' The ploy worked, Cosimo II hired Galileo as his astronomer, elevating him from a poorly paid professor at the University of Padua to a celebrity, making the equivalent of $300,000, a year, Dr. Galluzzi said. Galileo returned the favor by giving Cosimo another telescope, clad in red leather and stamped with decorations" (Dennis Overbye, "A Telescope to the Past as Galileo Visits the U.S.", The New York Times, March 27, 2009.) It is thought that Galileo built dozens of telescopes, of which two survive, both in the Institute for the History of Science (Museo Galileo) in Florence, Italy. One covered in decorated leather, which Galileo sent to Grand Duke Cosimo II de' Medici, retains only one of its original lenses, but the other, covered only in varnished paper, contains its original functioning optics, and has its focal length labeled in Galileo's handwriting on the outside of its tube. This telescope was loaned to the Franklin Institute in Philadelphia for an exhibition from April to September 2009. (The online article in The New York Times included a video showing the original telescope being unpacked in Philadelphia.) ________ In June 2005 antiquarian bookseller Richard Lan (Martayan-Lan, Inc.) purchased a copy of the Sidereus nuncius from Marino Massimo De Caro and antiquarian bookseller Filippo Rotundo that was represented as a proof copy, signed by Galileo, originally from the library of Federico Cesi, founder of the Accademia dei Lincei. 
Instead of copperplate engraved illustrations as in other copies of the book, this copy contained watercolors of the phases of the moon similar to those which Galileo made at the end of 1609 and which are preserved in Florence. It was known that the Venetian printer had sent Galileo thirty copies with blank spaces indicating where etchings would be placed. Presumably this was one of those copies, in which Galileo had personally painted images for presentation to Federico Cesi, instead of having engravings printed in. The copy was examined by all the leading authorities, subjected to various tests, and was generally considered a unique proof copy. The Martayan Lan copy was included in the discussions in a symposium convened at the Library of Congress in November 2010 entitled "Galileo's Moons," intended to celebrate the 400th anniversary of the Sidereus Nuncius and the acquisition by the Library of Congress of an uncut copy of the first edition bound in the original limp paper boards. Papers presented at this symposium accepted the authenticity of the Martayan Lan copy. In 2011 De Gruyter published a rather grand 2-volume set, fully illustrated in color, based on research begun in 2007. Volume one, edited by Irene Brückle and Oliver Hahn, was entitled Galileo's Sidereus Nuncius. A comparison of the proof copy (New York) with other paradigmatic copies. Volume two, written by Paul Needham, was entitled Galileo Makes a Book. The First Edition of Sidereus Nuncius, Venice 1610. Regarding the significance of Needham's study, I quote from the review by G. Thomas Tanselle, Common Knowledge19, #3, (Fall 2013), 575-576: "Needham’s book is based on eighty-three other copies, and he draws as well on Galileo’s letters, drafts, and various external documents. 
The result is a detailed account of the early months of 1610, from January 15, when Galileo decided he must publish his discoveries, to March 13, when the printing was completed; an additional chapter discusses the book’s distribution and Galileo’s corrections in some copies. The task of bibliography, as stated by Needham, is to know “the materials and human actions that produced (in multiple copies) the structure of a printed book.” Systematically he takes up the paper, type, and format of Sidereus Nuncius and provides a quire-by-quire analysis of its production, making exemplary use of many techniques of bibliographical analysis, each patiently and clearly explained, with accompanying illustrations. The book could serve as an excellent introduction to this kind of work; but even more remarkably, it demonstrates how interconnected are the physical object and its intellectual content. The title sentence, “Galileo makes a book,” has a double meaning: not only did Galileo write the text, but he also attended to its physical production, making the presentation of the text integral to its meaning. Needham does not neglect Galileo’s writing itself: he calls Galileo “an artist with words,” whose “prose embodies not just close reasoning, but also life and emotion.” "This assessment applies equally to Needham’s own writing, which combines rigorous but readable technical analysis with an awareness of the human side of that work and the story it reveals. This combination recalls an earlier bibliographical classic, Allan Stevenson’s The Problem of the Missale Speciale (1967), another full-length treatment of a single book. 
Even the sense of humor displayed by Stevenson has its counterpart here: when, for example, Needham explains two hypotheses as to when the printing of Galileo’s book began, he calls the one that postulates a later date “the dilatory view.” At the end Needham praises the many nameless actors, such as papermakers and printing-shop workers, who played roles in the story; and he closes with “the mules and oxen whose humble labor moved sheets of Sidereus Nuncius across the face of Europe, under the eyes of the boundless sky.” This passage, occurring in a work of bibliographical analysis, epitomizes the work’s unusual accomplishment: it breaks new ground in the study of a major book, sets forth its discoveries in an engaging narrative, and in the process shows how bibliography can be essential to intellectual history." Until early 2012 Richard Lan was privately offering the copy for sale for$10,000,000. Then Nick Wilding, an historian of science at Georgia State University who had been asked to review the 2-volume set mentioned above, presented concrete proof that the Martayan-Lan copy was a forgery:

• The book bears a library stamp of Federico Cesi, founder of the Accademia dei Lincei, but the stamp in the Martayan Lan copy does not match those in other books bearing Cesi's stamp.
• The title page differed from that of genuine copies, but bore similarities to a 1964 facsimile and an unsold Sotheby's auction copy.
• There was no record of the Sidereus Nuncius in the original library from which this copy was thought to come.

Slowly the thread of fabrication began to unravel. Discovery of the forgery coincided with the exposure of massive thefts of rare books from the Girolamini Library in Naples, for which Marino Massimo De Caro and others were eventually convicted. In 2013 the Library of Congress and Levenger Press issued Galileo Galilei, The Starry Messenger, Venice, 1610: From Doubt to Astonishment. This volume contained a facsimile edition of the Library of Congress copy, an English translation, and the text of the papers delivered at the November 2010 symposium. However, as the editor of the volume noted, Paul Needham revised his paper (now retitled "Authenticity and Facsimile: Galileo's Paper Trail") in light of his later acceptance that the Martayan Lan copy was a forgery. On December 16, 2013 The New Yorker magazine published a detailed background article on the forgery and how it was accomplished, by Nicholas Schmidle: "A Very Rare Book: The mystery surrounding a copy of Galileo's pivotal treatise." While the article filled in many blanks concerning the Sidereus Nuncius forgery, it raised other questions concerning other unknown thefts and forgeries by Marino Massimo De Caro and his associates.

In February 2014 De Gruyter issued an originally unintended volume three of their 2011 two-volume set, entitled A Galileo Forgery: Unmasking the New York Sidereus Nuncius, edited by Horst Bredekamp, Irene Brückle, and Paul Needham. When I last revised this entry in August 2014 the full text of the volume was available as an Open Access PDF at no charge. This was the most comprehensive account and proof of the forgery. In many ways it was the most remarkable and admirable volume of the set, in which the scholars recounted how the forgery was discovered, drew their final conclusions proving the forgery, and explained how they had been deceived in the first place.

Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 855.

(This entry was last revised on 04-04-2015.)

## Robert Hooke's Graphic Portrayal of the Hitherto Unknown Microcosm 1665

In 1665 Robert Hooke published Micrographia: Or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses in London. This was the first book devoted entirely to microscopical observations, and also the first book to pair its microscopic descriptions with profuse and detailed illustrations. This graphic portrayal of the hitherto unknown microcosm had an impact rivalling that of Galileo's Sidereus nuncius (1610), which was the first book to include images of the macrocosm shown through the telescope. It was also the second book published under the auspices of the Royal Society of London.

Hooke began his observations with studies of non-living materials, such as woven cloth and frozen urine crystals, then proceeded to investigations of plant and animal life.  He published the first studies of insect anatomy, giving a lucid account of the compound eye of the fly, and illustrating the microscopic details of such structures as apian wings, flies' legs and feet, and the sting of the bee.  His famous and dramatic portraits of the flea and louse, a frightening eighteen inches long, are hardly less startling today than they must have been to Hooke's contemporaries.  His botanical observations include the first description of the plant-like form of molds, and of the honeycomb-like structure of cork, which last he described as being composed of "cellulae"— thereby coining the modern biological usage of the word "cell" to describe the basic microscopic units of tissue.

In January 2014 a digital facsimile of the first edition of Hooke's Micrographia was available on the National Library of Medicine's website.

Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 1092.

## Georg Christoph Lichtenberg Describes "Lichtenberg Figures" 1777

German scientist, satirist and Anglophile Georg Christoph Lichtenberg discovered Lichtenberg figures, and described them in his memoir "Super nova methodo motum ac naturam fluidi electrici investigandi," Göttinger Novi Commentarii, Göttingen, 1777.

"In 1777, Lichtenberg built a large electrophorus to generate high voltage static electricity through induction. After discharging a high voltage point to the surface of an insulator, he recorded the resulting radial patterns in fixed dust. By then pressing blank sheets of paper onto these patterns, Lichtenberg was able to transfer and record these images, thereby discovering the basic principle of modern Xerography. This discovery was also the forerunner of modern day plasma physics. Although Lichtenberg only studied 2-dimensional (2D) figures, modern high voltage researchers study 2D and 3D figures (electrical trees) on, and within, insulating materials. Lichtenberg figures are now known to be examples of fractals" (Wikipedia article on Lichtenberg figures, accessed 06-11-2010).

## The Earliest Surviving Photograph: A Process that Never "Caught On" 1826 – 1827

In 1826 or 1827 French inventor Nicéphore Niépce created View from the Window at Le Gras, the oldest surviving photograph, using the process of heliography that he had invented around 1822. The photograph shows parts of the buildings and surrounding countryside of his estate, Le Gras, in Saint-Loup-de-Varennes, as seen from a high window. The exposure is thought to have required from eight hours to several days.

"Niépce captured the scene with a camera obscura focused onto a 16.2 cm × 20.2 cm (6.4 in × 8.0 in) pewter plate coated with Bitumen of Judea, a naturally occurring asphalt. The bitumen hardened in the brightly lit areas, but in the dimly lit areas it remained soluble and could be washed away with a mixture of oil of lavender and white petroleum. A very long exposure in the camera was required. Sunlight strikes the buildings on opposite sides, suggesting an exposure that lasted about eight hours, which has become the traditional estimate. A researcher who studied Niépce's notes and recreated his processes found that the exposure must have continued for several days.

"In late 1827, Niépce visited England. He showed this and several other specimens of his work to botanical illustrator Francis Bauer, who encouraged him to present his "heliography" process to the Royal Society. Niépce was unwilling to reveal any specific practical details of his process, so the Royal Society declined his offer. Before returning to France, he gave Bauer the specimens and a draft of the remarks he had prepared to accompany his presentation. After Bauer's death in 1840, the specimens passed through several hands and were occasionally exhibited as historical curiosities. The View from the Window at Le Gras was last seen in 1905 and then fell into oblivion.

"Historian Helmut Gernsheim tracked down the photograph in 1952 and brought it to prominence, reinforcing the claim that Niépce is the inventor of photography. He had an expert at the Kodak Research Laboratory make a modern photographic copy, but it proved extremely difficult to produce an adequate representation of all that could be seen when inspecting the actual plate. Gernsheim heavily retouched one of the copy prints to clean it up and make the scene more comprehensible, and until the late 1970s he allowed only that enhanced version to be published. It was apparently at the time of the copying that the plate acquired disfiguring bumps near three of its corners, causing light to reflect in ways that interfere with the visibility of those areas and of the image as a whole.

"In 1963, Harry Ransom purchased most of Gernsheim's photography collection for The University of Texas at Austin, but the Niépce heliograph was not included in the sale. Shortly thereafter, Gernsheim donated it. Although it has rarely traveled since then, in 2012–2013 it visited Mannheim, Germany as part of an exhibition entitled The Birth of Photography—Highlights of the Helmut Gernsheim Collection. It is normally on display in the main lobby of the Harry Ransom Humanities Research Center in Austin, Texas " (Wikipedia article on View from the Window at Le Gras, accessed 10-24-2013).

Why then did Niépce's process never catch on? Why is the invention of photography typically credited to Louis Daguerre and William Henry Fox Talbot?  Clearly the extremely slow speed of developing the image had to be a factor. According to an email I received from historian of science William B. Ashworth, Jr. on March 7, 2014, there were other reasons:

"It is a convoluted, and sad, story.  Niépce travelled to England in 1827 to tend to his mentally ill brother, and he brought several heliographs with him.  He met people who were quite interested in his process, and he tried to make arrangements to give a demonstration to the Royal Society of London.  However, everything went wrong, and it really was no one's fault.  The Royal Society was practically dysfunctional at the time, as the president, Humphry Davy, was dying, and there was considerable scrambling to determine his successor.  John Herschel, who would be a photographic pioneer himself in the 1830s, was so disgusted with the Society that he resigned his position as secretary and refused to attend meetings.  The upshot was that the presentation never came to pass, and the people who would have been the most interested in Niépce’s demonstration, like Herschel, never met Niépce or saw his work.  Niépce returned home, his heliotypes still in his luggage, and although he lived until 1833, and collaborated at the end with Louis Daguerre, he gradually disappeared from public view.  When the Daguerrotype (a different type of photographic process) was first demonstrated to a revitalized Royal Society in 1839, Niépce's name was all but forgotten.  Niépce did all the right things, but he never reached the right people.  Had he made his trip to England a year earlier, or even a year later, he might have found a receptive audience, and the history of photography might have played out quite differently.  Life is like that, sometimes."

## Daguerreotypes: The First Commonly Used Photographic Process January 7 – August 19, 1839

On January 7, 1839 members of the Académie des Sciences first viewed examples of Daguerréotypes invented by the painter and printmaker Louis-Jacques-Mandé Daguerre.

On July 3, 1839 French mathematician, physicist, astronomer and politician François Jean Dominique Arago made the first brief scientific announcement and explanation of Daguerre's process to the Chambre des députés. This he repeated to the Académie des sciences on August 19. Arago's report was published in the Comptes rendus IX (1839) 250-67.

Later in 1839 Daguerre published in Paris his first account of the process in a pamphlet called Historique et description des procédés du Daguerréotype et du diorama. Daguerre's method of fixing an image on a metal plate became the first commonly used photographic process. It produced a single positive image on a highly polished silver-plated sheet of copper.

## The First Separate Publication on Photography January 31, 1839

Upon learning about the exhibition of Daguerréotypes at the Académie des Sciences on January 7, 1839, English inventor William Henry Fox Talbot hastily read a paper on January 31 to the Royal Society entitled Some Account of the Art of Photogenic Drawing, or the Process by which Natural Objects may be made to Delineate Themselves with the Aid of the Artist's Pencil.

This paper, which Talbot had printed and distributed to friends as a pamphlet in February 1839, was the first separate publication on photography. In it Talbot suggested that fixed negatives might be used to produce multiple positive images.

In 1835 Talbot had developed a method of fixing negative images on paper previously made light-sensitive by successive coats of sodium chloride and silver nitrate, thus becoming the first to produce permanent paper negatives.

Gernsheim, The History of Photography (1969) Ch. 7, Gernsheim, Incunabula of British Photographic Literature (1984) no. 646. Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 2049.

## Perhaps the First "Selfie" Photograph Circa October 1839

As of February 2014, a daguerreotype self-portrait taken by the American photography pioneer Robert Cornelius of Philadelphia was considered the first American photographic portrait of a human, and since it was a self-portrait, it was possibly also the first "selfie."

The daguerreotype is preserved in the Library of Congress, which produced this description:

"Daguerre announced his invention of a photographic method to the French Academy of Sciences in August 1839. That October, a young Philadelphian, Robert Cornelius, working out of doors to take advantage of the light, made this head-and-shoulders self-portrait using a box fitted with a lens from an opera glass. In the portrait, Cornelius stands slightly off-center with hair askew, in the yard behind his family's lamp and chandelier store, peering uncertainly into the camera. Early daguerreotypy required a long exposure time, ranging from three to fifteen minutes, making the process nearly impractical for portraiture. (Source: "Photographic Material," by Carol Johnson. In Gathering History: the Marian S. Carson Collection of Americana, 1999, p. 100)" (http://www.loc.gov/pictures/collection/dag/item/2004664436/, accessed 02-27-2014).

## The Basis for Blueprints 1842

In 1842 English mathematician, astronomer, chemist, and experimental photographer/inventor Sir John Herschel invented the cyanotype, a photographic process that resulted in a cyan-blue print.

"The photosensitive compound, a solution of ferric ammonium citrate and potassium ferricyanide, is coated onto paper. Areas of the compound exposed to strong light are converted to insoluble blue ferric ferrocyanide, or Prussian blue. The soluble chemicals are washed off with water leaving a light-stable print."

The process was used through the 20th century by architects and engineers for the production of blueprints.

## Christian Doppler States the Doppler Principle (Doppler Shift, Doppler Effect) 1842

In 1842 Austrian mathematician and physicist Christian Andreas Doppler, then professor at the Czech Technical University in Prague, published Über das farbige Licht der Doppelsterne und einige andere Gestirne des Himmels (On the Colored Light of the Binary Stars and Some Other Stars of the Heavens).

This was the first statement of the Doppler principle (Doppler shift, Doppler effect), which states that the observed frequency of a wave changes if either the observer or the source is moving. Doppler mentioned the application of this principle to both acoustics and optics, particularly to the colored appearance of double stars and the fluctuations of variable stars and novae; however, his reasoning in the optical arguments was flawed by his erroneous belief that all stars were basically white and emitted light only or mostly in the visible spectrum. Five years later the astronomer Hippolyte Fizeau would publish a paper announcing his independent discovery of the effect, noting the usefulness of observing spectral line shifts in its application to astronomy. This point was of such fundamental importance to Doppler's principle that it is sometimes called the Doppler-Fizeau principle. The acoustical Doppler effect was verified experimentally in 1845, and the optical effect in 1901. Modified by relativity theory, it became one of the major tools of astronomy. It also has numerous commercial applications beyond astronomy, such as in Doppler radar and in Doppler ultrasound imaging to evaluate blood flow.
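
The principle can be illustrated numerically. The sketch below uses the standard modern textbook form of the classical Doppler formula for sound (not Doppler's own 1842 notation); the siren frequency and speeds are illustrative values, not from the source.

```python
def doppler_frequency(f_source, v_observer=0.0, v_source=0.0, c=343.0):
    """Observed frequency for a wave of propagation speed c (default: sound in air, m/s).

    v_observer > 0 means the observer moves toward the source;
    v_source  > 0 means the source moves toward the observer.
    """
    return f_source * (c + v_observer) / (c - v_source)

# A 440 Hz siren approaching at 30 m/s is heard at a higher pitch...
print(round(doppler_frequency(440.0, v_source=30.0), 1))   # ≈ 482.2 Hz
# ...and at a lower pitch once it is receding:
print(round(doppler_frequency(440.0, v_source=-30.0), 1))  # ≈ 404.6 Hz
```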

## One of the Earliest Photographs of Books 1843 – 1844

William Henry Fox Talbot, one of the inventors of photography, photographed books in his library during 1843-1844. This was undoubtedly one of the earliest photographs of books. Fox Talbot later published this photograph in The Pencil of Nature.

"An exceptional student first at Harrow and later at Cambridge, Talbot was a man of great learning and broad interests. Mathematics, astronomy, physics, botany, chemistry, Egyptology, philology, and the classics were all within the scope of his investigative appetite. The Philosophical Magazine, Miscellanies of Science, Botanische Schriften, Manners and Customs of the Ancient Egyptians, Philological Essays, Poetae Minores Graeci, and Lanzi's Storia pittorica dell'Italia are among the volumes represented in this photograph—truly an intellectual self-portrait. The image appeared as plate 8 in The Pencil of Nature. Paradoxically, A Scene in a Library was taken out of doors, where the light was stronger" (http://www.metmuseum.org/toah/works-of-art/2005.100.172, accessed 10-25-2011).

## The First Book Illustrated with Photographs October 1843 – 1853

In October 1843 Anna Atkins, an English amateur botanist and the first woman photographer, published the first installment of Photographs of British Algae: Cyanotype Impressions. Atkins published this work privately with a handwritten text from her home in Sevenoaks, Kent, England. She issued a very small number of copies from cyanotypes contact printed by placing specimens directly onto coated paper, allowing the action of light to create a silhouette effect. Photographs of British Algae was the first book illustrated with photographs, and the first serious application of photography to a scientific subject. The paper Atkins used for the first volume contains a watermark reading "Whatman Turkey Mill 1843." Atkins extended the work into three volumes, with the last part appearing in 1853.

## The Basis for Computed Tomography 1917

In 1917 Austrian mathematician Johann Radon, professor at Technische Universität Wien, introduced the Radon transform. He also demonstrated that the image of a three-dimensional object can be constructed from an infinite number of two-dimensional images of the object.

More than fifty years later Radon's work was applied in the invention of computed tomography.
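
Radon's idea can be illustrated with a toy example. A CT scanner records line integrals (projections) of an object's density at many angles, and Radon's inversion formula recovers the density from those projections. The pure-Python sketch below, using a hypothetical 4×4 density grid, computes only the two simplest projections, at 0° and 90°; a real scanner records many intermediate angles as well.

```python
# A tiny "object": a 4x4 grid of densities with a dense block in the middle.
image = [
    [0, 0, 0, 0],
    [0, 5, 5, 0],
    [0, 5, 5, 0],
    [0, 0, 0, 0],
]

def projection_0deg(img):
    """Line integrals along rows (beam travelling left to right)."""
    return [sum(row) for row in img]

def projection_90deg(img):
    """Line integrals along columns (source and detector rotated 90 degrees)."""
    return [sum(col) for col in zip(*img)]

print(projection_0deg(image))   # [0, 10, 10, 0]
print(projection_90deg(image))  # [0, 10, 10, 0]
```

A full set of such projections over all angles is exactly the data from which Radon showed the original density function can be reconstructed.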

## The First Experimental Proof of General Relativity November 6, 1919

Among the experimental results predicted by Albert Einstein’s 1916 theory of general relativity was the bending of light by massive bodies due to the curvature of spacetime (space-time) in their vicinity. To test this prediction, Astronomer Royal Frank Watson Dyson and astronomer Arthur Stanley Eddington organized two expeditions—one to Principe Island off West Africa, and the other to Sobral in Brazil—for the purpose of observing the solar eclipse on May 29, 1919; the sun served as the “massive body,” and an eclipse was necessary in order to observe the light coming from other stars.

“The results were in agreement with Einstein’s prediction, the Sobral result being 1.98 ± 0.12 arcsec and the Principe result 1.61 ± 0.3 arcsec [about twice the amounts predicted by Newtonian theory]. Because of the technical difficulty of these observations, the precise value of the deflection remained a controversial issue, which was not laid to rest until the development of radio interferometric techniques in the 1970s” (Twentieth Century Physics III, 1722-23).
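
Einstein's general-relativistic prediction for a light ray grazing the Sun, against which the eclipse results above were compared, is straightforward to reproduce. The sketch below uses standard modern values for the physical constants (not figures from the 1919 paper).

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
c = 2.998e8          # speed of light, m/s
R_sun = 6.957e8      # solar radius, m

# Full general-relativistic deflection for a ray grazing the solar limb;
# the Newtonian (and Einstein's earlier 1911) value is exactly half of this.
deflection_rad = 4 * G * M_sun / (c**2 * R_sun)
deflection_arcsec = math.degrees(deflection_rad) * 3600

print(round(deflection_arcsec, 2))  # ≈ 1.75 arcsec
```

Both eclipse results (1.98 ± 0.12 and 1.61 ± 0.3 arcsec) bracket this predicted value of about 1.75 arcsec, which is why they were read as confirming general relativity.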

On November 6, 1919  Dyson reported to a joint meeting of the Royal Society and the Royal Astronomical Society concerning A Determination of the Deflection of Light by the Sun’s Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919. The paper, reproducing photographs of the eclipse made by Eddington, was published in the Philosophical Transactions of the Royal Society in 1920.

In response to the paper, the president of the Royal Society, Sir J. J. Thomson, said,

“This is the most important result obtained in connection with the theory of gravitation since Newton’s day, and it is fitting that it should be announced at a meeting of the society so closely connected with him. . . . The result [is] one of the highest achievements of human thought” (quoted by Pais, Subtle is the Lord, 305).

On November 7 confirmation of Einstein’s discovery was headlined in The Times of London, and on November 9 in The New York Times. This article was copied or adapted by newspapers all over the world, and it had the effect of turning Einstein, whose fame had previously been limited to the theoretical physics community, into a world-famous celebrity.  For the rest of his life Einstein remained the world’s most famous scientist, and relativity remained the puzzling, but fascinating subject that most people did not believe they could understand.

## Invention of the Iconoscope, the First Electronic Television Camera 1923

In 1923 Vladimir Zworykin, a Russian immigrant to the United States working at Westinghouse Laboratories in Pittsburgh, applied for a patent on the iconoscope, the first electronic television camera. His design, however, was incomplete:

"Vladimir Zworykin is also sometimes cited as the father of electronic television because of his invention of the iconoscope in 1923 and his invention of the kinescope in 1929. His design was one of the first to demonstrate a television system with all the features of modern picture tubes. His previous work with Rosing on electromechanical television gave him key insights into how to produce such a system, but his (and RCA's) claim to being its original inventor was largely invalidated by three facts: a) Zworykin's 1923 patent presented an incomplete design, incapable of working in its given form (it was not until 1933 that Zworykin achieved a working implementation), b) the 1923 patent application was not granted until 1938, and not until it had been seriously revised, and c) courts eventually found that RCA was in violation of the television design patented by Philo Taylor Farnsworth, whose lab Zworykin had visited while working on his designs for RCA.

"The controversy over whether it was first Farnsworth or Zworykin who invented modern television is still hotly debated today. Some of this debate stems from the fact that while Farnsworth appears to have gotten there first, it was RCA that first marketed working television sets, and it was RCA employees who first wrote the history of television. Even though Farnsworth eventually won the legal battle over this issue, he was never able to fully capitalize financially on his invention" (http://www.statemaster.com/encyclopedia/Colour-television, accessed 12-22-2009).

## A Massive Central Library on Microform for Printing on Demand 1925

In 1925 Robert B. Goldschmidt and Paul Otlet published "La Conservation et la diffusion internationale de la pensée" as publication no. 144 of the Institut international de bibliographie (Brussels). This work described their plans for a massive library where each volume existed as master negatives and positives on microform, and where items were printed on demand for interested patrons.

## Kodachrome, the First Color Transparency Film for Cinematography and Still Photography, is Developed 1935 – December 30, 2010

Kodachrome, the first color transparency film, was invented by musicians Leopold Godowsky, Jr. and Leopold Mannes. The project began even before the two young men graduated from high school. After viewing the 1917 film Our Navy in the early two-color additive color system, Prizma Color, Mannes and his friend Godowsky began experimenting with the use of colored filters and film, patenting a new process even before their high school graduation. They continued their experimentation and research while Mannes was studying physics and piano at Harvard and Godowsky was studying violin at UCLA. Eventually, with backing from an investor, the pair was able to convince Kodak of the value of their discoveries. In 1930, they moved to Kodak's Rochester headquarters, and within three years they developed the technique of three-color emulsion on which Kodachrome was based.

Kodachrome 16mm movie film was released for sale in 1935, and in 1936 Kodachrome 35mm still and 8mm movie film were released. To some Kodachrome was the best slide and movie film ever produced. Kodak produced the film and the chemicals required to develop Kodachrome from 1935 to 2009, by which time digital photography had, for the most part, replaced film photography.

According to The New York Times, the last remaining roll of Kodachrome was developed at Dwayne's Photo in Parsons, Kansas on December 30, 2010.

(This entry was last revised on 07-10-2014.)

## Chester Carlson invents Xerography; It Becomes Successful About 20 Years Later 1938 – 1949

In 1938 American physicist, inventor, and patent attorney Chester F. Carlson of Astoria, Queens, New York invented xerography. Originally called electrophotography, xerography did not become a commercial success until the wide adoption of the xerographic copier during the late 1950s.

In 1949 the Haloid Company of Rochester, New York introduced the Model A, the first commercial xerographic copier. Manually operated, it was also known as the Ox Box. An improved version, Camera #1, was introduced in 1950. The company renamed itself Haloid Xerox in 1958, and shortened its name to Xerox Corporation in 1961.

(This entry was last revised on 01-17-2015.)

## Otto Bettmann Founds The Bettmann Archive: the Beginning of "The Visual Age" 1938

The Bettmann Archive, founded in New York in 1936 by Otto Bettmann, a refugee from Nazi Germany, contained 15,000 images by 1938.  Bettmann later characterized this period of time as "the beginning of the visual age." By 1980, the year before Bettmann sold the archive to the Kraus-Thomson Organization, the archive contained 2,000,000 images, carefully selected for their historical value, mainly under the five categories of world events, personalities, lifestyles, advertising art, and art and illustrations.

In 1984 the Kraus-Thomson Organization acquired the extensive United Press International (UPI) collection, containing millions of worldwide news and lifestyle photographs taken by photographers working for United Press International, International News Photos, Acme Newspictures, and Pacific and Atlantic.

In 1995 Corbis, a company controlled by Bill Gates, bought the Bettmann Archive.

"Beginning in 1997, Corbis spent five years selecting images of maximum historical value and saleability for digitization. More than 1.3 million images (26% of the collection) have been edited and 225,000 have been digitized. Because of this effort, more images from the Bettmann Archive are available now than ever before.

"In 2002, the Archive was moved to a state-of-the-art, sub-zero film preservation facility in western Pennsylvania. The 10,000-square-foot underground storage facility is environmentally-controlled, with specific conditions (minus -20°C, relative humidity of 35%) calculated to preserve prints, color transparencies, negatives, photographs, enclosures, and indexing systems" (http://www.corbis.com/BettMann100/Archive/Preservation.asp, accessed 01-17-2010).

## Sealing of the Crypt of Civilization May 25, 1940

On May 25, 1940 Presbyterian minister and president of Oglethorpe University in Brookhaven, Georgia, Thornwell Jacobs sealed the Oglethorpe Atlanta Crypt of Civilization in a ceremony broadcast on Atlanta's WSB radio. It was intended to be opened on May 28, 8113 CE.

Modelled after a chamber in an Egyptian pyramid, the Crypt of Civilization was a subterranean chamber, twenty feet long, ten feet wide, and ten feet high. Among the many elements of the time capsule were microfilm media (film and thin metal) used to store written information, recorded sound, and moving pictures. Apparently little or no print-on-paper material was included, even though by the time the capsule was created there was already ample evidence that print on paper, or writing on parchment, had survived for several thousand years, while microfilm and microform media were new and untested for durability.

"In this room had been a swimming pool, the foundation of which was impervious to water. The floor was raised with concrete with a heavy layer of damp proofing applied. The gallery's extended granite walls were lined with vitreous porcelain enamel embedded in pitch. The crypt had a two-foot thick stone floor and a stone roof seven feet thick. Jacobs consulted the Bureau of Standards in Washington for technical advice for storing the contents of the crypt. Inside would be sealed stainless steel receptacles with glass linings, filled with the inert gas of nitrogen to prevent oxidation or the aging process. A stainless steel door would seal the crypt."

"Articles on the crypt in the New York Times caught the attention of Thomas Kimmwood Peters (1884-1973), an inventor and photographer of versatile experience. Peters had been the only newsreel photographer to film the San Francisco earthquake of 1906. He had worked at Karnak and Luxor, Peters was also the inventor of the first microfilm camera using 35 millimeter film to photograph documents. In 1937 Jacobs appointed Peters as archivist of the crypt."

"From 1937 to 1940, Peters and a staff of student assistants conducted an ambitious microfilming project. The cellulose acetate base film would be placed in hermetically sealed receptacles. Peters believed, based on the Bureau of Standards testing, that the scientifically stored film would last for six centuries; he took however, as a method of precaution, a duplicate metal film, thin as paper. Inside the crypt are microfilms of the greatest classics, including the Bible, the Koran, the Iliad, and Dante's Inferno. Producer David O. Selznick donated an original copy of the script of 'Gone With the Wind.' There are more than 640,000 pages of microfilm from over eight hundred works on the arts and sciences. Peters also used similar methods for capturing and for storing still and motion pictures. Voice recordings of political leaders such as Hitler, Stalin, Mussolini, Chamberlain, and Roosevelt were included, as were voice recordings of Popeye the Sailor and a champion hog caller. To view and to hear these picture and sound records, Peters placed in the vault electric machines, microreaders, and projectors. In the event that electricity would not be in use in 8113 A.D., there is in the crypt a generator operated by a windmill to drive the apparatus as well as a seven power magnifier to read the microbook records by hand. The first item one would see upon entering the chamber is a thoughtful precaution-a machine to teach the English language so that the works would be more readily decipherable if found by people of a strange tongue.

"Thornwell Jacobs envisioned the crypt as a synoptic compilation and thus aimed for a whole 'museum' of not only accumulated formal knowledge of over six thousand years, but also 1930s popular culture. The list of items in the crypt is seemingly endless. All of the items were donated, with contributors as diverse as King Gustav V of Sweden and the Eastman Kodak Company. Some of the more curious items Peters included in the crypt were plastic toys - a Donald Duck, the Lone Ranger, and a Negro doll, as well as a set of Lincoln Logs. Peters also arranged with Anheuser Busch for a specially sealed ampule of Budweiser beer. The chamber of the crypt when finally finished in the spring of 1940, resembled a cell of an Egyptian pyramid, cluttered with artifacts on shelves and on the floor" (http://www.oglethorpe.edu/about_us/crypt_of_civilization/history_of_the_crypt.asp, accessed 04-22-2011).

## Using Microforms to Conserve Library Space 1944

In 1944 American writer, poet, editor, inventor, genealogist, librarian and director of Wesleyan's Olin Memorial Library Fremont Rider published The Scholar and the Future of the Research Library.

In this unusually well designed and produced book for its time Rider detailed the increasing shortage of space in research libraries, and described how his invention of the microcard, an opaque microform, would help to solve this problem. He also claimed that American research libraries were doubling in size every sixteen years—an assertion later proved incorrect.

## The First Phototypesetter 1947

In 1947 the Fotosetter, the first phototypesetter, was invented. The first phototypesetters were mechanical devices that replaced the metal type matrices with matrices carrying the image of the letters. They replaced the caster of hot metal typesetting machines with a photographic unit.

## Dennis Gabor Invents Holography 1947

In 1947 Hungarian electrical engineer and physicist Dennis Gabor, working at British Thomson-Houston, Rugby, England invented holography.

"Holography is a technique that allows the light scattered from an object to be recorded and later reconstructed so that it appears as if the object is in the same position relative to the recording medium as it was when recorded. The image changes as the position and orientation of the viewing system changes in exactly the same way as if the object was still present, thus making the recorded image (hologram) appear three dimensional. Holograms can also be made using other types of waves. The technique of holography can also be used to optically store, retrieve, and process information. While holography is commonly used to display static 3-D pictures, it is not yet possible to generate arbitrary scenes by a holographic volumetric display" (Wikipedia article on holography, accessed 04-26-2009).

## Edwin Land Demonstrates the Polaroid Land Camera Model 95, the First "Instant" Film Camera February 21, 1947

On February 21, 1947, American inventor Edwin H. Land, founder of Polaroid Corporation in Cambridge, Massachusetts, demonstrated an instant camera and its associated film, called the Land Camera. Polaroid originally manufactured sixty units of this first camera, named the Polaroid Land Camera Model 95. It produced prints in about one minute. Fifty-seven were offered for sale at the Jordan Marsh department store in Boston before the 1948 Christmas holiday. Polaroid marketers guessed that the camera and film would remain in stock long enough to manufacture a second run based on customer demand, but all 57 cameras and all of the film were sold on the first day.

As I recall, my mother purchased a Model 95 and used it to take pictures of our young family. It was very exciting and convenient to see the image almost instantly after it was taken, compared to waiting several days or weeks to have film developed and printed. Over the years I owned and used several different later models of the camera. The technology was, of course, superseded by digital photography, but, like its larger cousin Kodak, Polaroid was slow to realize the extent of the disruption, and the final Polaroid "instant" film camera, the Polaroid One 600, was designed as late as 2004, before Polaroid Corporation folded in 2007.


## The First Published Photographs of the Earth Taken From Space April 1947

Photography from the V-2 rocket at altitudes ranging up to 160 kilometers, by T. A. Bergstrahl,  N. R. L. report no. R-3083, issued by the Naval Research Laboratory, Washington, D.C. in April 1947 in an edition of only 47 copies, contains the first published photographs of the earth taken from space. The photographs, which show a large portion of the American southwest, were taken from cameras mounted on a V-2 (V2) rocket launched from the proving ground at White Sands, New Mexico. The rocket, which bore the number 21 but was the 20th V-2 launched at White Sands after number 1 misfired, was one of over 60 V-2 rockets captured from the Germans at the end of World War II in 1945. At that time the German rocketry program at Peenemunde was at least 20 years ahead of any other program. As part of Project Paperclip, the United States government brought both the captured V-2s and over 100 German rocketry experts (headed by Wernher von Braun) to America, where they began what became the U. S. space program. In 1946 the Upper Atmosphere Research Panel (also known as the V-2 panel) was formed to oversee a program of high-altitude experiments conducted using the V-2 rockets. On October 24, 1946 the research team was able to obtain photographs of the Earth taken from 65 miles above the surface; however, perhaps for quality reasons, these photographs were not published until 1950 (see Newell, High Altitude Rocket Research p. 288).

Bergstrahl's report announced that photographs were taken from more than 100 miles above the earth. “On 7 March 1947 the twentieth V-2 to be launched in America took to the air from the Army Ordnance Proving Ground at White Sands, New Mexico. As on several of the previous flights, an attempt was made to obtain photographs of the features of interest on the rocket and, of course, of the earth. In this attempt the effort met with considerable success. Included among the group of pictures obtained are the first ever to be taken from altitudes greater than 160 kilometers (100 miles). The quality of the photographs is fairly good. For the first time, in pictures taken at such high altitudes, it is possible to recognize clearly many geographical features. In addition a large number and variety of cloud formations were recorded by the cameras and other information of meteorological value” (p. 1).

Photographs 11 and 12 are especially notable. Number 11 includes an overlay showing landmarks in New Mexico, Arizona and the Gulf of California. The caption to number 12 states that “this picture covers approximately 500,000 square miles of southwestern United States and northern Mexico. The photographs [making up the composite] do not match exactly due to the varying camera angles.” Newell, High Altitude Rocket Research (1953), pp. 284-288. Krause, “High altitude research with V-2 rockets,” Proceedings of the American Philosophical Society 91 (1947) 430-446. Reichhart, “The first photo from space,” Air & Space Magazine, Smithsonian Institution, 1 Nov. 2006.

According to Reichhart, photography from V-2s launched at White Sands began on October 24, 1946, and there was a Universal newsreel on the topic issued in November 1946.

In 1947 the U.S. War Department produced a documentary film on the launching of V-2 rockets from White Sands. The documentary excluded reference to photography done from the V-2 rockets.

## Rosalind Franklin's Photo #51 of Crystalline DNA May 2 – May 6, 1952

Between May 2 and May 6, 1952 English molecular biologist Rosalind Franklin, working at King's College London, took photograph No. 51 of the B-form of crystalline DNA. This was her finest photograph of the substance, showing the characteristic X-shaped "Maltese cross" more clearly than ever before.

About eight months later, on January 26, 1953, Franklin showed this photograph to physicist and molecular biologist Maurice Wilkins. Four days later, on January 30, 1953 Wilkins showed the photograph to James Watson.

The following day Watson asked laboratory director Lawrence Bragg if he could order model components from the Cavendish Laboratory machine shop. Bragg agreed. Watson's account of Franklin's photo 51 to Francis Crick confirmed that they had the vital statistics to build a B-form model: the photo confirmed the 20Å diameter, with a 3.4Å distance between bases. This, plus the repeat distance of 34Å, a helix slope of about 40°, and the likelihood of 2 chains, not 3, seemed to be sufficient to build a model.
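
The model-building arithmetic behind those "vital statistics" is simple to check: dividing the 34Å helical repeat by the 3.4Å rise per base gives the number of base pairs in one full turn of the helix.

```python
# Key B-form dimensions read from photograph 51 (in ångströms):
diameter = 20.0        # helix diameter
rise_per_base = 3.4    # spacing between stacked base pairs
helical_repeat = 34.0  # distance along the axis for one full turn

bases_per_turn = helical_repeat / rise_per_base
print(round(bases_per_turn))  # 10 base pairs per helical turn
```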

Franklin's file copy of Photograph 51, labeled in her handwriting, is preserved at the J. Craig Venter Institute.

## The Beginning of Positron Emission Tomography (PET) 1953

In 1953 William H. Sweet and Gordon L. Brownell at Massachusetts General Hospital, Boston, described the first positron imaging device, and the first attempt to record three-dimensional data in positron detection, in their paper entitled "Localization of brain tumors with positron emitters," Nucleonics XI (1953) 40-45. This was the beginning of positron emission tomography (PET).

"Despite the relatively crude nature of this imaging instrument, the brain images were markedly better than those obtained by other imaging devices. It also contained several features that were incorporated into future positron imaging devices. Data were obtained by translation of two opposed detectors using coincidence detection with mechanical motion in two dimensions and a printing mechanism to form a two-dimensional image of the positron source. This was our first attempt to record three-dimensional data in positron detection" (Brownell, A History of Positron Imaging [1999], accessed 12-25-2008)

## The Beginning of Medical Ultrasonography October 29, 1953

On October 29, 1953 Inge Edler and Carl Hellmuth Hertz at Lund University in Sweden obtained the first recording of the ultrasound echo from the heart. This was the beginning of echocardiography from which diagnostic sonography, or medical ultrasonography, evolved.

"The principle for echocardiography is as follows. The vibrations in a piezoelectric crystal create a beam of high frequency sound waves that are transmitted into the chest. When the waves pass an interface, such as between the heart wall and the surrounding area or the surface of a cardiac valve, some of the sound is reflected, creating an echo. The crystal is reset, enabling it to receive the echo. The longer it took for the echo to return to the crystal, the longer the distance between the crystal and the surface that was the source of the echo. The principle was the same as for sonar, used to measure the depth of water under a vessel, only in this case you measure the distance from the structure that is the source of the echo to the chest wall."

Edler, Inge & Hertz, Carl Hellmuth, "The Use of the Ultrasonic Reflectoscope for Continuous Recording of the Movements of Heart Walls," Kungl. Fysiogr. Sällsk. i Lund Förh. 24 (1954) 1-19.

## Changes in Tissue Density Can be Computed 1956 – 1964

In work initiated at the University of Cape Town and Groote Schuur Hospital in early 1956, and continued briefly in mid-1957, South African-born American physicist Allen M. Cormack showed that changes in tissue density could be computed from x-ray data. His results were subsequently published in two papers:

"Representation of a Function by its Line Integrals, with Some Radiological Applications," Journal of Applied Physics 34 (1963) 2722-27; "Representation of a Function by its Line Integrals, with Some Radiological Applications. II," Journal of Applied Physics 35 (1964) 2908-13.

Because of limitations in computing power no machine was constructed during the 1960s. Cormack's papers generated little interest until Godfrey Hounsfield and colleagues invented computed tomography, and built the first CT scanner in 1971, creating a real application of Cormack's theories.
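Cormack's result, that an interior density distribution can be computed from its line integrals (x-ray projections), can be illustrated with a toy example. The sketch below is my own illustration rather than Cormack's mathematics: it recovers the location of a single dense spot in a grid from just two projections by unfiltered back-projection.

```python
# Toy illustration (not Cormack's actual method): locate a lone dense
# cell from its line integrals at two angles, by smearing each
# projection back across the grid (unfiltered back-projection).

N = 64
row_src, col_src = 40, 20  # position of the dense spot (assumed known
                           # here only so we can check the answer)

grid = [[0.0] * N for _ in range(N)]
grid[row_src][col_src] = 1.0

# Line integrals: sums along rows (0 degrees) and columns (90 degrees)
proj_rows = [sum(row) for row in grid]
proj_cols = [sum(grid[r][c] for r in range(N)) for c in range(N)]

# Back-projection: each cell accumulates the projections through it
backproj = [[proj_rows[r] + proj_cols[c] for c in range(N)]
            for r in range(N)]

# The brightest cell of the back-projection coincides with the spot
peak = max(
    ((r, c) for r in range(N) for c in range(N)),
    key=lambda rc: backproj[rc[0]][rc[1]],
)
print(peak)  # (40, 20)
```

Real CT reconstruction requires many angles and filtering of the projections, which is where the computing power missing in the 1960s came in.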

## Beginning of Doppler Ultrasound 1957

In 1957 Shigeo Satomura of the Institute of Scientific and Industrial Research, Osaka University, demonstrated the application of the Doppler shift in the frequency of ultrasound backscattered by moving cardiac structures.

This was the beginning of Doppler ultrasound for evaluating blood flow and pressure by bouncing high-frequency sound waves (ultrasound) off red blood cells.

S. Satomura, "Ultrasonic Doppler Method for the Inspection of Cardiac Functions," J. Acoust. Soc. Amer. 29 (1957) 1181-85.
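The method infers the velocity of moving scatterers from the shift between transmitted and backscattered frequencies via the standard Doppler relation, shift = 2 * f0 * v * cos(angle) / c. A hedged sketch (the numbers below are illustrative, not from Satomura's paper):

```python
import math

# Approximate speed of sound in soft tissue (assumed value)
C_TISSUE = 1540.0  # m/s

def scatterer_velocity(f0_hz, shift_hz, beam_angle_deg):
    """Velocity of moving scatterers (e.g. red blood cells) from the
    Doppler shift of backscattered ultrasound.

    Solves shift = 2 * f0 * v * cos(angle) / c for v.
    """
    cos_theta = math.cos(math.radians(beam_angle_deg))
    return shift_hz * C_TISSUE / (2.0 * f0_hz * cos_theta)

# A 2 kHz shift on a 5 MHz beam insonating at 60 degrees
v = scatterer_velocity(5e6, 2000.0, 60.0)
print(round(v, 3), "m/s")  # 0.616 m/s
```

The cosine term is why the beam angle matters clinically: at 90 degrees to the flow there is no measurable shift at all.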

## Invention of the Image Scanner; Creation of the First Digital Image 1957

In 1957 Russell A. Kirsch and a team at the U.S. National Bureau of Standards, using the SEAC computer, built the first image scanner—a drum scanner. Using that device they created the first digital image by scanning a photograph:

"The first image ever scanned on this machine was a 5 cm square photograph of Kirsch's then-three-month-old son, Walden. The black and white image had a resolution of 176 pixels on a side" (Wikipedia article on Image Scanner, accessed 04-01-2009).

## The First Obstetrical or Gynecological Sonograms 1958

In 1958 Ian Donald, Regius Professor of Midwifery at the University of Glasgow, and his colleagues John MacVicar, an obstetrician, and Tom Brown, an engineer, published a paper in The Lancet entitled "Investigation of Abdominal Masses by Pulsed Ultrasound." This article described their experience using an ultrasound scanner on 100 patients, and included 12 illustrations of various gynecologic disorders (e.g., ovarian cysts, fibroids) as well as demonstration of obstetric findings such as the fetal skull at 34 weeks' gestation, "hydramnios" (polyhydramnios), and twins in breech presentation. The somewhat grainy and indistinct "Compound B-mode contact scanner" images were the first published obstetrical or gynecological sonograms.

J. M. Norman (ed), Morton's Medical Bibliography 5th ed. (1991) no. 2682.

## The Corona Satellite Series: America's First Imaging Satellite Program June 1959 – May 31, 1972

In June 1959 KH-1, the first of the Corona series of American strategic imaging reconnaissance satellites was launched. Produced and operated by the Central Intelligence Agency Directorate of Science and Technology with assistance from the U.S. Air Force, the Corona satellites were used for photographic surveillance of the Soviet Union, the People's Republic of China and other areas. The 145th and last Corona satellite was launched on May 25, 1972 with its film recovered on May 31, 1972. Over its lifetime, CORONA provided photographic coverage totaling approximately 750,000,000 square miles of the earth’s surface.

"The Corona satellites used 31,500 feet (9,600 meters) of special 70 millimeter film with 24 inch (60 centimeter) focal length cameras. Initially orbiting at altitudes from 165 to 460 kilometers above the surface of the Earth, the cameras could resolve images on the ground down to 7.5 meters in diameter. The two KH-4 systems improved this resolution to 2.75 meters and 1.8 meters respectively, because they operated at lower orbital altitudes. . . .

"The first dozen or more Corona satellites and their launches were cloaked with disinformation as being part of a space technology development program called the Discoverer program. The first test launches for the Corona/Discoverer were carried out early in 1959. The first Corona launch containing a camera was carried out in June 1959 with the cover name Discoverer 4. This was a 750 kilogram satellite launched by a Thor-Agena rocket.

"The plan for the Corona program was for its satellites to return canisters of exposed film to the Earth in re-entry capsules, called by the slang term "film buckets", which were to be recovered in mid-air by specially equipped U.S. Air Force planes during their parachute descent. (The buckets were designed to float on the water for a short period of time for possible recovery by U.S. Navy ships, and then to sink if the recovery failed, via a water-dissolvable plug made of salt at the base of the capsule. This was for secrecy purposes.)" (Wikipedia article on Corona (satellite) accessed 11-29-2010).

"The return capsule of the Discoverer 13 mission, which launched August 10, 1960, was successfully recovered the next day. This was the first time that any object had been recovered successfully from orbit. After the mission of Discoverer 14, launched on August 18, 1960, its film bucket was successfully retrieved two days later by a C-119 Flying Boxcar transport plane. This was the first successful return of photographic film from orbit."

"CORONA enabled the US to specify verifiable terms of the Strategic Arms Limitation Treaty (SALT) with the Soviet Union in 1971. US negotiators confidently knew that photointerpreters could monitor changes in the size and characteristics of missile launchers, bombers, and submarines. Satellite imagery became the mainstay of the US arms-control verification process" (Central Intelligence Agency, CORONA: America's First Imaging Satellite Program, accessed 11-08-2014).

## The First Photograph of Earth from an Orbiting Satellite August 14, 1959

The first photograph of the earth from an orbiting satellite was taken by the U.S. Explorer 6 on August 14, 1959. The crude image shows a sun-lit area of the Central Pacific Ocean and its cloud cover. The picture was made when the satellite was about 17,000 miles above the surface of the earth, while it was crossing Mexico. The signals were received at the tracking station at South Point, Hawaii (also known as Ka Lae).

(This entry was last revised on 11-08-2014.)

## The Xerox 914 September 16, 1959

On September 16, 1959 Haloid Xerox, Rochester, New York, introduced the Xerox 914, the first successful commercial plain paper xerographic copier, roughly the size of a desk.

". . .  commercial models were not available until March 1960. The first machine, delivered to a Pennsylvania metal-fastener maker, weighed nearly 650 pounds. It needed a carpenter to uncrate it, an employee with 'key operator' training, and its own 20-amp circuit. In an episode of Mad Men, set in 1962, the arrival of the hulking 914 helps get Peggy Olson her own office, after she tells her boss, 'It’s hard to do business and be credible when I’m sharing with a Xerox machine' " (http://www.theatlantic.com/magazine/archive/2010/07/the-mother-of-all-invention/8123/, accessed 06-11-2010).

## The TIROS 1 Satellite Transmits the First Television Picture from Space April 1, 1960

On April 1, 1960 the first Television InfraRed Observation Satellite (TIROS 1), the first successful low-Earth orbital weather satellite, was launched by NASA from Cape Canaveral, Florida. That day the satellite transmitted the first television picture of the earth from space.

## The First to Create Three-Dimensional Images of the Human Body Using a Computer 1964

"Boeing Man" or "Human Figure," a wireframe drawing printed on a Gerber Plotter.  It was used as a standard figure of a pilot.

In 1964 William A. Fetter, an art director at The Boeing Company in Seattle, Washington, supervised development of a computer program that allowed him to create the first three-dimensional images of the human body through computer graphics. Using this program Fetter and his team produced the first computer model of a human figure for use in the study of aircraft cockpit design. It was called the “First Man” or "Boeing Man." Though Fetter's wire frame drawings could be called commercial art, they were of a high aesthetic standard.

Herzogenrath & Nierhoff-Wielk, Ex-Machina–Frühe Computergrafik bis 1979. Die Sammlung Franke . . . Ex-Machina–Early Computer Graphics up to 1979 (2007) 239.

## Bitzer & Willson Invent the First Plasma Video Display (Neon Orange) 1964

In 1964 Donald Bitzer, H. Gene Slottow, and Robert Willson at the University of Illinois at Urbana-Champaign invented the first plasma video display for the PLATO Computer System.

The display was monochrome neon orange and incorporated both memory and bitmapped graphics. Built by the Owens-Illinois Glass Company, the flat panels were marketed under the name "Digivue."

## Woodrow Bledsoe Originates Automated Facial Recognition 1964 – 1966

From 1964 to 1966 Woodrow W. Bledsoe, along with Helen Chan and Charles Bisson of Panoramic Research, Palo Alto, California, researched programming computers to recognize human faces (Bledsoe 1966a, 1966b; Bledsoe and Chan 1965). Because the funding was provided by an unnamed intelligence agency, little of the work was published. Given a large database of images—in effect, a book of mug shots—and a photograph, the problem was to select from the database a small set of records such that one of the image records matched the photograph. The success of the program could be measured in terms of the ratio of the answer list to the number of records in the database. Bledsoe (1966a) described the following difficulties:

" 'This recognition problem is made difficult by the great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc. Some other attempts at facial recognition by machine have allowed for little or no variability in these quantities. Yet the method of correlation (or pattern matching) of unprocessed optical data, which is often used by some researchers, is certain to fail in cases where the variability is great. In particular, the correlation is very low between two pictures of the same person with two different head rotations.'

"This project was labeled man-machine because the human extracted the coordinates of a set of features from the photographs, which were then used by the computer for recognition. Using a GRAFACON, or RAND TABLET, the operator would extract the coordinates of features such as the center of pupils, the inside corner of eyes, the outside corner of eyes, point of widow's peak, and so on. From these coordinates, a list of 20 distances, such as width of mouth and width of eyes, pupil to pupil, were computed. These operators could process about 40 pictures an hour. When building the database, the name of the person in the photograph was associated with the list of computed distances and stored in the computer. In the recognition phase, the set of distances was compared with the corresponding distance for each photograph, yielding a distance between the photograph and the database record. The closest records are returned.

"This brief description is an oversimplification that fails in general because it is unlikely that any two pictures would match in head rotation, lean, tilt, and scale (distance from the camera). Thus, each set of distances is normalized to represent the face in a frontal orientation. To accomplish this normalization, the program first tries to determine the tilt, the lean, and the rotation. Then, using these angles, the computer undoes the effect of these transformations on the computed distances. To compute these angles, the computer must know the three-dimensional geometry of the head. Because the actual heads were unavailable, Bledsoe (1964) used a standard head derived from measurements on seven heads.

"After Bledsoe left PRI [Panoramic Research, Inc.] in 1966, this work was continued at the Stanford Research Institute, primarily by Peter Hart. In experiments performed on a database of over 2000 photographs, the computer consistently outperformed humans when presented with the same recognition tasks (Bledsoe 1968). Peter Hart (1996) enthusiastically recalled the project with the exclamation, 'It really worked!' " (Faculty Council, University of Texas at Austin, In Memoriam Woodrow W. Bledsoe, accessed 05-15-2009).

Bledsoe, W. W. 1964. The Model Method in Facial Recognition, Technical Report PRI 15, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W., and Chan, H. 1965. A Man-Machine Facial Recognition System-Some Preliminary Results, Technical Report PRI 19A, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W. 1966a. Man-Machine Facial Recognition: Report on a Large-Scale Experiment, Technical Report PRI 22, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W. 1966b. Some Results on Multicategory Pattern Recognition. Journal of the Association for Computing Machinery 13(2):304-316.

Bledsoe, W. W. 1968. Semiautomatic Facial Recognition, Technical Report SRI Project 6693, Stanford Research Institute, Menlo Park, California.
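The man-machine scheme quoted above, in which each face is reduced by a human operator to a vector of inter-feature distances and recognition returns the database records nearest to the probe vector, can be sketched as follows. This is a hypothetical reconstruction: the feature distances, names, and the Euclidean metric are my own assumptions, not details from Bledsoe's reports.

```python
import math

# Invented example data: each record maps a name to a short list of
# inter-feature distances (e.g. pupil-to-pupil, mouth width, ...)
# as a human operator might have measured them.
database = {
    "smith":  [62.0, 38.5, 51.0, 33.0],
    "jones":  [58.5, 41.0, 49.5, 30.5],
    "garcia": [61.5, 39.0, 50.0, 32.5],
}

def closest_matches(probe, db, k=2):
    """Return the k names whose distance vectors are nearest the probe.

    Ranks records by Euclidean distance between the probe's vector of
    inter-feature distances and each stored vector.
    """
    ranked = sorted(db, key=lambda name: math.dist(probe, db[name]))
    return ranked[:k]

print(closest_matches([61.8, 38.7, 50.8, 32.9], database))
# ['smith', 'garcia']
```

Returning a short candidate list rather than a single answer mirrors the quoted success measure: the ratio of the answer list to the size of the database.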

## Aaron Klug Invents Digital Image Processing 1966

In 1966 English molecular biologist Aaron Klug at the University of Cambridge formulated a method for digital image processing of two-dimensional images.

A. Klug and D. J. de Rosier, “Optical filtering of electron micrographs: Reconstruction of one-sided images,” Nature 212 (1966): 29-32.

## Cyrus Levinthal Builds the First System for Interactive Display of Molecular Structures 1966

In 1966, using Project MAC, an early time-sharing system at MIT, Cyrus Levinthal built the first system for the interactive display of molecular structures.

"This program allowed the study of short-range interaction between atoms and the "online manipulation" of molecular structures. The display terminal (nicknamed Kluge) was a monochrome oscilloscope (figures 1 and 2), showing the structures in wireframe fashion (figures 3 and 4). Three-dimensional effect was achieved by having the structure rotate constantly on the screen. To compensate for any ambiguity as to the actual sense of the rotation, the rate of rotation could be controlled by a globe-shaped device on which the user rested his/her hand (an ancestor of today's trackball). Technical details of this system were published in 1968 (Levinthal et al.). What could be the full potential of such a set-up was not completely settled at the time, but there was no doubt that it was paving the way for the future. Thus, this is the conclusion of Cyrus Levinthal's description of the system in Scientific American (p. 52):

It is too early to evaluate the usefulness of the man-computer combination in solving real problems of molecular biology. It does seem likely, however, that only with this combination can the investigator use his "chemical insight" in an effective way. We already know that we can use the computer to build and display models of large molecules and that this procedure can be very useful in helping us to understand how such molecules function. But it may still be a few years before we have learned just how useful it is for the investigator to be able to interact with the computer while the molecular model is being constructed.

"Shortly before his death in 1990, Cyrus Levinthal penned a short biographical account of his early work in molecular graphics."

In January 2014 two short films produced with the interactive molecular graphics and modeling system devised by Cyrus Levinthal and his collaborators in the mid-1960s were available online.

## NCR Issues the Smallest Published Edition of the Bible, and the First to Reach the Moon 1966

In 1966 the Research and Development department of National Cash Register (NCR) of Dayton, Ohio, produced an edition of all 1245 pages of the World Publishing Company's No. 715 Bible on a single 2" x 1-3/4" photochromatic microform (PCMI). The microform contained both the Old Testament on 773 pages and the New Testament on 746 pages, and was issued in a paper sleeve with the title on the cover and information about the process inside and on the back.

On the microform each page of double-column Bible text was about 0.5 mm wide and 1 mm high. Each text character was 8 µm high (i.e., 8/1000ths of a millimeter). NCR noted on the paper wallet provided with the microform that this represented a linear reduction of about 250:1 or an area reduction of 62,500:1. This would correspond to the original text being circa 2 mm high. To put this into perspective, NCR also noted that if this reduction were used on the millions of books on the 270+ miles of shelving in the Library of Congress, the entire Library of Congress as it existed in 1966 could be stored in six standard filing cabinets.
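The arithmetic on NCR's wallet is easy to verify: area reduction is the square of linear reduction, and an 8 µm character enlarged 250 times is 2 mm tall. A quick check:

```python
linear_reduction = 250

# Area scales as the square of the linear reduction
area_reduction = linear_reduction ** 2
print(area_reduction)  # 62500

# An 8-micrometre (0.008 mm) character enlarged back to the original
char_on_film_mm = 0.008
original_char_mm = round(char_on_film_mm * linear_reduction, 6)
print(original_char_mm)  # 2.0
```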

♦ In 1971 Apollo 14 lunar module pilot Edgar D. Mitchell carried 100 of the microform bibles aboard the lunar module Antares, as confirmed by NASA's official manifest. Launched January 31, 1971, Mitchell and the bibles reached the Fra Mauro formation of the Moon on February 5 aboard the Antares before returning to the command module for the voyage back to Earth. This was the first edition of the Bible to reach the Moon, and probably the first book of any kind to reach the Moon and return. A second parcel containing 200 microform Bibles flew in Edgar Mitchell's command module "PPK" bag in lunar orbit, and did not land. These 200 copies represented extra Bibles to be used if something happened to the lunar module copies.

## Stephen A. Benton Invents the Rainbow Hologram or Benton Hologram 1968

In 1968 Stephen A. Benton, then of Polaroid Corporation, and later at MIT's Media Lab, invented the Benton hologram or rainbow hologram, a hologram designed to be viewed under white light illumination rather than the laser light that was required to view holograms before this invention.

"The rainbow holography recording process uses a horizontal slit to eliminate vertical parallax in the output image, greatly reducing spectral blur while preserving three-dimensionality for most observers. A viewer moving up or down in front of a rainbow hologram sees changing spectral colors rather than different vertical perspectives. Stereopsis and horizontal motion parallax, two relatively powerful cues to depth, are preserved. The holograms found on credit cards are examples of rainbow holograms" (Wikipedia article on rainbow hologram, accessed 11-23-2012).

## Aaron Klug Invents Three-Dimensional Image Processing January 1968

In January 1968 English molecular biologist Aaron Klug described techniques for the reconstruction of three-dimensional structures from electron micrographs, thus founding the processing of three-dimensional digital images.

D. J. de Rosier and A. Klug, “Reconstruction of three dimensional structures from electron micrographs,” Nature 217 (1968) 130-34.

## Cybernetic Serendipity: The First Widely-Attended International Exhibition of Computer Art August 2 – October 20, 1968

From August 2 to October 20, 1968 Cybernetic Serendipity: The Computer and the Arts was exhibited at the Institute of Contemporary Arts in London, curated by British art critic, editor, and Assistant Director of the Institute of Contemporary Arts Jasia Reichardt, at the suggestion of Max Bense. This was the first widely attended international exhibition of computer art, and the first exhibition to attempt to demonstrate all aspects of computer-aided creative activity: art, music, poetry, dance, sculpture, animation.

"It drew together 325 participants from many countries; attendance figures reached somewhere between 45,000 and 60,000 (accounts differ) and it received wide and generally positive press coverage ranging from the Daily Mirror newspaper to the fashion magazine Vogue. A scaled-down version toured to the Corcoran Gallery in Washington DC and then the Exploratorium, the museum of science, art and human perception in San Francisco. It took Reichardt three years of fundraising, travelling and planning" (Mason, A Computer in the Art Room: The Origins of British Computer Arts 1950-80 [2008] 101-102).

For the catalogue of the show Reichardt edited a special issue of Studio International magazine, consisting of 100 pages with 300 images, publication of which coincided with the exhibition in 1968. The color frontispiece reproduced a color computer graphic by the American John C. Mott-Smith "made by time-lapse photography successively exposed through coloured filters, of an oscilloscope connected to a computer." The cover of the special issue was designed by the Polish-British painter, illustrator, film-maker, and stage designer Franciszka Themerson, incorporating computer graphics from the exhibition. Laid into copies of the special issue were 4 leaves entitled "Cybernetic Serendipity Music," each page providing a program for one of eight tapes of music played during the show. This information presumably was not available in time to be printed in the issue of Studio International.

Reichardt's Introduction (p. 5) included the following:

"The exhibition is divided into three sections, and these sections are represented in the catalogue in a different order:

"1. Computer-generated graphics, computer-animated films, computer-composed and -played music, and computer poems and texts.

"2. Cybernetic devices as works of art, cybernetic environments, remote-controlled robots and painting machines.

"3. Machines demonstrating the uses of computers and an environment dealing with the history of cybernetics.

"Cybernetic Serendipity deals with possibilities rather than achievements, and in this sense it is prematurely optimistic. There are no heroic claims to be made because computers have so far neither revolutionized music, nor art, nor poetry, the same way that they have revolutionized science.

"There are two main points which make this exhibition and this catalogue unusual in the contexts in which art exhibitions and catalogues are normally seen. The first is that no visitor to the exhibition, unless he reads all the notes relating to all the works, will know whether he is looking at something made by an artist, engineer, mathematician, or architect. Nor is it particularly important to know the background of all the makers of the various robots, machines and graphics; it will not alter their impact, although it might make us see them differently.

"The other point is more significant.

"New media, such as plastics, or new systems such as visual music notation and the parameters of concrete poetry, inevitably alter the shape of art, the characteristics of music, and the content of poetry. New possibilities extend the range of expression of those creative people whom we identify as painters, film makers, composers and poets. It is very rare, however, that new media and new systems should bring in their wake new people to become involved in creative activity, be it composing music, drawing, constructing or writing.

"This has happened with the advent of computers. The engineers for whom the graphic plotter driven by a computer represented nothing more than a means of solving certain problems visually, have occasionally become so interested in the possibilities of this visual output, that they have started to make drawings which bear no practical application, and for which the only real motives are the desire to explore, and the sheer pleasure of seeing a drawing materialize. Thus people who would never have put pencil to paper, or brush to canvas, have started making images, both still and animated, which approximate and often look identical to what we call 'art' and put in public galleries.

"This is the most important single revelation of this exhibition."

Some copies of the special issue were purchased by Motif Editions of London. Those copies do not include the ICA logo on the upper cover and do not print the price of 25s. They also substitute two blanks for the two leaves of ads printed in the back of the regular issue. They do not include the separate 4 leaves of programs of computer music. These special copies were sold by Motif Editions with a large (75 x 52 cm) portfolio containing seven 30 x 20 inch color lithographs with a descriptive table of contents. The artists included Masao Komura/Makoto Ohtake/Koji Fujino (Computer Technique Group); Masao Komura/Kunio Yamanaka (Computer Technique Group); Maughan S. Mason; Boeing Computer Graphics; Kerry Strand; Charles "Chuck" Csuri/James Shaffer; and Donald K. Robbins. The art works were titled respectively 'Running Cola is Africa', 'Return to Square', 'Maughanogram', 'Human Figure', 'The Snail', 'Random War' and '3D Checkerboard Pattern'. Copies of the regular edition contained a full-page ad for the Motif Editions portfolio for sale at £5 plus postage or £1 plus postage for individual prints.

## Flickr, the Photo & Video Sharing Social Networking Site, is Launched February 2004

In February 2004 Flickr, the photo and video sharing and social networking site, was launched by Ludicorp, a Vancouver, Canada, based company founded by Stewart Butterfield and Caterina Fake. It emerged out of tools originally created for Ludicorp's Game Neverending, a web-based massively multiplayer online game. Its organizational tools allowed photos to be tagged and browsed by folksonomic means.

Ludicorp and Flickr were purchased by Yahoo in March 2005.

"Yahoo reported in June 2011 that Flickr had a total of 51 million registered members and 80 million unique visitors. In August 2011 the site reported that it was hosting more than 6 billion images and this number continues to grow steadily according to reporting sources." (Wikipedia article on Flickr, accessed 03-23-2012).

## Image Manipulation in Scientific Publications July 6, 2004

On July 6, 2004 The Journal of Cell Biology began screening digital images submitted with electronic manuscripts to determine whether the images had been manipulated in ways that misrepresented experimental results. Screening the images of a single paper took about 30 minutes.

## Google Earth is Launched 2005

In 2005 Google launched Google Earth, a virtual globe, map and geographical information program, which mapped the Earth by the superimposition of images obtained by satellite. The program, which Google acquired when it purchased Keyhole, Inc., was originally called EarthViewer 3D.

## The "Selfie" Social Media Phenomenon Circa 2005

"In the early 2000s, before Facebook became the dominant online social network, self-taken photographs were particularly common on MySpace. However, writer Kate Losse recounts that between 2006 and 2009 (when Facebook became more popular than MySpace), the "MySpace pic" (typically "an amateurish, flash-blinded self-portrait, often taken in front of a bathroom mirror") became an indication of bad taste for users of the newer Facebook social network. Early Facebook portraits, in contrast, were usually well-focused and more formal, taken by others from distance. In 2009 in the image hosting and video hosting website Flickr, Flickr users used 'selfies' to describe seemingly endless self-portraits posted by teenage girls. According to Losse, improvements in design—especially the front-facing camera copied by the iPhone 4 (2010) from Korean and Japanese mobile phones, mobile photo apps such as Instagram, and selfie sites such as ItisMee—led to the resurgence of selfies in the early 2010s.

"Initially popular with young people, selfies gained wider popularity over time. By the end of 2012, Time magazine considered selfie one of the "top 10 buzzwords" of that year; although selfies had existed long before, it was in 2012 that the term "really hit the big time". According to a 2013 survey, two-thirds of Australian women age 18–35 take selfies—the most common purpose for which is posting on Facebook. A poll commissioned by smartphone and camera maker Samsung found that selfies make up 30% of the photos taken by people aged 18–24.

"By 2013, the word "selfie" had become commonplace enough to be monitored for inclusion in the online version of the Oxford English Dictionary. In November 2013, the word "selfie" was announced as being the "word of the year" by the Oxford English Dictionary, which gave the word itself an Australian origin.

"Selfies have also been taken beyond the Earth. A space selfie is a selfie that is taken in space. These include selfies taken by astronauts, by machines, and by an indirect method in which a self-portrait photograph taken on Earth is retaken in space" (Wikipedia article on Selfie, accessed 02-27-2014).

## Connectomes: Elements of Connections Forming the Human Brain September 30, 2005

On September 30, 2005 neuroscientists Olaf Sporns of Indiana University, Giulio Tononi of the University of Wisconsin, and Rolf Kötter of Heinrich Heine University, Düsseldorf, Germany, published "The Human Connectome: A Structural Description of the Human Brain," PLoS Computational Biology 1 (4). This paper and the PhD thesis of Patric Hagmann of the Université de Lausanne, From diffusion MRI to brain connectomics, coined the term connectome.

In their 2005 paper Sporns et al. wrote:

"To understand the functioning of a network, one must know its elements and their interconnections. The purpose of this article is to discuss research strategies aimed at a comprehensive structural description of the network of elements and connections forming the human brain. We propose to call this dataset the human 'connectome,' and we argue that it is fundamentally important in cognitive neuroscience and neuropsychology. The connectome will significantly increase our understanding of how functional brain states emerge from their underlying structural substrate, and will provide new mechanistic insights into how brain function is affected if this structural substrate is disrupted."

In his 2005 Ph.D. thesis, From diffusion MRI to brain connectomics, Hagmann wrote:

"It is clear that, like the genome, which is much more than just a juxtaposition of genes, the set of all neuronal connections in the brain is much more than the sum of their individual components. The genome is an entity it-self, as it is from the subtle gene interaction that [life] emerges. In a similar manner, one could consider the brain connectome, set of all neuronal connections, as one single entity, thus emphasizing the fact that the huge brain neuronal communication capacity and computational power critically relies on this subtle and incredibly complex connectivity architecture" (Wikipedia article on Connectome, accessed 12-28-2010).

## Pixar at MOMA December 14, 2005

On December 14, 2005 the Museum of Modern Art (MoMA), New York, opened PIXAR: 20 Years of Animation:

"The Most Extensive Gallery Exhibition that MoMA has ever devoted to Animation along with a Retrospective of Pixar Features and Shorts."

Notably MoMA found it unnecessary to characterize the exhibition as "computer animation" since by this time virtually all animation was done by computer. They published a 175-page printed catalogue of the exhibition.

## Disney Acquires Pixar January 24, 2006

On January 24, 2006 The Walt Disney Company, born in the days of manual animation, acquired Pixar, the computer animation company, making Steve Jobs the largest Disney stockholder.

## 92% of Cameras Sold are Digital February 2006

The Canon A530, considered by many to be one of the best digital cameras available in 2006

By some estimates 92 percent of all cameras sold in 2006 were digital.

## Yahoo and Reuters Found "YouWitnessNews" December 5, 2006

The Reuters logo

The You Witness News logo

On December 5, 2006 Yahoo and Reuters introduced programs to place photographs and videos of news events submitted by the public, including cell phone photos and videos, throughout Reuters.com and Yahoo's new service, YouWitnessNews. Reuters said that in 2007 it would also start to distribute some of the submissions to the thousands of print, online and broadcast media outlets that subscribed to its news service. Reuters also said that it hoped to develop a service devoted entirely to user-submitted photographs and video.

## Photosynth Demonstrated March 2007

Blaise Agüera y Arcas

The Photosynth interface

In March 2007 physicist and software engineer Blaise Agüera y Arcas, architect of Seadragon and co-creator of Photosynth, demonstrated Photosynth in a video downloadable at the TED website at this link.

Using techniques of computational bibliography, in collaboration with Paul Needham at Princeton's Scheide Library, Agüera y Arcas also did significant original research in the technology of the earliest printing from movable type.

## Google Introduces Street View in Google Maps May 25, 2007 – May 12, 2008

Google Street View image of St Johns Street in Manchester UK showing 8 different possible views

An example of blurred faces in Google Street View

One of the vehicles used to record the images for Google Street View

On May 25, 2007 Google introduced the Street View feature of Google Maps in the United States. It provided panoramic views from positions along many streets, eventually including even the very small road on which I live in Novato, California, an indication of how comprehensive coverage of many parts of the United States became.

On April 16, 2008, Google fully integrated Street View into Google Earth 4.3.

In response to complaints about privacy, on May 12, 2008 Google announced in its "latlong" blog that it had introduced face-blurring technology for its images of Manhattan. It eventually applied the technology to all locations.
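Face blurring of this kind rests on two steps: detecting face regions, then irreversibly degrading the pixels inside them. Google has not published its pipeline; as an illustration of the second step only, here is a minimal pure-Python box blur applied to a rectangular region of a grayscale image (all names and the test image are our own):

```python
def blur_region(img, top, left, h, w, k=1):
    """Box-blur a rectangular region of a grayscale image (list of lists).

    Averaging each pixel with its (2k+1)x(2k+1) neighbourhood destroys
    fine detail (an identifiable face) while keeping coarse structure --
    the same idea, much simplified, behind blurring detected faces.
    """
    out = [row[:] for row in img]
    H, W = len(img), len(img[0])
    for y in range(top, min(top + h, H)):
        for x in range(left, min(left + w, W)):
            ys = range(max(0, y - k), min(H, y + k + 1))
            xs = range(max(0, x - k), min(W, x + k + 1))
            vals = [img[yy][xx] for yy in ys for xx in xs]
            out[y][x] = sum(vals) // len(vals)
    return out

# A synthetic 8x8 "image"; blur only the 4x4 block starting at (2, 2).
img = [[(x * 53 + y * 97) % 256 for x in range(8)] for y in range(8)]
blurred = blur_region(img, 2, 2, 4, 4)
```

Pixels outside the chosen rectangle are left untouched, so the rest of the scene stays sharp.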

## Brainbow: A Colorful Technique to Visualize Brain Circuitry November 2007

Jeff W. Lichtman

Joshua R. Sanes

Three brainbows of mouse neurons from Lichtman and Sanes, 2008.

A. A motor nerve innervating ear muscle

B. An axon tract in the brain stem

C. The hippocampal dentate gyrus

In November 2007 Jeff W. Lichtman and Joshua R. Sanes, both professors of Molecular & Cellular Biology in the Department of Neurobiology at Harvard Medical School, and colleagues, published "Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system," Nature 450 (7166): 56–62. doi:10.1038/nature06293. This described the visualization process they called "Brainbow."

"Detailed analysis of neuronal network architecture requires the development of new methods. Here we present strategies to visualize synaptic circuits by genetically labelling neurons with multiple, distinct colours. In Brainbow transgenes, Cre/lox recombination is used to create a stochastic choice of expression between three or more fluorescent proteins (XFPs). Integration of tandem Brainbow copies in transgenic mice yielded combinatorial XFP expression, and thus many colours, thereby providing a way to distinguish adjacent neurons and visualize other cellular interactions. As a demonstration, we reconstructed hundreds of neighbouring axons and multiple synaptic contacts in one small volume of a cerebellar lobe exhibiting approximately 90 colours. The expression in some lines also allowed us to map glial territories and follow glial cells and neurons over time in vivo. The ability of the Brainbow system to label uniquely many individual cells within a population may facilitate the analysis of neuronal circuitry on a large scale." (From the Nature abstract).

## ImageNet, an Image Database and Ontology 2008

In 2008 Principal Investigators Li Fei-Fei of Stanford Vision Lab and Kai Li of the Department of Computer Science at Princeton, and associates, advisors and friends, began building ImageNet, an image database and ontology, through a crowdsourcing process. In October 2013 the database contained 14,197,122 images, with 21,841 synsets indexed.

The ImageNet database is organized according to the WordNet hierarchy.

"Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a 'synonym set' or 'synset'. There are more than 100,000 synsets in WordNet, majority of them are nouns (80,000+). In ImageNet, we aim to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. In its completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy."

Among its many applications, ImageNet provides a standard by which the accuracy of image recognition software can be measured.
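ImageNet keys each synset by its WordNet ID ("wnid"): the part-of-speech letter ('n' for noun) followed by the eight-digit WordNet offset of the synset. A minimal sketch of that convention follows; the example wnid and the `<wnid>_<number>.JPEG` filename pattern follow the publicly released dataset, but treat the details here as illustrative rather than authoritative:

```python
import re

# A wnid such as "n02084071" = POS letter + 8-digit WordNet offset.
WNID_RE = re.compile(r"^([a-z])(\d{8})$")

def parse_wnid(wnid):
    """Split a wnid like 'n02084071' into (pos, offset)."""
    m = WNID_RE.match(wnid)
    if not m:
        raise ValueError(f"not a wnid: {wnid!r}")
    return m.group(1), int(m.group(2))

def image_name(wnid, image_id):
    # Released ImageNet images are conventionally named <wnid>_<number>.JPEG
    return f"{wnid}_{image_id}.JPEG"

pos, offset = parse_wnid("n02084071")          # the 'dog' noun synset
print(pos, offset, image_name("n02084071", 1234))
```

Because the wnid embeds the WordNet offset, an image filename alone is enough to locate its concept in the WordNet hierarchy.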

## Viewing the Illustrations of a Journal Article in Three Dimensions September 30, 2008

On September 30, 2008 the Optical Society and the National Library of Medicine announced Interactive Science Publishing.

" 'ISP' represents a new direction for OSA publications. The ISP articles, which appear in OSA journals, link out to large 2D and 3D datasets—such as a CT scan of the human chest—that can be viewed interactively with special software developed by OSA in cooperation with Kitware, Inc., and the National Library of Medicine."

## First Images of Extra-Solar Planets Taken in the Visible Spectrum: Planets Located 130 Light-Years from Earth November 13, 2008

On November 13, 2008 NASA and the Lawrence Livermore National Laboratory announced the first-ever visible-light images of extrasolar planets. The images were captured by the Gemini North and Keck telescopes on the Mauna Kea mountaintop in Hawaii.

"British and American researchers snapped the first ever visible-light pictures of three extrasolar planets orbiting the star HR8799.  HR8799 is about 1.5 times the size of the sun, located 130 light-years away in the Pegasus constellation.  Observers can probably see this star through binoculars, scientists said.

"To identify the planets, researchers compared images of the system, known to contain planets HF8799b, HF8799c, and HF8799d.  In each image faint objects were detected, and by comparing images from over the years, it was confirmed that these were the planets in their expected positions and that they orbit their star in a counterclockwise direction.

"NASA's Hubble Space Telescope at about the same time picked up images of a fourth planet, somewhat unexpectedly.  The new planet, Fomalhaut b orbits the bright southern star Fomalhaut, part of the constellation Piscis Australis (Southern Fish) and is relatively massive -- about three times the size of Jupiter.  The planet orbits 10.7 billion miles from its home star and is approximately 25 light-years from Earth."  (quoations from Daily Tech November 16, 2008).

## Google Earth Incorporates Historical Imagery February 2, 2009

On February 2, 2009 Google launched Google Earth 5.0. Among the most significant features were Historical Imagery, Touring, and 3D Mars.

" ♦ Historical Imagery: Until today, Google Earth displayed only one image of a given place at a given time. With this new feature, you can now move back and forth in time to reveal imagery from years and even decades past, revealing changes over time. Try flying south of San Francisco in Google Earth and turning on the new time slider (click the "clock" icon in the toolbar) to witness the transformation of Silicon Valley from a farming community to the tech capital of the world over the past 50 years or so.

" ♦ Touring: One of the key challenges we have faced in developing Google Earth has been making it easier for people to tell stories. People have created wonderful layers to share with the world, but they have often asked for a way to guide others through them. The Touring feature makes it simple to create an easily sharable, narrated, fly-through tour just by clicking the record button and navigating through your tour destinations.

" ♦ 3D Mars: This is the latest stop in our virtual tour of the galaxies, made possible by a collaboration with NASA. By selecting "Mars" from the toolbar in Google Earth, you can access a 3D map of the Red Planet featuring the latest high-resolution imagery, 3D terrain, and annotations showing landing sites and lots of other interesting features" (Official Google Blog, http://googleblog.blogspot.com/2009/02/dive-into-new-google-earth.html, accessed 11-29-2010).

## The Human Connectome Project July 2009

The Human Connectome Project, a five-year project sponsored by sixteen components of the National Institutes of Health (NIH) split between two consortia of research institutions, was launched as the first of three Grand Challenges of the National Institutes of Health's Blueprint for Neuroscience Research.

The project was described as "an ambitious effort to map the neural pathways that underlie human brain function. The overarching purpose of the Project is to acquire and share data about the structural and functional connectivity of the human brain. It will greatly advance the capabilities for imaging and analyzing brain connections, resulting in improved sensitivity, resolution, and utility, thereby accelerating progress in the emerging field of human connectomics. Altogether, the Human Connectome Project will lead to major advances in our understanding of what makes us uniquely human and will set the stage for future studies of abnormal brain circuits in many neurological and psychiatric disorders" (http://www.humanconnectome.org/consortia/, accessed 12-28-2010).

## Imaging a Molecule One Million Times Smaller Than a Grain of Sand August 28, 2009

On August 28, 2009 IBM Research – Zurich scientists Leo Gross, Fabian Mohn, Nikolaj Moll and Gerhard Meyer, in collaboration with Peter Liljeroth of Utrecht University, published "The Chemical Structure of a Molecule Resolved by Atomic Force Microscopy," Science, 2009; 325(5944): 1110 DOI: 10.1126/science.1176210

Using an atomic force microscope operated in an ultrahigh vacuum and at very low temperature (–268 °C, about –450 °F), the scientists imaged the chemical structure of individual pentacene molecules. For the first time, they were able to look through the electron cloud and see the atomic backbone of an individual molecule.

The abstract of the article is:

"Resolving individual atoms has always been the ultimate goal of surface microscopy. The scanning tunneling microscope images atomic-scale features on surfaces, but resolving single atoms within an adsorbed molecule remains a great challenge because the tunneling current is primarily sensitive to the local electron density of states close to the Fermi level. We demonstrate imaging of molecules with unprecedented atomic resolution by probing the short-range chemical forces with use of noncontact atomic force microscopy. The key step is functionalizing the microscope’s tip apex with suitable, atomically well-defined terminations, such as CO molecules. Our experimental findings are corroborated by ab initio density functional theory calculations. Comparison with theory shows that Pauli repulsion is the source of the atomic resolution, whereas van der Waals and electrostatic forces only add a diffuse attractive background."

♦ In December 2013 a video of the scientists discussing and explaining this discovery at IBM's Press Room was available at this link.

## David Hockney's iPhone Art October 22, 2009

On October 22, 2009 Lawrence Weschler, director of the New York Institute for the Humanities at New York University, published "David Hockney's iPhone Passion," New York Review of Books LVI, no. 16, 35.

Hockney had a history of exploiting new technologies in his art:

"Hockney continued to explore other media besides painting, most notably photography. From 1982-86, he created some of his best-known and most iconographic work — his “joiners,” large composite landscapes and portraits made up of hundreds or thousands of individual photographs. Hockney initially used a Polaroid camera for the photos, switching to a 35 mm camera as the works grew larger and more complex. In interviews, Hockney related the “joiners” to cubism, pointing out that they incorporate elements that a traditional photograph does not possess — namely time, space, and narrative.

"Always willing to adopt new techniques, in 1986 Hockney began producing art with color photocopiers. He has also incorporated fax machines (faxing art to an exhibition in Brazil, for example) and computer-generated images (most notably Quantel Paintbox, a computer system often used to make graphics for television shows) into his work" (http://www.pbs.org/wnet/americanmasters/episodes/david-hockney/the-colors-of-music/103/, accessed 01-09-2010).

## Google Introduces Google Goggles December 8, 2009

On December 8, 2009 Google introduced Google Goggles, an image recognition and search technology for the Android mobile device operating system. If you photographed certain types of objects with your mobile phone, the program would recognize them and automatically display links to relevant information on the Internet. If you pointed your phone at a building, the program would identify it by GPS, and clicking on the building's name would bring up relevant Internet links.

♦ On May 7, 2010 you could watch a video describing the features of Google Goggles at this link.

## The Vatican Library Plans the Scanning of all its Manuscripts into the FITS Document Format March 24, 2010

"An initiative of the Vatican Library Digital manuscripts

"by Cesare Pasini

"The digitization of 80,000 manuscripts of the Vatican Library, it should be realized, is not a light-hearted project. Even with only a rough calculation one can foresee the need to reproduce 40 million pages with a mountain of computer data, to the order of 45 petabytes (that is, 45 million billion bytes). This obviously means pages variously written and illustrated or annotated, to be photographed with the highest definition, to include the greatest amount of data and avoid having to repeat the immense undertaking in the future.

"And these are delicate manuscripts, to be treated with care, without causing them damage of any kind. A great undertaking for the benefit of culture and in particular for the preservation and conservation of the patrimony entrusted to the Apostolic Library, in the tradition of a cultural service that the Holy See continues to express and develop through the centuries, adapting its commitment and energy to the possibilities offered by new technologies.

"The technological project of digitization with its various aspects is now ready. In the past two years, a technical feasibility study has been prepared with the contribution of the best experts, internal, external and also international. This resulted in a project of a great and innovative value from various points of view: the realization of the photography, the electronic formats for conservation, the guaranteed stability of photographs over time, the maintenance and management of the archives, and so forth.

"This project may be achieved over a span of 10 years divided into three phases, with possible intervals between them. In a preliminary phase the involvement of 60 people is planned, including photographers and conservator-verifiers, in the second and third phases at least 120. Before being able to initiate an undertaking of this kind, which is causing some anxiety to those in charge of the library (and not only to them!), naturally it will be necessary to find the funds. Moves have already been made in this direction with some positive results.

"The second announcement is that some weeks ago the “test bed” was set up; in other words the “bench test” that will make it possible to try out and examine the whole structure of the important project that has been studied and formulated so as to guarantee that it will function properly when undertaken in its full breadth.

"The work of reproduction uses two different machines, depending on the different types of material to be reproduced: one is a Metis Systems scanner, kindly lent to us free of charge by the manufacturers, and a 50 megapixel Hasselblad digital camera. Digitized images will be converted to the Flexible Image Transport System (FITS), a non-proprietary format, is extremely simple, was developed a few decades ago by NASA. It has been used for more than 40 years for the conservation of data concerning spatial missions and, in the past decade, in astrophysics and nuclear medicine. It permits the conservation of images with neither technical nor financial problems in the future, since it is systematically updated by the international scientific community.

"In addition to the servers that collect the images in FITS format accumulated by the two machines mentioned, another two servers have been installed to process the data to make it possible to search for images both by the shelf mark and the manuscript's descriptive elements, and also and above all by a graphic pattern, that is, by looking for similar images (graphic or figurative) in the entire digital memory.

"The latter instrument, truly innovative and certainly interesting for all who intend to undertake research on the Vatican's manuscripts – only think of when it will be possible to do such research on the entire patrimony of manuscripts in the Library! – was developed from the technology of the Autonomy Systems company, a leading English firm in the field of computer science, to which, moreover, we owe the entire funding of the “test bed”.

"For this “bench test”, set up in these weeks, 23 manuscripts are being used for a total of 7,500 digitized and indexed pages, with a mountain of computer data of about 5 terabytes (about 5,000 billion bytes).

"The image of the mustard seed springs to mind: the “text bed” is not much more in comparison with the immensity of the overall project. But we know well that this seed contains an immense energy that will enable it to grow, to become far larger than the other plants and to give hospitality to the birds of the air. In accepting the promise guaranteed in the parable, let us also give hope of it to those who await the results of this project's realization" (http://www.vaticanlibrary.va/home.php?, pag=newsletter_art_00087&BC=11, accessed 03-24-2010).

## Google Acknowledges that it Collected Wi-Fi Information Along with Cartographic and Imaging Information April 27 – June 10, 2010

"Over the weekend, there was a lot of talk about exactly what information Google Street View cars collect as they drive our streets. While we have talked about the collection of WiFi data a number of times before--and there have been stories published in the press--we thought a refresher FAQ pulling everything together in one place would be useful. This blog also addresses concerns raised by data protection authorities in Germany.

"What information are your cars collecting?

"We collect the following information--photos, local WiFi network data and 3-D building imagery. This information enables us to build new services, and improve existing ones. Many other companies have been collecting data just like this for as long as, if not longer, than Google.

"♦Photos: so that we can build Street View, our 360 degree street level maps. Photos like these are also being taken by TeleAtlas and NavTeq for Bing maps. In addition, we use this imagery to improve the quality of our maps, for example by using shop, street and traffic signs to refine our local business listings and travel directions;

"♦WiFi network information: which we use to improve location-based services like search and maps. Organizations like the German Fraunhofer Institute and Skyhook already collect this information globally;

"♦and 3-D building imagery: we collect 3D geometry data with low power lasers (similar to those used in retail scanners) which help us improve our maps. NavTeq also collects this information in partnership with Bing. As does TeleAtlas.

"What do you mean when you talk about WiFi network information?

"WiFi networks broadcast information that identifies the network and how that network operates. That includes SSID data (i.e. the network name) and MAC address (a unique number given to a device like a WiFi router).

"Networks also send information to other computers that are using the network, called payload data, but Google does not collect or store payload data.*

"But doesn’t this information identify people?

"MAC addresses are a simple hardware ID assigned by the manufacturer. And SSIDs are often just the name of the router manufacturer or ISP with numbers and letters added, though some people do also personalize them. However, we do not collect any information about householders, we cannot identify an individual from the location data Google collects via its Street View cars.

"Is it, as the German DPA states, illegal to collect WiFi network information?

"We do not believe it is illegal--this is all publicly broadcast information which is accessible to anyone with a WiFi-enabled device. Companies like Skyhook have been collecting this data cross Europe for longer than Google, as well as organizations like the German Fraunhofer Institute.

"Why did you not tell the DPAs that you were collecting WiFi network information?

"Given it was unrelated to Street View, that it is accessible to any WiFi-enabled device and that other companies already collect it, we did not think it was necessary. However, it’s clear with hindsight that greater transparency would have been better.

"Why is Google collecting this data?

"The data which we collect is used to improve Google’s location based services, as well as services provided by the Google Geo Location API. For example, users of Google Maps for Mobile can turn on “My Location” to identify their approximate location based on cell towers and WiFi access points which are visible to their device. Similarly, users of sites like Twitter can use location based services to add a geo location to give greater context to their messages.

"Can this data be used by third parties?

"Yes--but the only data which Google discloses to third parties through our Geo Location API is a triangulated geo code, which is an approximate location of the user’s device derived from all location data known about that point. At no point does Google publicly disclose MAC addresses from its database (in contrast with some other providers in Germany and elsewhere).

"Do you publish this information?

On June 9, 2010 Google announced in its Official Blog that it had "mistakenly included code" in its software that collected "samples of payload data" from unencrypted WiFi networks, but not from encrypted WiFi networks.  It also announced that in response to requests from the Irish Data Protection Authority it was deleting payload data collected from Irish WiFi networks.

## Google Introduces a Translation Feature for Google Goggles May 6, 2010

On May 6, 2010 Google announced a translation feature for Google Goggles, the image recognition and search feature available on Android-based mobile devices.

"Here’s how it works:

"Point your phone at a word or phrase. Use the region of interest button to draw a box around specific words Press the shutter button

"If Goggles recognizes the text, it will give you the option to translate

"Press the translate button to select the source and destination languages."

"Today Goggles can read English, French, Italian, German and Spanish and can translate to many more languages. We are hard at work extending our recognition capabilities to other Latin-based languages. Our goal is to eventually read non-Latin languages (such as Chinese, Hindi and Arabic) as well."

## "The First Image of the Entire Universe" July 5, 2010

On July 5, 2010, from its vantage point roughly 1,000,000 miles from Earth, the European Space Agency's Planck space observatory completed its first image of the entire sky, a full-sky microwave survey popularly described as "the first image of the entire universe."

## NCBI Introduces Images, a Database of More than 2.5 Million Images in Biomedical Literature October 2010

In October 2010 the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) at the National Institutes of Health (NIH), introduced Images, an online database of more than 2.5 million images and figures from medical and life sciences journals.

## Instagram is Founded October 2010 – December 17, 2012

In October 2010 Kevin Systrom and Mike Krieger launched Instagram, an online photo-sharing and social networking service that enabled users to take a picture, apply a digital filter to it, and share it on a variety of networking services, including its own. Instagram was purchased in April 2012 by Facebook for approximately $1 billion in cash and stock. After regulatory approval the deal closed in September 2012, by which time Instagram had over 100 million users.

"On December 17, 2012, Instagram updated its Terms of Service to allow Instagram the right to sell users' photos to third parties without notification or compensation after January 16, 2013. The criticism from privacy advocates, consumers and even National Geographic, which suspended its Instagram account, prompted Instagram to issue a statement retracting the controversial terms. Instagram is currently working on developing new language to replace the disputed terms of use" (Wikipedia article on Instagram, accessed 12-22-2012).

## The First MRI Video of Childbirth November 2010 – June 2012

In November 2010 the first video of a woman giving birth in an open MRI machine was taken at the Charité Hospital in Berlin, Germany. The team led by Christian Bamberg, M.D. first published the results as "Human birth observed in real-time open magnetic resonance imaging" in the American Journal of Obstetrics & Gynecology in January 2012. Supplementary material, including the video of the final 45 minutes of labor, was published as Vol. 206, pp. 505.e1–505.e6, June 2012.

## Google Earth 6: Enhanced 3D, 3D Trees, Enhanced Historical Imagery November 30, 2010

Google Earth 6, introduced on November 30, 2010, enabled the user to "fly from outer space down to the streets with the new Street View and easily navigate. . . . Switch to ground-level view to see the same location in 3D."
The program also introduced 3D trees in locations all over the world, and a more user-friendly interface for the historical imagery, enabling comparison of recent and historical satellite imagery when available.

## The Google Earth Engine December 2, 2010

On December 2, 2010 Google introduced the Google Earth Engine, a cloud computing platform for processing satellite imagery and other Earth observation data. The engine provides access to a large warehouse of satellite imagery and the computational power needed to analyze those images. Initial applications of the platform included mapping the forests of Mexico, identifying water in the Congo basin, and detecting deforestation in the Amazon.

"Google Earth Engine brings together the world's satellite imagery—trillions of scientific measurements dating back more than 25 years—and makes it available online with tools for scientists, independent researchers, and nations to mine this massive warehouse of data to detect changes, map trends and quantify differences to the earth's surface" (http://earthengine.googlelabs.com/#intro).

"On February 11, [2013] NASA launched Landsat 8, the latest in a series of Earth observation satellites which started collecting information about the Earth in 1972. We're excited to announce that on May 30th, the USGS began releasing operational data from the Landsat 8 satellite, which are now available on Earth Engine. Explore the gallery below to see how we've used Landsat data to visualize thirty years of change across the entire planet. Congratulations to NASA and USGS for a successful launch!" (http://earthengine.google.org/#intro, accessed 10-20-2013).

## Scanning Books in Libraries Instead of Making Photocopies 2011

Ristech, whose motto was "Automation of Digitization," introduced the Book2net Spirit, which they described as:

"the very first entry level high resolution book scanner. The Spirit is designed to replace photocopies in Public, Government and Corporate Libraries.
By eliminating the need for paper, toner and maintenance – Libraries can reduce cost. The Spirit can easily be attached to a cost recovery system or coin-op to generate revenue.

"Key Features:

• Public Use Walk-up BookScanner
• High Resolution Images
• 1 second image capture
• Scan to USB or Email
• Embedded Touch Screen PC included"

## Probably the Largest Digital Image January 13, 2011

The Sloan Digital Sky Survey-III (SDSS-III), a major multi-filter imaging and spectroscopic redshift survey using a dedicated 2.5-m wide-angle optical telescope at Apache Point Observatory, Sunspot, New Mexico, released the largest digital color image of the sky, assembled from millions of 2.8 megapixel images and consisting of more than a trillion pixels. This may be the largest digital image produced to date.

## The Google Art Project February 1, 2011

Bringing technology developed for Street View indoors, Google introduced the Art Project. Simultaneously they introduced an Art Project channel on YouTube. These projects allowed you to take virtual tours of major museums, view relevant background material about art, store high resolution images, and share images and commentaries with friends. Each of the 17 museums involved also chose one artwork to be photographed using gigapixel photo capturing technology, resulting in an image containing seven billion pixels and providing detail not visible to the naked eye.

## The Largest Interior Image: The Strahov Monastery Library March 29, 2011

360cities.net posted a 40 gigabyte panorama of the baroque Philosophical Hall containing 42,000 volumes in the Strahov Monastery Library in Prague. The spectacular image is particularly useful since tourists visiting the monastery may only glimpse this library room from one roped-off entrance. When the image was posted on YouTube and on 360cities.net it was the largest interior panoramic image taken to date, showing all aspects of the room in the smallest detail.
♦ An article published in Wired magazine on March 29, 2011 provided production details, multiple images, and a video showing how the panorama was created.

## Snapchat: Communication and Automatic Destruction of Information September 2011

In September 2011 Stanford University students Evan Spiegel and Robert Murphy produced the initial release of the photo messaging application Snapchat, famously launching the program "from Spiegel's father's living room." Users of the app take photos, record videos, add text and drawings, and send them to a controlled list of recipients. Photographs and videos sent through the app are known as "Snaps". Users set a time limit for how long recipients can view their Snaps, after which the photos or videos are hidden from the recipient's device and deleted from Snapchat's servers. In December 2013 the range was from 1 to 10 seconds. In November 2013 it was reported that Snapchat was sharing 400 million photos per day—more than Facebook.

"Founder Evan Spiegel explained that Snapchat is intended to counteract the trend of users being compelled to manage an idealized online identity of themselves, which he says has "taken all of the fun out of communicating". Snapchat can locate a user's friends through the user's smartphone contact list. Research conducted in the UK has shown that, as of June 2013, half of all 18 to 30-year-old respondents (47 percent) have received nude pictures, while 67 percent had received images of "inappropriate poses or gestures".

"Snapchat launched the "Snapchat Stories" feature in early October 2013 and released corresponding video advertisements with the tagline "It's about time." The feature allows users to create links of shared content that can be viewed an unlimited number of times over a 24-hour period. The "stories" are simultaneously shared with the user's friends and content remains for 24 hours before disappearing.
"Another controversy surrounding the rising popularity of Snapchat in the United States relates to a phenomenon known as sexting. This involves the sending and receiving of explicit images that often involve some degree of nudity. Because the application is commonly used by younger generations, often below the age of eighteen, the question has been raised whether or not certain users are technically distributing child pornography. For this reason, many adults disapprove of their children's use of the application. Snapchat's developers continue to insist that the application is not sexting-friendly and that they do not condone any kind of pornographic use.

"On November 14, 2013, police in Laval, Quebec, Canada arrested 10 boys aged 13 to 15 on child pornography charges after the boys allegedly captured and shared explicit photos of teenage girls sent through Snapchat as screenshots.

"In February 2013, a study by market research firm Survata found that mobile phone users are more likely to "sext over SMS than over Snapchat" (Wikipedia article on Snapchat, accessed 12-12-2013).

### 2012 – 2016

## NYPL Labs Introduces the Stereogranimator January 2012

In January 2012 NYPL Labs, the digital library development division of the New York Public Library, introduced the Stereogranimator, a website and collaborative program to turn digital copies of analog stereographic photograph pairs into shareable 3D web formats.

"Stereographs, produced by the millions between the 1850s and the 1930s, were a wildly popular form of entertainment, giving viewers a taste of the kind of richly rounded images now readily available on screens of all sizes. No motion was involved, however. Instead, viewers looked through a stereoscope at two slightly different photographs of the same scene, which the brain was tricked into perceiving as a single three-dimensional image.

"The Stereogranimator . . .
uses GIF animation to create the illusion of three-dimensionality by flickering back and forth between the two images. Users can adjust the speed, as well as the spatial jump between the images. The tool also generates an old-fashioned anaglyph, one of those blurry, two-toned images that snap into rounded focus when viewed through a stereoscope or vintage blue-red 3-D glasses. . . ." (http://artsbeat.blogs.nytimes.com/2012/01/26/3-d-it-yourself-thanks-to-new-library-site/, accessed 11-02-2013).

The Stereogranimator grew out of a project originated by writer / photographer Joshua Heineman, who in 2008 observed that "The parallax effect of minor changes between the two perspectives created a sustained sense of dimension that approximated the effect of stereo viewing. When I realized how the effect was working, I set about discovering if I could capture the same illusion by layering both sides of an old stereograph in Photoshop & displaying the result as an animated gif. The effect was more jarring than through a stereoscope but no less magic" (http://stereo.nypl.org/about, accessed 11-02-2013).

## Google Introduces the Knowledge Graph May 16, 2012

"The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, buildings, geographical features, movies, celestial objects, works of art and more—and instantly get information that’s relevant to your query. This is a critical first step towards building the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.

"Google’s Knowledge Graph isn’t just rooted in public sources such as Freebase, Wikipedia and the CIA World Factbook. It’s also augmented at a much larger scale—because we’re focused on comprehensive breadth and depth.

It currently contains more than 500 million objects, as well as more than 3.5 billion facts about and relationships between these different objects. And it’s tuned based on what people search for, and what we find out on the web.

"The Knowledge Graph enhances Google Search in three main ways to start:

"1. Find the right thing. Language can be ambiguous—do you mean Taj Mahal the monument, or Taj Mahal the musician? Now Google understands the difference, and can narrow your search results just to the one you mean—just click on one of the links to see that particular slice of results.

"2. Get the best summary. With the Knowledge Graph, Google can better understand your query, so we can summarize relevant content around that topic, including key facts you’re likely to need for that particular thing. For example, if you’re looking for Marie Curie, you’ll see when she was born and died, but you’ll also get details on her education and scientific discoveries.

"3. Go deeper and broader. Finally, the part that’s the most fun of all—the Knowledge Graph can help you make some unexpected discoveries. You might learn a new fact or new connection that prompts a whole new line of inquiry. Do you know where Matt Groening, the creator of the Simpsons (one of my all-time favorite shows), got the idea for Homer, Marge and Lisa’s names? It’s a bit of a surprise.

"We’ve always believed that the perfect search engine should understand exactly what you mean and give you back exactly what you want. And we can now sometimes help answer your next question before you’ve asked it, because the facts we show are informed by what other people have searched for. For example, the information we show for Tom Cruise answers 37 percent of next queries that people ask about him. In fact, some of the most serendipitous discoveries I’ve made using the Knowledge Graph are through the magical “People also search for” feature.

One of my favorite books is The White Tiger, the debut novel by Aravind Adiga, which won the prestigious Man Booker Prize. Using the Knowledge Graph, I discovered three other books that had won the same prize and one that won the Pulitzer. I can tell you, this suggestion was spot on!"

## A 3D Virtual Reality Reader for eBooks October 2012

In October 2012 the Münchener DigitalisierungsZentrum of the Bayerische Staatsbibliothek, München (Munich Digitization Center of the Bavarian State Library in Munich) introduced the 3D-BSB Explorer, a gesture-controlled 3D interactive book reader developed jointly by the center and the Fraunhofer Heinrich Hertz Institute.

"For the first time ever, magnificent over one thousand year old books are also on view in a digital 3D format at the 'Magnificent Manuscripts – Treasures of Book Illumination' exhibition at the Kunsthalle of the Hypo Cultural Foundation in Munich. The Interactive 3D BookReader forms part of the exhibition which opens on Friday, 19 October 2012 at the Kunsthalle of the Hypo Cultural Foundation in Munich.

"Allowing visitors to leaf through volumes illuminated in gold and encrusted with precious stones is something that most museums simply cannot permit. Secure in their glass cases, these exhibits seem remote and untouchable. Yet with the Interactive 3D BookReader, developed by the Fraunhofer Heinrich Hertz Institute in partnership with the Bavarian State Library, visitors can now not only view digitalized books in 3D without any need for special glasses, but browse through them, enlarge them and rotate them as well. The Interactive 3D BookReader opens up virtual access to these magnificent treasures of the art of illumination. Visitors don’t even need to touch the screen as an infrared camera captures the movements of one or more of their fingers while image-processing software identifies their position in space in real time. This is how they can move, browse, rotate and scale the exhibits shown on the screen.

Even the slightest of finger movements can be translated into movements of the cursor. The monitor screen of the Interactive 3D BookReader shows the user's right and left eyes two slightly offset images which combine to give an in-depth impression. The two stereo views are adapted to correspond to the viewer's actual position. This means that visitors don't need special 3D glasses to view the books in three dimensions" (http://www.hhi.fraunhofer.de/media/press/experience-magnificent-books-in-digital-3d.html, accessed 02-23-2013).

In February 2013 a video demonstration of the 3D-BSB Explorer was available on YouTube at this link: http://www.youtube.com/watch?v=LpSP2ojWtIs&feature=youtu.be

## The First 3D Photo Booth Prints Personal Miniature Figures November 12, 2012 – August 9, 2013

On November 12, 2012 designboom.com reported on a limited-edition pop-up installation developed by the Japanese firm omote3D.com that reproduces detailed personal miniature action figures.

"ranging from 10 to 20 centimetres in height, the system utilizes a three-dimensional camera and printer to process and scan users, creating custom scale reproductions. The three-step procedure requires the user to keep still for 15 minutes while the scanners capture the data" (http://www.designboom.com/art/personal-action-figures-printed-at-a-japanese-photo-booth/, accessed 08-11-2013).

On August 9, 2013 designboom.com reported on an expansion of the concept developed and commercialized by Twinkind.com in Hamburg, Germany.

"ever imagined a true-to-life miniature version of yourself? well - now it's possible. these 3D printed portrait figurines by twinkind are made using state-of-the art 3D scanning and color printing technology. the miniatures are available to anyone who can make it to twinkind's studio in hamburg, with a 15cm tall figure costing €225 and a 35cm model coming in at €1290.

several other size options are also available" (http://www.designboom.com/technology/3d-printed-portrait-figurines-by-twinkind/?utm_campaign=daily&utm_medium=e-mail&utm_source=subscribers, accessed 08-11-2013).

## After Cell Phones with Cameras, Android Cameras Without Cellphones Are Introduced December 19, 2012

Once cell phone cameras with their very limited lenses and image processors became the most popular means of taking photographs, mainly because cell phone images could immediately be emailed or posted to websites and social media, it was probably inevitable that camera companies would introduce more full-featured cameras incorporating computers that could be connected to the Internet through Wi-Fi "hot spots" or cellular connections. The first models, offered at the end of 2012, were full-featured and overpriced, but the concept appeared to have great potential:

"New models from Nikon and Samsung are obvious graduates of the 'if you can’t beat ’em, join ’em' school. The Nikon Coolpix S800C ($300) and Samsung’s Galaxy Camera ($500 from AT&T, $550 from Verizon) are fascinating hybrids. They merge elements of the cellphone and the camera into something entirely new and — if these flawed 1.0 versions are any indication — very promising.

"From the back, you could mistake both of these cameras for Android phones. The big black multitouch screen is filled with app icons. Yes, app icons. These cameras can run Angry Birds, Flipboard, Instapaper, Pandora, Firefox, GPS navigation programs and so on. You download and run them exactly the same way. (That’s right, a GPS function. “What’s the address, honey? I’ll plug it into my camera.”) But the real reason you’d want an Android camera is wirelessness. Now you can take a real photo with a real camera — and post it or send it online instantly. You eliminate the whole 'get home and transfer it to the computer' step.

"And as long as your camera can get online, why stop there? These cameras also do a fine job of handling Web surfing, e-mail, YouTube videos, Facebook feeds and other online tasks. Well, as fine a job as a phone could do, anyway.

"You can even make Skype video calls, although you won’t be able to see your conversation partner; the lens has to be pointing toward you. Both cameras get online using Wi-Fi hot spots. The Samsung model can also get online over the cellular networks, just like a phone, so you can upload almost anywhere" (Pogue's Posts, NYTimes.com, 12-19-2012, accessed 12-21-2012).

## Making the iPhone 5 Look and Feel Like a Traditional Camera: the gizmon iCa case February 2013

After cell phone cameras became the most popular way of taking pictures, it was probably inevitable that a way would be found to make them look and act like traditional cameras:

"now available for the iPhone 5, the 'gizmon iCa' polycarbonate case transforms your smartphone into a working rangefinder camera. a working shutter button is built into the top of the case - making it easy to capture images without having to pre-load the camera interface app. incorporated with a viewfinder on top of the enclosure - the design helps eliminate glare in direct sunlight, as with an additional lens opening from the flash unit. the case also ships with a second interchangeable section that allows for the fitting of any of the accessory lenses" (http://www.designboom.com/technology/the-gizmon-ica-5-case-for-the-iphone-5/, accessed 02-07-2013).

Gizmon, a division of ADPLUS Co. Ltd, Kumamoto-city, Kumamoto, Japan, also produced a series of add-on lenses and filters for the iPhone that could be used without the iCa polycarbonate case.

## Software Turns a Smartphone into a 3D Scanner December 5, 2013

On December 5, 2013 scientists led by Marc Pollefeys, head of the Computer Vision and Geometry Group in the Institute of Visual Computing at ETH Zurich, announced that they had developed an app that turned an ordinary Android smartphone into a 3D scanner. Pollefeys commented that two years earlier, software of this type would have been expected to run only on large computers. "That this works on a smartphone would have been unthinkable."

Rather than taking a regular photograph, a user moves the phone and its camera around the object being scanned, and after a few motions a three-dimensional model appears on the screen. As the user keeps moving the phone, additional images are recorded automatically, extending the wireframe of the virtual object. Because all calculations run in software on the phone, the user gets immediate feedback and can select additional viewpoints to cover missing parts of the rendering. The system uses the phone's inertial sensors to help register the camera views in real time as the phone moves. The resulting 360-degree model can be used for visualization or augmented reality applications, or for rapid prototyping with CNC (Computer Numerical Control) machines and 3D printers.

Because the app worked even in low light conditions, such as in museums and churches, it was suggested that a visitor in a museum could scan a sculpture and consider it later at home or at work.

In December 2013 a YouTube video showing how the 3D scanning app worked, along with examples of 3D-printed objects made from cell phone scans, was available at this link.

## A Neural Network that Reads Millions of Street Numbers January 1, 2014

To read the millions of street numbers on buildings photographed for Google Street View, Google built a neural network whose reading accuracy proved comparable to that of the human operators assigned to the task. The company uses the images to read house numbers and match them to their geolocation, storing the geolocation of each building in its database. Having street numbers matched to physical locations on a map is always useful, but it is particularly useful in places where street numbers are otherwise unavailable, or in places such as Japan and South Korea, where buildings are rarely numbered sequentially along a street but in other ways, such as the order in which they were constructed—a system that makes many buildings extremely hard to find, even for locals.

"Recognizing arbitrary multi-character text in unconstrained natural photographs is a hard problem. In this paper, we address an equally hard sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from Street View imagery. Traditional approaches to solve this problem typically separate out the localization, segmentation, and recognition steps. In this paper we propose a unified approach that integrates these three steps via the use of a deep convolutional neural network that operates directly on the image pixels. We employ the DistBelief implementation of deep neural networks in order to train large, distributed neural networks on high quality images. We find that the performance of this approach increases with the depth of the convolutional network, with the best performance occurring in the deepest architecture we trained, with eleven hidden layers. We evaluate this approach on the publicly available SVHN dataset and achieve over 96% accuracy in recognizing complete street numbers. We show that on a per-digit recognition task, we improve upon the state-of-the-art and achieve 97.84% accuracy. We also evaluate this approach on an even more challenging dataset generated from Street View imagery containing several tens of millions of street number annotations and achieve over 90% accuracy. Our evaluations further indicate that at specific operating thresholds, the performance of the proposed system is comparable to that of human operators. To date, our system has helped us extract close to 100 million physical street numbers from Street View imagery worldwide."

Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, Vinay Shet, "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks," arXiv:1312.6082v2.
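The key idea in the abstract quoted above is that the network predicts the length of the street number and every digit jointly, rather than segmenting the image first. As a rough illustration of the final read-out step only, here is a minimal sketch in Python; the probabilities, the five-position cap, and the decoding function are illustrative stand-ins, not Google's model:

```python
# Sketch of reading off a street number from the network's outputs:
# one probability distribution over the sequence length, plus one
# distribution over digits 0-9 for each position. All numbers here
# are made-up stand-ins for real softmax outputs.

def decode_street_number(length_probs, digit_probs):
    """length_probs: P(number has i digits) for i = 0..N.
    digit_probs: per-position distributions over the digits 0-9."""
    # Most likely number of digits in the street number.
    n = max(range(len(length_probs)), key=lambda i: length_probs[i])
    # Most likely digit at each of the first n positions.
    digits = [str(max(range(10), key=lambda d: digit_probs[pos][d]))
              for pos in range(n)]
    return "".join(digits)

# Example: the network is most confident the number has 3 digits,
# and the per-position argmaxes spell out "127".
length_probs = [0.01, 0.04, 0.15, 0.70, 0.10]   # lengths 0..4
digit_probs = [[0.02] * 10 for _ in range(4)]
digit_probs[0][1] = 0.8    # position 0 -> '1'
digit_probs[1][2] = 0.8    # position 1 -> '2'
digit_probs[2][7] = 0.8    # position 2 -> '7'
print(decode_street_number(length_probs, digit_probs))  # prints 127
```

The point of the joint formulation is that localization, segmentation, and recognition never happen as separate steps; the convolutional network maps pixels directly to these output distributions.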

## The First Project to Investigate the Use of Instagram During a Social Upheaval February 17 – February 22, 2014

On October 14, 2014 computer scientist and new media theorist Lev Manovich of The Graduate Center, City University of New York informed the Humanist Discussion Group of the project by his Software Studies Initiative entitled The Exceptional & The Everyday: 144 Hours in Kiev. This was the first project to analyze the use of Instagram images during a social upheaval using computational and data visualization techniques. The project explored 13,203 Instagram images shared by 6,165 people in the central area of Kiev, Ukraine during the 2014 Ukrainian revolution, from February 17 to February 22, 2014. Collaborators on the project included Mehrdad Yazdani of the University of California, San Diego, Alise Tifentale, a PhD student in art history at The Graduate Center, City University of New York, and Jay Chow, a web developer in San Diego. The project seems to have been first publicized on the web by Fast Company and The Guardian on October 8, 2014.

"CONTENTS:

Visualizations and Analysis: Visualizing the images and data and interpreting the patterns.

Context and Methods: Brief summary of the events in Kiev during February 17-22, 2014; our research methods.

Iconography of the Revolution: What are the popular visual themes in Instagram images of a revolution? (essay by Alise Tifentale).

The Infra-ordinary City: Representing the ordinary from literature to social media (essay by Lev Manovich).

The Essay: "Hashtag #Euromaidan: What Counts as Political Speech on Instagram?" (guest essay by Elizabeth Losh).

Constructing the dataset: Constructing the dataset for the project; data privacy issues.

References: Bibliography of relevant articles and projects.

PUBLICATION:

Lev Manovich, Alise Tifentale, Mehrdad Yazdani, and Jay Chow. "The Exceptional and the Everyday: 144 Hours in Kiev." The 2nd Workshop on Big Humanities Data held in conjunction with IEEE Big Data 2014 Conference, forthcoming 2014.

"The Exceptional and the Everyday: 144 hours in Kiev continues previous work of our lab (Software Studies Initiative, softwarestudies.com) with visual social media: phototrails.net (analysis and visualization of 2.3 million Instagram photos in 14 global cities, 2013) and selfiecity.net (comparison between 3200 selfie photos shared in six cities, 2014; collaboration with Moritz Stefaner). In the new project we specifically focus on the content of images, as opposed to only their visual characteristics. We use computational analysis to locate typical Instagram compositions and manual analysis to identify the iconography of a revolution. We also explore non-visual data that accompanies the images: most frequent tags; the use of English, Ukrainian and Russian languages; dates and times when images were shared; and their geo-coordinates."

## Selfiecity.net. Analysis and Visualization of Thousands of Selfie Photos. . . . February 25, 2014

On February 25, 2014 I received this email from "new media" theorist Lev Manovich via the Humanist Discussion Group, announcing the launch of a cutting edge website analyzing the "Selfie" phenomenon:

"Date: Sat, 22 Feb 2014 21:00:30 +0000
From: Lev Manovich <manovich@softwarestudies.com>
Subject: Introducing selfiecity.net - analysis and visualization of thousands of selfie photos from five global cities

"Welcome to Selfiecity!
http://selfiecity.net/

I'm excited to announce the launch of our new research project selfiecity.net. The website presents analysis and interactive visualizations of 3,200 Instagram selfie photos, taken between December 4 and 12, 2013, in Bangkok, Berlin, Moscow, New York, and São Paulo.

The project explores how people represent themselves using mobile photography in social media by analyzing the subjects’ demographics, poses, and expressions.

Selfiecity (http://softwarestudies.us2.list-manage.com/track/click?u=67ffe3671ec85d3bb8a9319ca&id=edb72af8ec&e=8a08a35e11) investigates selfies using a mix of theoretic, artistic and quantitative methods:

* Rich media visualizations in the Imageplots section assemble thousands of photos to reveal interesting patterns.
* An interactive component of the website, a custom-made app Selfiexploratory invites visitors to filter and explore the photos themselves.
* Theory and Reflection section of the website contribute to the discussion of the findings of the research. The authors of the essays are art historians Alise Tifentale (The City University of New York, The Graduate Center) and Nadav Hochman (University of Pittsburgh) as well as media theorist Elizabeth Losh (University of California, San Diego).

The project is led by Dr. Lev Manovich, leading expert on digital art and culture; Professor of Computer Science, The Graduate Center, CUNY; Director, Software Studies Initiative."

Considering the phenomenon that selfies had become, I was not surprised when two days later reference was made, also via the Humanist Discussion Group, to "a very active Facebook group https://www.facebook.com/groups/664091916962292/ 'The Selfies Research Network'." When I looked at this page in February 2014 the group had 298 members, mostly from academia, but also including professionals in fields like social media, from many different countries.

## DeepFace, Facial Verification Software Developed at Facebook, Approaches Human Ability March 17, 2014

On March 17, 2014 MIT Technology Review published an article by Tom Simonite on Facebook's facial recognition software, DeepFace, from which I quote:

"Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.

"That’s a significant advance over previous face-matching software, and it demonstrates the power of a new approach to artificial intelligence known as deep learning, which Facebook and its competitors have bet heavily on in the past year (see 'Deep Learning'). This area of AI involves software that uses networks of simulated neurons to learn to recognize patterns in large amounts of data.

"'You normally don’t see that sort of improvement,' says Yaniv Taigman, a member of Facebook’s AI team, a research group created last year to explore how deep learning might help the company (see 'Facebook Launches Advanced AI Effort'). 'We closely approach human performance,' says Taigman of the new software. He notes that the error rate has been reduced by more than a quarter relative to earlier software that can take on the same task.

"Facebook’s new software, known as DeepFace, performs what researchers call facial verification (it recognizes that two images show the same face), not facial recognition (putting a name to a face). But some of the underlying techniques could be applied to that problem, says Taigman, and might therefore improve Facebook’s accuracy at suggesting whom users should tag in a newly uploaded photo.

"However, DeepFace remains purely a research project for now. Facebook released a research paper on the project last week, and the researchers will present the work at the IEEE Conference on Computer Vision and Pattern Recognition in June. 'We are publishing our results to get feedback from the research community,' says Taigman, who developed DeepFace along with Facebook colleagues Ming Yang and Marc’Aurelio Ranzato and Tel Aviv University professor Lior Wolf.

"DeepFace processes images of faces in two steps. First it corrects the angle of a face so that the person in the picture faces forward, using a 3-D model of an 'average' forward-looking face. Then the deep learning comes in as a simulated neural network works out a numerical description of the reoriented face. If DeepFace comes up with similar enough descriptions from two different images, it decides they must show the same face.

"The performance of the final software was tested against a standard data set that researchers use to benchmark face-processing software, which has also been used to measure how humans fare at matching faces.

"Neeraj Kumar, a researcher at the University of Washington who has worked on face verification and recognition, says that Facebook’s results show how finding enough data to feed into a large neural network can allow for significant improvements in machine-learning software. 'I’d bet that a lot of the gain here comes from what deep learning generally provides: being able to leverage huge amounts of outside data in a much higher-capacity learning model,' he says.

"The deep-learning part of DeepFace consists of nine layers of simple simulated neurons, with more than 120 million connections between them. To train that network, Facebook’s researchers tapped a tiny slice of data from their company’s hoard of user images—four million photos of faces belonging to almost 4,000 people. 'Since they have access to lots of data of this form, they can successfully train a high-capacity model,' says Kumar.
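The two-step process described above ends with a simple decision rule: once each aligned face has been mapped to a numerical descriptor, "same person?" reduces to asking whether the two descriptors are similar enough. The following is a minimal sketch of that final verification step only, not Facebook's code; the descriptor vectors and the threshold are illustrative stand-ins:

```python
# Sketch of the verification step: compare two face descriptor
# vectors by cosine similarity and apply a decision threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_face(desc_a, desc_b, threshold=0.8):
    """True if the two descriptors are similar enough to call a match."""
    return cosine_similarity(desc_a, desc_b) >= threshold

# Two descriptors of the same person should point in nearly the
# same direction; a different person's descriptor points elsewhere.
a = [0.9, 0.1, 0.4]
b = [0.85, 0.15, 0.38]
c = [0.1, 0.9, 0.2]
print(same_face(a, b))  # True
print(same_face(a, c))  # False
```

In the real system the descriptors come from the nine-layer network and the threshold is tuned on labeled pairs; only the comparison logic is as simple as shown here.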

## Indexing and Sharing 2.6 Million Images from eBooks in the Internet Archive August 29, 2014

On August 29, 2014 the Internet Archive announced that data mining and visualization expert Kalev Leetaru, Yahoo Fellow at Georgetown University, extracted over 14 million images from two million Internet Archive public domain eBooks spanning over 500 years of content. Of the 14 million images, 2.6 million were uploaded to Flickr, the image-sharing site owned by Yahoo, with a plan to upload more in the near future.

Also on August 29, 2014 BBC.com carried a story entitled "Millions of historic images posted to Flickr," by Leo Kelion, Technology desk editor, from which I quote:

"Mr Leetaru said digitisation projects had so far focused on words and ignored pictures.

" 'For all these years all the libraries have been digitising their books, but they have been putting them up as PDFs or text searchable works,' he told the BBC.

"They have been focusing on the books as a collection of words. This inverts that. . . .

"To achieve his goal, Mr Leetaru wrote his own software to work around the way the books had originally been digitised.

"The Internet Archive had used an optical character recognition (OCR) program to analyse each of its 600 million scanned pages in order to convert the image of each word into searchable text.

"As part of the process, the software recognised which parts of a page were pictures in order to discard them.

"Mr Leetaru's code used this information to go back to the original scans, extract the regions the OCR program had ignored, and then save each one as a separate file in the Jpeg picture format.

"The software also copied the caption for each image and the text from the paragraphs immediately preceding and following it in the book.

"Each Jpeg and its associated text was then posted to a new Flickr page, allowing the public to hunt through the vast catalogue using the site's search tool. . . ."
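The extraction idea in the passage above can be summarized schematically: the OCR pass has already recorded which rectangular regions of each scanned page are pictures, so the software revisits the original scan, crops those regions, and pairs each crop with its caption text. This is a toy sketch of that logic, not Mr Leetaru's code; the page, regions, and caption are illustrative stand-ins:

```python
# Sketch: crop the picture regions the OCR pass flagged as non-text
# out of the original page scan, keeping each region's caption.

def crop(page, box):
    """page: 2D list of pixel rows; box: (top, left, bottom, right)."""
    top, left, bottom, right = box
    return [row[left:right] for row in page[top:bottom]]

def extract_images(page, picture_regions):
    """One cropped sub-image (plus caption) per region the OCR skipped."""
    return [{"image": crop(page, r["box"]), "caption": r["caption"]}
            for r in picture_regions]

# A toy 6x6 "scan" whose pixels encode their own (y, x) coordinates.
page = [[(y, x) for x in range(6)] for y in range(6)]
regions = [{"box": (1, 1, 3, 4), "caption": "Fig. 1. A woodcut."}]
out = extract_images(page, regions)
print(out[0]["caption"])        # Fig. 1. A woodcut.
print(len(out[0]["image"]))     # 2 rows of the crop
```

In the real pipeline each crop would be written out as a JPEG file together with the caption and surrounding paragraphs, then uploaded to Flickr.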

## Google Develops A Neural Image Caption Generator to Translate Images into Words November 17, 2014

Having previously transformed the machine translation process by developing algorithms from vector space mathematics, in November 2014 Oriol Vinyals and colleagues at Google in Mountain View developed a neural image caption generator to translate images into words. Google's machine translation approach is:

"essentially to count how often words appear next to, or close to, other words and then define them in an abstract vector space in relation to each other. This allows every word to be represented by a vector in this space and sentences to be represented by combinations of vectors.

"Google goes on to make an important assumption. This is that specific words have the same relationship to each other regardless of the language. For example, the vector “king - man + woman = queen” should hold true in all languages. . . .

"Now Oriol Vinyals and pals at Google are using a similar approach to translate images into words. Their technique is to use a neural network to study a dataset of 100,000 images and their captions and so learn how to classify the content of images.

"But instead of producing a set of words that describe the image, their algorithm produces a vector that represents the relationship between the words. This vector can then be plugged into Google’s existing translation algorithm to produce a caption in English, or indeed in any other language. In effect, Google’s machine learning approach has learnt to “translate” images into words.

"To test the efficacy of this approach, they used human evaluators recruited from Amazon’s Mechanical Turk to rate captions generated automatically in this way along with those generated by other automated approaches and by humans.

"The results show that the new system, which Google calls Neural Image Caption, fares well. Using a well known dataset of images called PASCAL, Neural Image Caption clearly outperformed other automated approaches. “NIC yielded a BLEU score of 59, to be compared to the current state-of-the-art of 25, while human performance reaches 69,” says Vinyals and co" (http://www.technologyreview.com/view/532886/how-google-translates-pictures-into-words-using-vector-space-mathematics/, accessed 01-14-2015).
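The vector arithmetic underlying this approach, in which "king - man + woman" lands nearest to "queen", can be shown with a toy example. The 2-D vectors below are made up for illustration, not Google's actual embeddings, which have hundreds of dimensions and are learned from text:

```python
# Toy word-vector arithmetic: words live in a shared vector space,
# and analogies become vector addition and subtraction.
import math

vocab = {
    "king":  [0.9, 0.8],   # high "royal", high "male"
    "queen": [0.9, 0.1],   # high "royal", low "male"
    "man":   [0.1, 0.8],
    "woman": [0.1, 0.1],
}

def nearest(vec):
    """Vocabulary word whose vector is closest (Euclidean) to vec."""
    return min(vocab, key=lambda w: math.dist(vec, vocab[w]))

# king - man + woman ...
v = [k - m + w for k, m, w in
     zip(vocab["king"], vocab["man"], vocab["woman"])]
print(nearest(v))  # queen
```

Because the captioning model emits a vector in this same kind of space rather than a fixed set of words, the existing translation machinery can turn that vector into a sentence in English or any other language.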

Vinyals et al, "Show and Tell: A Neural Image Caption Generator" (2014) http://arxiv.org/pdf/1411.4555v1.pdf

"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU score improvements on Flickr30k, from 55 to 66, and on SBU, from 19 to 27" (Abstract).

## A Machine Vision Algorithm Learns to Attribute Paintings to Specific Artists May 2015

In May 2015 Babak Saleh and Ahmed Elgammal of the Department of Computer Science, Rutgers University, described an algorithm that could recognize the style, genre, and artist of a painting.

"Saleh and Elgammal begin with a database of images of more than 80,000 paintings by more than 1,000 artists spanning 15 centuries. These paintings cover 27 different styles, each with more than 1,500 examples. The researchers also classify the works by genre, such as interior, cityscape, landscape, and so on.

"They then take a subset of the images and use them to train various kinds of state-of-the-art machine-learning algorithms to pick out certain features. These include general, low-level features such as the overall color, as well as more advanced features that describe the objects in the image, such as a horse and a cross. The end result is a vector-like description of each painting that contains 400 different dimensions.
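The attribution step this passage describes can be sketched in a few lines. This is a hedged illustration under stated assumptions, not Saleh and Elgammal's pipeline: each painting is reduced to a feature vector (400-dimensional in their description; tiny synthetic clusters here), and a held-out painting is attributed to the artist whose training examples lie closest. A simple nearest-centroid rule stands in for the learned metrics the paper actually evaluates, and all vectors are randomly generated.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 400  # dimensionality of the per-painting feature vector

def make_artist_cluster(center, n=20, spread=0.3):
    """Synthetic 'paintings': feature vectors scattered around a center."""
    return center + spread * rng.standard_normal((n, DIM))

# Invented 'artists' with well-separated feature-space centers.
centers = {"Monet": rng.standard_normal(DIM),
           "Pissarro": rng.standard_normal(DIM)}
train = {artist: make_artist_cluster(c) for artist, c in centers.items()}

def attribute(feature_vec):
    """Attribute a painting to the artist with the nearest training centroid."""
    dists = {artist: np.linalg.norm(feature_vec - vecs.mean(axis=0))
             for artist, vecs in train.items()}
    return min(dists, key=dists.get)

# A held-out 'painting' drawn near Monet's cluster:
query = centers["Monet"] + 0.3 * rng.standard_normal(DIM)
guess = attribute(query)
```

On this toy data the query is attributed to "Monet". The confusions the article goes on to describe arise when two artists' clusters overlap in feature space, exactly as with Pissarro and Monet.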

"The researchers then test the algorithm on a set of paintings it has not yet seen. And the results are impressive. Their new approach can accurately identify the artist in over 60 percent of the paintings it sees and identify the style in 45 percent of them.

"But crucially, the machine-learning approach provides an insight into the nature of fine art that is otherwise hard even for humans to develop. This comes from analyzing the paintings that the algorithm finds difficult to classify.

"For example, Saleh and Elgammal say their new approach finds it hard to distinguish between works painted by Camille Pissarro and Claude Monet. But a little research on these artists quickly reveals both were active in France in the late 19th and early 20th centuries and that both attended the Académie Suisse in Paris. An expert might also know that Pissarro and Monet were good friends and shared many experiences that informed their art. So the fact that their work is similar is no surprise.

"As another example, the new approach confuses works by Claude Monet and the American impressionist Childe Hassam, who, it turns out, was strongly influenced by the French impressionists and Monet in particular.  These are links that might take a human some time to discover" (MIT Technology Review May 11, 2015).

Saleh, Babak, and Elgammal, Ahmed, "Large-scale Classification of Fine-Art Paintings: Learning the Right Metric on the Right Feature" (http://arxiv.org/pdf/1505.00855v1.pdf, 5 May 2015).