
Imaging / Photography / Computer Vision Timeline


1000 – 1100

Foundation of Experimental Physics, Optics, and the Science of Vision 1011 – 1021

A portrait of Ibn al-Haytham, once printed on the obverse side of an Iraqi 10-dinar bill.

Under house arrest in Cairo, Egypt, between 1011 and 1021, Iraqi Muslim scientist Ibn al-Haytham (Latinized as Alhacen or Alhazen) wrote The Book of Optics (Arabic: Kitab al-Manazir; Latin: De aspectibus or Opticae Thesaurus: Alhazeni Arabis), a seven-volume treatise on optics, physics, mathematics, anatomy and psychology.

"The book had an important influence on the development of optics, as it laid the foundations for modern physical optics after drastically transforming the way in which light and vision had been understood, and on science in general with its introduction of the experimental scientific method. Ibn al-Haytham has been called the "father of modern optics", the 'pioneer of the modern scientific method,' and the founder of experimental physics, and for these reasons he has been described as the 'first scientist.'

"The Book of Optics has been ranked alongside Isaac Newton's Philosophiae Naturalis Principia Mathematica as one of the most influential books in the history of physics, as it is widely considered to have initiated a revolution in the fields of optics and visual perception. It established experimentation as the norm of proof in optics, and gave optics a physico-mathematical conception at a much earlier date than the other mathematical disciplines of astronomy and mechanics.

"The Book of Optics also contains the earliest discussions and descriptions of the psychology of visual perception and optical illusions, as well as experimental psychology, and the first accurate descriptions of the camera obscura, a precursor to the modern camera. In medicine and ophthalmology, the book also made important advances in eye surgery, as it correctly explained the process of sight for the first time" (Wikipedia article on Book of Optics, accessed 04-23-2009).

Translated into Latin by an unknown scholar at the end of the 12th century or the beginning of the 13th, Alhazen's Book of Optics enjoyed great reputation and circulated by manuscript copying to the few who could understand it during the Middle Ages. It was first edited for print publication by the German mathematician Friedrich Risner and issued as Opticae thesaurus . . . libri septem, nunc primum editi . . . item Vitellonis Thuringopoloni libri X in Basel by Episcopus in 1572.

Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 1027.


Construction of the First Camera Obscura 1012 – 1021

A Qatari postage stamp portraying Ibn al-Haytham.

Abū ʿAlī al-Ḥasan ibn al-Ḥasan ibn al-Haytham (أبو علي، الحسن بن الحسن بن الهيثم), frequently referred to as Ibn al-Haytham (Arabic: ابن الهيثم) and known in the West as Alhazen, built the first camera obscura or pinhole camera, a device significant in the history of optics, photography, and art.

In his Book of Optics, written in Cairo between 1012 and 1021, Ibn al-Haytham used the term "Al-Bayt al-Muthlim", translated into English as "dark room."

"In the experiment he undertook, in order to establish that light travels in time and with speed, he says: 'If the hole was covered with a curtain and the curtain was taken off, the light traveling from the hole to the opposite wall will consume time.' He reiterated the same experience when he established that light travels in straight lines. A revealing experiment introduced the camera obscura in studies of the half-moon shape of the sun's image during eclipses which he observed on the wall opposite a small hole made in the window shutters. In his famous essay 'On the form of the Eclipse' (Maqalah-fi-Surat-al-Kosuf) he commented on his observation 'The image of the sun at the time of the eclipse, unless it is total, demonstrates that when its light passes through a narrow, round hole and is cast on a plane opposite to the hole it takes on the form of a moon-sickle'.

"In his experiment of the sun light he extended his observation of the penetration of light through the pinhole to conclude that when the sun light reaches and penetrates the hole it makes a conic shape at the points meeting at the pinhole, forming later another conic shape reverse to the first one on the opposite wall in the dark room. This happens when sun light diverges from point “ﺍ” until it reaches an aperture and is projected through it onto a screen at the luminous spot. Since the distance between the aperture and the screen is insignificant in comparison to the distance between the aperture and the sun, the divergence of sunlight after going through the aperture should be insignificant. In other words, should be about equal to. However, it is observed to be much greater when the paths of the rays which form the extremities of are retraced in the reverse direction, it is found that they meet at a point outside the aperture and then diverge again toward the sun as illustrated in figure 1. This an early accurate description of the Camera Obscura phenomenon."

"In 13th-century England Roger Bacon described the use of a camera obscura for the safe observation of solar eclipses. Its potential as a drawing aid may have been familiar to artists by as early as the 15th century; Leonardo da Vinci (1452-1519 AD) described camera obscura in Codex Atlanticus. . . .

"The Dutch Masters, such as Johannes Vermeer, who were hired as painters in the 17th century, were known for their magnificent attention to detail. It has been widely speculated that they made use of such a camera, but the extent of their use by artists at this period remains a matter of considerable controversy, recently revived by the Hockney-Falco thesis. The term "camera obscura" was first used by the German astronomer Johannes Kepler in 1604.

"Early models were large; comprising either a whole darkened room or a tent (as employed by Johannes Kepler). By the 18th century, following developments by Robert Boyle and Robert Hooke, more easily portable models became available. These were extensively used by amateur artists while on their travels, but they were also employed by professionals, including Paul Sandby, Canaletto and Joshua Reynolds, whose camera (disguised as a book) is now in the Science Museum (London). Such cameras were later adapted by Louis Daguerre and William Fox Talbot for creating the first photographs" (Wikipedia article on Camera obscura, accessed 04-24-2009).


1550 – 1600

The Codex Selden / Codex Añute, a Precolonial Mexican Palimpsest Circa 1560

The Codex Selden, also called the Codex Añute, a Mixtec screenfold manuscript preserved in the Bodleian Library, Oxford, was acquired by the Bodleian in the 17th century from the estate of jurist, legal antiquary and orientalist John Selden. It is one of fewer than twenty precolonial Mesoamerican codices that survived the conquest of the Americas; these manuscripts contain information on the history of ancient cities, prescriptions for rituals, and calendrical divination. Of those codices, the Codex Selden/Añute is the only palimpsest, as its currently viewable content was written on a white paint layer that covers an earlier pictographic document.

In 2013-2014 the Bodleian's Ancient Mexican Manuscripts project undertook the recovery of these hidden pictorial texts. Results were expected to be published in the summer of 2016:

"The use of exclusively organic paints to create these images presented a unique set of challenges necessitating the development of a new imaging technique. During the present intervention this new technique called Photothermal Tomography is combined with a number of other techniques such as high-resolution photography, infrared photography, and RTI imaging to gain a better insight into this important palimpsest"( http://www.bodleian.ox.ac.uk/whats-on/upcoming-events/2015/mar/precolonial-mexican-manuscript, accessed 03-18-2015).

In August 2016 the Oxford Mail reported the following:

" "After four or five years of trying different techniques, we’ve been able to reveal an abundance of images without damaging this extremely vulnerable item,' said Ludo Snijders from Leiden University, who conducted the research with David Howell from the Bodleian Libraries and Tim Zaman from the University of Delft.,,,

"Mr Snijders said: 'What’s interesting is that the text we’ve found doesn’t match that of other early Mixtec manuscripts. The genealogy we see appears to be unique, which means it may prove invaluable for the interpretation of archaeological remains from southern Mexico.'

"Some pages feature more than 20 characters sitting or standing in the same direction. Similar scenes have been found on other Mixtec manuscripts, representing a King and his council.

"The researchers analysed seven pages of the codex for this study and revealed other images including people walking with sticks and spears, women with red hair or headdresses and place signs containing the glyphs for rivers.

"The paints used to crate the vibrant images are organic and do not absorb X-rays, meaning traditional methods could not be used in trying to get a glimpse of the codex's fascinating stories.

"Working with the humanities division in the University of Oxford, the Bodleian acquired a hyperspectral scanner in 2014 with the support of the university’s Fell Fund – and the equipment was able to unmask the past.

"David Howell, head of heritage science at the Bodleian Libraries, said: 'This is very much a new technique, and we’ve learned valuable lessons about how to use hyperspectral imaging in the future both for this very fragile manuscript and for countless others like it.' " (http://www.oxfordmail.co.uk/news/14701472.Bodleian_boffins_uncover_images_of_rare_Mexican_manuscript_hidden_for_almost_500_years/, accessed 09-03-2016).

Researchers are continuing to analyse the remainder of the document with the aim of reconstructing the entire hidden imagery, allowing the text to be interpreted more fully.

The Codex Selden/Añute was first published by Edward King, Viscount Kingsborough in his ten-volume series Antiquities of Mexico (1831-1848).

Regarding the history of the codex I quote from John Pohl's Mesoamerica:

"John Selden died in 1654 but the last date associated with the genealogy in the manuscript is the Mixtec year 11 Flint which corresponds to A.D. 1556. A date on the cover of the manuscript (2 Flint) may correspond to 1560 (M.E. Smith 1994:122-123). How the codex got from the Mixteca-Alta, Oaxaca, into the hands of Selden remains a mystery. Smith thinks that Codex Selden was composed by the community of Jaltepec, located in the southern Nochixtlán Valley for presentation to Spanish and Indian authorities with regard to a dispute over a subject town.

"The town in question was called Zahuatlán and it is represented in the codex as a hill sign qualified by a man dancing - to signify Zahuatlán’s Mixtec name "yucu nicata" or "Hill that Danced". Both Jaltepec and Yanhuitlán, a principal rival in the northern Nochixtlán Valley, claimed the town. Lords and Ladies of Zahuatlán appear in the codex either paying homage, intermarrying, or being subjugated by Jaltepec. Since the painting of the codex was assuredly commissioned by Jaltepec, a better name for the manuscript is Codex Añute, Jaltepec’s Mixtec name."

(This entry was last revised on 09-02-2016).


1600 – 1650

Hans Lippershey Invents the Telescope 1608

In 1608 Hans Lippershey, a German-Dutch lensmaker of Middelburg, Netherlands, created and disseminated designs for the first practical telescope.

"Crude telescopes and spyglasses may have been created much earlier, but Lippershey is believed to be the first to apply for a patent for his design (beating Jacob Metius by a few weeks), and making it available for general use in 1608. He failed to receive a patent but was handsomely rewarded by the Dutch government for copies of his design. The 'Dutch perspective glass', the telescope that Lippershey invented, could only magnify thrice.

"The first known mention of Lippershey's application for a patent for his invention appeared at the end of a diplomatic report on an embassy to Holland from the Kingdom of Siam sent by the Siamese king Ekathotsarot: Ambassades du Roy de Siam envoyé à l'Excellence du Prince Maurice, arrive a La Haye, le 10. septembr. 1608 ('Embassy of the King of Siam sent to his Excellence Prince Maurice, September 10, 1608'). The diplomatic report was soon distributed across Europe, leading to the experiments by other scientists such as the Italian Paolo Sarpi, who received the report in November, or the English Thomas Harriot in 1609, and Galileo Galilei who soon improved the device.

"One story behind the creation of the telescope states that two children were playing with lenses in his shop. The children discovered that images were clearer when seen through two lenses, one in front of the other. Lippershey was inspired by this and created a device very similar to today's telescope" (Wikipedia article on Hans Lippershey, accessed 03-27-2009).

While Sarpi and Harriot experimented with Lippershey's telescope prior to or contemporaneously with Galileo, neither wrote nor published on the subject.

(This entry was last revised on April 14, 2014.)


Galileo Issues Images of Revolutionary Discoveries Concerning the Universe; and the Story of a Remarkable Forgery November 1609 – March 13, 1610

After learning in 1609 that a Dutchman, Hans Lippershey, had invented an instrument that made faraway objects appear closer, Italian scientist Galileo Galilei, a resident of Padua, applied himself to discovering the principle behind this instrument. By late 1609 he had built a telescope of about thirty power. This he probably first turned to the heavens in November or December 1609, with astonishing and revolutionary results. In contradiction to the doctrines of Aristotle and Ptolemy, which taught that the celestial sphere and its planets and stars were perfect and unchanging, Galileo's telescope showed that the surface of the moon was rough and mountainous, and the Milky Way was composed of thickly clustered stars. In November or December 1609 Galileo painted six watercolors on a notebook page showing the phases of the moon, as he observed them through the telescope. These images, on a sheet preserved in Florence, at the Biblioteca Nazionale Centrale (Ms. Gal. 48, f. 28r), were the first realistic images of the moon, and the first recorded images of bodies beyond the earth seen by man.

On the night of January 7, 1610 Galileo set up a telescope on his balcony in Padua. He spotted three stars near Jupiter, and noted their positions in a notebook. Six days later Galileo returned to his telescope and found the same stars, but by then their positions had changed. At that point he realized that the three stars were moons orbiting Jupiter—proof that not all celestial bodies revolve around the earth, as postulated by Ptolemy's geocentric theory, and evidence for Copernicanism. Three months later Galileo's Sidereus Nuncius, or Starry Messenger, was published in Venice in an edition of 550 copies. The Sidereus Nuncius described and illustrated with copperplate engravings the first astronomical observations made through a telescope. Its images provided revolutionary new information about the universe. Though it contained only the bare facts of Galileo's observations without any overt reference to the Copernican theory, Sidereus Nuncius aroused a sensation among the European learned community, for it provided the first hard evidence that the Aristotelian-Ptolemaic view of the universe contained inaccuracies.

"He sent a copy of the book, along with the telescope he had been using, to the Grand Duke of Tuscany Cosimo II de’ Medici. Dr. [Owen] Gingerich said the pamphlet amounted to 'a job application' to the Medici family for whom, in one of history’s first examples of branding, Galileo named the four satellites of Jupiter. 'Other planets were gods or goddesses,' said Paolo Galluzzi, director of the Florence institute. 'The only humans with position in sky were Medicis.' The ploy worked, Cosimo II hired Galileo as his astronomer, elevating him from a poorly paid professor at the University of Padua to a celebrity, making the equivalent of $300,000, a year, Dr. Galluzzi said. Galileo returned the favor by giving Cosimo another telescope, clad in red leather and stamped with decorations" (Dennis Overbye, "A Telescope to the Past as Galileo Visits the U.S.", The New York Times, March 27, 2009.)

It is thought that Galileo built dozens of telescopes, of which two survive, both in the Institute for the History of Science (Museo Galileo) in Florence, Italy. One covered in decorated leather, which Galileo sent to Grand Duke Cosimo II de' Medici, retains only one of its original lenses, but the other, covered only in varnished paper, contains its original functioning optics, and has its focal length labeled in Galileo's handwriting on the outside of its tube. This telescope was loaned to the Franklin Institute in Philadelphia for an exhibition from April to September 2009. (The online article in The New York Times included a video showing the original telescope being unpacked in Philadelphia.)

________

In June 2005 antiquarian bookseller Richard Lan (Martayan-Lan, Inc.) purchased a copy of the Sidereus nuncius from Marino Massimo De Caro and antiquarian bookseller Filippo Rotundo that was represented as a proof copy, signed by Galileo, originally from the library of Federico Cesi, founder of the Accademia dei Lincei. Instead of copperplate engraved illustrations as in other copies of the book, this copy contained watercolors of the phases of the moon similar to those which Galileo made at the end of 1609 and which are preserved in Florence. It was known that the Venetian printer had sent Galileo thirty copies with blank spaces indicating where etchings would be placed. Presumably this was one of those copies, in which Galileo had personally painted images for presentation to Federico Cesi, instead of having engravings printed in. The copy was examined by all the leading authorities, subjected to various tests, and was generally considered a unique proof copy.

The Martayan Lan copy was included in the discussions in a symposium convened at the Library of Congress in November 2010 entitled "Galileo's Moons," intended to celebrate the 400th anniversary of the Sidereus Nuncius and the acquisition by the Library of Congress of an uncut copy of the first edition bound in the original limp paper boards. Papers presented at this symposium accepted the authenticity of the Martayan Lan copy.

In 2011 De Gruyter published a rather grand 2-volume set, fully illustrated in color, based on research begun in 2007. Volume one, edited by Irene Brückle and Oliver Hahn, was entitled Galileo's Sidereus Nuncius. A comparison of the proof copy (New York) with other paradigmatic copies. Volume two, written by Paul Needham, was entitled Galileo Makes a Book. The First Edition of Sidereus Nuncius, Venice 1610. Regarding the significance of Needham's study, I quote from the review by G. Thomas Tanselle, Common Knowledge 19, no. 3 (Fall 2013): 575-576:

"Needham’s book is based on eighty-three other copies, and he draws as well on Galileo’s letters, drafts, and various external documents. The result is a detailed account of the early months of 1610, from January 15, when Galileo decided he must publish his discoveries, to March 13, when the printing was completed; an additional chapter discusses the book’s distribution and Galileo’s corrections in some copies. The task of bibliography, as stated by Needham, is to know “the materials and human actions that produced (in multiple copies) the structure of a printed book.” Systematically he takes up the paper, type, and format of Sidereus Nuncius and provides a quire-by-quire analysis of its production, making exemplary use of many techniques of bibliographical analysis, each patiently and clearly explained, with accompanying illustrations. The book could serve as an excellent introduction to this kind of work; but even more remarkably, it demonstrates how interconnected are the physical object and its intellectual content. The title sentence, “Galileo makes a book,” has a double meaning: not only did Galileo write the text, but he also attended to its physical production, making the presentation of the text integral to its meaning. Needham does not neglect Galileo’s writing itself: he calls Galileo “an artist with words,” whose “prose embodies not just close reasoning, but also life and emotion.”

"This assessment applies equally to Needham’s own writing, which combines rigorous but readable technical analysis with an awareness of the human side of that work and the story it reveals. This combination recalls an earlier bibliographical classic, Allan Stevenson’s The Problem of the Missale Speciale (1967), another full-length treatment of a single book. Even the sense of humor displayed by Stevenson has its counterpart here: when, for example, Needham explains two hypotheses as to when the printing of Galileo’s book began, he calls the one that postulates a later date “the dilatory view.” At the end Needham praises the many nameless actors, such as papermakers and printing-shop workers, who played roles in the story; and he closes with “the mules and oxen whose humble labor moved sheets of Sidereus Nuncius across the face of Europe, under the eyes of the boundless sky.” This passage, occurring in a work of bibliographical analysis, epitomizes the work’s unusual accomplishment: it breaks new ground in the study of a major book, sets forth its discoveries in an engaging narrative, and in the process shows how bibliography can be essential to intellectual history."

Until early 2012 Richard Lan was privately offering the copy for sale for $10,000,000. Then Nick Wilding, an historian of science at Georgia State University who had been asked to review the 2-volume set mentioned above, presented concrete proof that the Martayan-Lan copy was a forgery:

  • The book bears the library stamp of Federico Cesi, founder of the Accademia dei Lincei, but the stamp in the Martayan Lan copy doesn’t match those in other books bearing Cesi's stamp.
  • The title page was different from genuine copies, but bore similarities to a 1964 facsimile and an unsold Sotheby’s auction copy.
  • There was no record of the Sidereus Nuncius in the original library from which this copy was thought to come.

Slowly the thread of fabrication began to unravel. Discovery of the forgery coincided with the exposure of massive thefts of rare books from the Girolamini Library in Naples, for which Marino Massimo De Caro and others were eventually convicted. In 2013 the Library of Congress and Levenger Press issued Galileo Galilei, The Starry Messenger, Venice, 1610. From Doubt to Astonishment. This volume contained a facsimile edition of the Library of Congress copy, an English translation, and the text of the papers delivered at the November 2010 symposium. However, as the editor of the volume noted, Paul Needham revised his paper (now retitled "Authenticity and Facsimile: Galileo's Paper Trail") in light of his later acceptance that the Martayan Lan copy was a forgery. On December 16, 2013 The New Yorker magazine published a detailed background article by Nicholas Schmidle on the forgery and how it was accomplished: "A Very Rare Book. The mystery surrounding a copy of Galileo's pivotal treatise." While the article filled in many blanks concerning the Sidereus Nuncius forgery, it raised other questions concerning other unknown thefts and forgeries by Marino Massimo De Caro and his associates.

In February 2014 De Gruyter issued an originally unintended volume three of their 2011 two-volume set entitled A Galileo Forgery. Unmasking the New York Sidereus Nuncius, edited by Horst Bredekamp, Irene Brückle, and Paul Needham. When I last revised this entry in August 2014 the full text of the volume was available as an Open Access PDF at no charge. This was the most comprehensive account and proof of the forgery. In many ways it was the most remarkable and admirable volume of the set, in which the scholars recounted how the forgery was discovered, drew their final conclusions proving the forgery, and explained how they had been deceived in the first place.

Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 855.

(This entry was last revised on 04-04-2015.)


1650 – 1700

Robert Hooke's Graphic Portrayal of the Hitherto Unknown Microcosm 1665

In 1665 Robert Hooke published Micrographia: Or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses in London. This was the first book devoted entirely to microscopical observations, and also the first book to pair its microscopic descriptions with profuse and detailed illustrations. This graphic portrayal of the hitherto unknown microcosm had an impact rivalling that of Galileo's Sidereus nuncius (1610), which was the first book to include images of the macrocosm shown through the telescope. It was also the second book published under the auspices of the Royal Society of London.

Hooke began his observations with studies of non-living materials, such as woven cloth and frozen urine crystals, then proceeded to investigations of plant and animal life. He published the first studies of insect anatomy, giving a lucid account of the compound eye of the fly, and illustrating the microscopic details of such structures as apian wings, flies' legs and feet, and the sting of the bee. His famous and dramatic portraits of the flea and louse, a frightening eighteen inches long, are hardly less startling today than they must have been to Hooke's contemporaries. His botanical observations include the first description of the plant-like form of molds, and of the honeycomb-like structure of cork, which last he described as being composed of "cellulae"— thereby coining the modern biological usage of the word "cell" to describe the basic microscopic units of tissue.

In January 2014 a digital facsimile of the first edition of Hooke's Micrographia was available from the National Library of Medicine's website.

Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 1092.


1750 – 1800

Georg Christoph Lichtenberg Describes "Lichtenberg Figures" 1777

German scientist, satirist and Anglophile Georg Christoph Lichtenberg discovered Lichtenberg figures, and described them in his memoir "Super nova methodo motum ac naturam fluidi electrici investigandi," Göttinger Novi Commentarii, Göttingen, 1777.

"In 1777, Lichtenberg built a large electrophorus to generate high voltage static electricity through induction. After discharging a high voltage point to the surface of an insulator, he recorded the resulting radial patterns in fixed dust. By then pressing blank sheets of paper onto these patterns, Lichtenberg was able to transfer and record these images, thereby discovering the basic principle of modern Xerography. This discovery was also the forerunner of modern day plasma physics. Although Lichtenberg only studied 2-dimensional (2D) figures, modern high voltage researchers study 2D and 3D figures (electrical trees) on, and within, insulating materials. Lichtenberg figures are now known to be examples of fractals" (Wikipedia article on Lichtenberg figures, accessed 06-11-2010).


1800 – 1850

The Earliest Surviving Photograph: A Process that Never "Caught On" 1826 – 1827

In 1826 or 1827 French inventor Nicéphore Niépce created View from the Window at Le Gras, the oldest surviving photograph, using the process of heliography that he had invented around 1822. The photograph shows parts of the buildings and surrounding countryside of his estate, Le Gras, in Saint-Loup-de-Varennes, as seen from a high window. The exposure is thought to have required from eight hours to several days.

"Niépce captured the scene with a camera obscura focused onto a 16.2 cm × 20.2 cm (6.4 in × 8.0 in) pewter plate coated with Bitumen of Judea, a naturally occurring asphalt. The bitumen hardened in the brightly lit areas, but in the dimly lit areas it remained soluble and could be washed away with a mixture of oil of lavender and white petroleum. A very long exposure in the camera was required. Sunlight strikes the buildings on opposite sides, suggesting an exposure that lasted about eight hours, which has become the traditional estimate. A researcher who studied Niépce's notes and recreated his processes found that the exposure must have continued for several days.

"In late 1827, Niépce visited England. He showed this and several other specimens of his work to botanical illustrator Francis Bauer, who encouraged him to present his "heliography" process to the Royal Society. Niépce was unwilling to reveal any specific practical details of his process, so the Royal Society declined his offer. Before returning to France, he gave Bauer the specimens and a draft of the remarks he had prepared to accompany his presentation. After Bauer's death in 1840, the specimens passed through several hands and were occasionally exhibited as historical curiosities. The View from the Window at Le Gras was last seen in 1905 and then fell into oblivion.

"Historian Helmut Gernsheim tracked down the photograph in 1952 and brought it to prominence, reinforcing the claim that Niépce is the inventor of photography. He had an expert at the Kodak Research Laboratory make a modern photographic copy, but it proved extremely difficult to produce an adequate representation of all that could be seen when inspecting the actual plate. Gernsheim heavily retouched one of the copy prints to clean it up and make the scene more comprehensible, and until the late 1970s he allowed only that enhanced version to be published. It was apparently at the time of the copying that the plate acquired disfiguring bumps near three of its corners, causing light to reflect in ways that interfere with the visibility of those areas and of the image as a whole.

"In 1963, Harry Ransom purchased most of Gernsheim's photography collection for The University of Texas at Austin, but the Niépce heliograph was not included in the sale. Shortly thereafter, Gernsheim donated it. Although it has rarely traveled since then, in 2012–2013 it visited Mannheim, Germany as part of an exhibition entitled The Birth of Photography—Highlights of the Helmut Gernsheim Collection. It is normally on display in the main lobby of the Harry Ransom Humanities Research Center in Austin, Texas " (Wikipedia article on View from the Window at Le Gras, accessed 10-24-2013).

Why then did Niépce's process never catch on? Why is the invention of photography typically credited to Louis Daguerre and William Henry Fox Talbot? Clearly the extremely long exposure required to record an image had to be a factor. According to an email I received from historian of science William B. Ashworth, Jr. on March 7, 2014, there were other reasons:

"It is a convoluted, and sad, story.  Niépce travelled to England in 1827 to tend to his mentally ill brother, and he brought several heliographs with him.  He met people who were quite interested in his process, and he tried to make arrangements to give a demonstration to the Royal Society of London.  However, everything went wrong, and it really was no one's fault.  The Royal Society was practically dysfunctional at the time, as the president, Humphry Davy, was dying, and there was considerable scrambling to determine his successor.  John Herschel, who would be a photographic pioneer himself in the 1830s, was so disgusted with the Society that he resigned his position as secretary and refused to attend meetings.  The upshot was that the presentation never came to pass, and the people who would have been the most interested in Niépce’s demonstration, like Herschel, never met Niépce or saw his work.  Niépce returned home, his heliotypes still in his luggage, and although he lived until 1833, and collaborated at the end with Louis Daguerre, he gradually disappeared from public view.  When the Daguerrotype (a different type of photographic process) was first demonstrated to a revitalized Royal Society in 1839, Niépce's name was all but forgotten.  Niépce did all the right things, but he never reached the right people.  Had he made his trip to England a year earlier, or even a year later, he might have found a receptive audience, and the history of photography might have played out quite differently.  Life is like that, sometimes."


Daguerreotypes: The First Commonly Used Photographic Process January 7 – August 19, 1839

On January 7, 1839 members of the Académie des Sciences first viewed examples of Daguerréotypes invented by the painter and printmaker Louis-Jacques-Mandé Daguerre.

On July 3, 1839 French mathematician, physicist, astronomer and politician François Jean Dominique Arago made the first brief scientific announcement and explanation of Daguerre's process to the Chambre des députés. This he repeated to the Académie des sciences on August 19. Arago's report was published in the Comptes rendus IX (1839) 250-67.

Later in 1839 Daguerre published in Paris his first account of the process in a pamphlet called Historique et description des procédés du Daguerréotype et du diorama. Daguerre's method of fixing an image on a metal plate became the first commonly used photographic process. It produced a single positive image on a highly polished silver-plated sheet of copper.


The First Separate Publication on Photography January 31, 1839

Upon learning about the exhibition of Daguerréotypes at the Académie des Sciences on January 7, 1839, English inventor William Henry Fox Talbot hastily read a paper on January 31 to the Royal Society entitled Some Account of the Art of Photogenic Drawing, or the Process by which Natural Objects may be made to Delineate Themselves with the Aid of the Artist's Pencil.

This paper, which Talbot had printed and distributed to friends as a pamphlet in February 1839, was the first separate publication on photography. In it Talbot suggested that fixed negatives might be used to produce multiple positive images.

In 1835 Talbot had developed a method of fixing negative images on paper previously made light-sensitive by successive coats of sodium chloride and silver nitrate, thus becoming the first to produce permanent paper negatives. 

Gernsheim, The History of Photography (1969) Ch. 7. Gernsheim, Incunabula of British Photographic Literature (1984) no. 646. Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 2049.


Perhaps the First "Selfie" Photograph Circa October 1839

In February 2014 a daguerreotype self-portrait taken in October 1839 by the American photography pioneer Robert Cornelius of Philadelphia was considered the first American photographic portrait of a human ever produced; since it was a self-portrait, it was also possibly the first "selfie."

The daguerreotype is preserved in the Library of Congress, which produced this description:

"Daguerre announced his invention of a photographic method to the French Academy of Sciences in August 1839. That October, a young Philadelphian, Robert Cornelius, working out of doors to take advantage of the light, made this head-and-shoulders self-portrait using a box fitted with a lens from an opera glass. In the portrait, Cornelius stands slightly off-center with hair askew, in the yard behind his family's lamp and chandelier store, peering uncertainly into the camera. Early daguerreotypy required a long exposure time, ranging from three to fifteen minutes, making the process nearly impractical for portraiture. (Source: "Photographic Material," by Carol Johnson. In Gathering History: the Marian S. Carson Collection of Americana, 1999, p. 100)" (http://www.loc.gov/pictures/collection/dag/item/2004664436/, accessed 02-27-2014).


The Basis for Blueprints 1842

In 1842 English mathematician, astronomer, chemist, and experimental photographer/inventor Sir John Herschel invented the cyanotype, a photographic process that produces a cyan-blue print.

"The photosensitive compound, a solution of ferric ammonium citrate and potassium ferricyanide, is coated onto paper. Areas of the compound exposed to strong light are converted to insoluble blue ferric ferrocyanide, or Prussian blue. The soluble chemicals are washed off with water leaving a light-stable print."

The process was used through the 20th century by architects and engineers for the production of blueprints.


Christian Doppler States the Doppler Principle (Doppler Shift, Doppler Effect) 1842

In 1842 Austrian mathematician and physicist at Czech Technical University in Prague Christian Andreas Doppler published Über das farbige Licht der Doppelsterne und einige andere Gestirne des Himmels (On the Colored Light of the Binary Stars and Some Other Stars of the Heavens).

This was the first statement of the Doppler principle (Doppler shift, Doppler effect), which states that the observed frequency changes if either the observer or the source is moving. Doppler mentions the application of this principle to both acoustics and optics, particularly to the colored appearance of double stars and the fluctuations of variable stars and novae; however, his reasoning in the optical arguments was flawed by his erroneous belief that all stars were basically white and emitted light only or mostly in the visible spectrum. Five years later the astronomer Hippolyte Fizeau would publish a paper announcing his independent discovery of the effect, noting the usefulness of observing spectral line shifts in its application to astronomy. This point was of such fundamental importance to Doppler's principle that it is sometimes called the Doppler-Fizeau principle. The acoustical Doppler effect was verified experimentally in 1845, and the optical effect in 1901. Modified by relativity theory, it became one of the major tools of astronomy. It also has numerous commercial applications beyond astronomy, such as in Doppler radar and in Doppler ultrasound imaging to evaluate blood flow.
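
For reference, the effect can be written in its standard modern form; the expressions below are later formulations rather than Doppler's own 1842 notation, and the relativistic form for light postdates Doppler entirely:

```latex
% Sound, with wave speed c in the medium, observer approaching the source at
% speed v_o and source approaching the observer at speed v_s:
f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{c + v_o}{c - v_s}
% Light from a source receding at speed v, with \beta = v/c
% (the relativistic form, a twentieth-century refinement):
f_{\mathrm{obs}} = f_{\mathrm{src}}\sqrt{\frac{1 - \beta}{1 + \beta}}
```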


One of the Earliest Photographs of Books 1843 – 1844

William Henry Fox Talbot, one of the inventors of photography, photographed books in his library during 1843-1844. This was undoubtedly one of the earliest photographs of books. Fox Talbot later published this photograph in The Pencil of Nature.

"An exceptional student first at Harrow and later at Cambridge, Talbot was a man of great learning and broad interests. Mathematics, astronomy, physics, botany, chemistry, Egyptology, philology, and the classics were all within the scope of his investigative appetite. The Philosophical Magazine, Miscellanies of Science, Botanische Schriften, Manners and Customs of the Ancient Egyptians, Philological Essays, Poetae Minores Graeci, and Lanzi's Storia pittorica dell'Italia are among the volumes represented in this photograph—truly an intellectual self-portrait. The image appeared as plate 8 in The Pencil of Nature. Paradoxically, A Scene in a Library was taken out of doors, where the light was stronger" (http://www.metmuseum.org/toah/works-of-art/2005.100.172, accessed 10-25-2011).


The First Book Illustrated with Photographs October 1843 – 1853

In October 1843 Anna Atkins, an English amateur botanist and the first woman photographer, published the first installment of Photographs of British Algae: Cyanotype Impressions. Atkins published this work privately with a handwritten text from her home in Sevenoaks, Kent, England. She issued a very small number of copies from cyanotypes contact printed by placing specimens directly onto coated paper, allowing the action of light to create a silhouette effect. Photographs of British Algae was the first book illustrated with photographs, and the first serious application of photography to a scientific subject. The paper Atkins used for the first volume contains a watermark reading "Whatman Turkey Mill 1843." Atkins extended the work into three volumes, with the last part appearing in 1853.

In May 2011 only seventeen copies of Atkins's book were recorded, in various states of completeness. Only the copy in the Royal Society seems to be complete as Atkins intended, with 389 plates. Robert Hunt's copy, with 382 plates, was sold at Christie's, London for £229,250 ($406,460) in May 2004.

♦ In December 2013 further background information and digital facsimiles were available from the NYPL Digital Gallery.

Goldschmidt & Naef, The Truthful Lens (1980) No. 5.

(This entry was last revised on 01-14-2014.)


Foundation of Microphotography; Landmark in Hematology, Oncology, and Pathology 1844 – 1845

In 1844 and 1845 French physician Alfred François Donné published Cours de microscopie complémentaire des études médicales in Paris. The folio atlas of plates, which appeared one year after the text, included twenty plates showing engraved images of 86 microdaguerreotypes taken by the medical student (and later physicist) Léon Foucault. Because daguerreotypes were unique images they could not be duplicated by a photographic process like prints from photographic negatives, and had to be engraved for reproduction by printing.

Donné, a French public health physician, began teaching his pioneering course on medical microscopy in 1837, a time when the medical establishment remained largely unconvinced of the microscope’s usefulness as a diagnostic and investigative tool. In July 1839 Louis Daguerre, one of the inventors of photography, announced to the Académie des Sciences his “daguerreotype” process for creating finely detailed photographic images on specially prepared silver-plated copper plates. Donné immediately embraced this new art, and within a few months had created not only the first documented photographic portrait in Europe, but also the earliest method of preparing etched plates from daguerreotypes. Donné resolved to incorporate photography into his microscopy course, and in February 1840 he presented to the Académie his first photographic pictures of natural objects as seen through the microscope. “It was Alfred Donné who foresaw the helpful role that projections of microscopic pictures could play during lectures on micrography” (Dreyfus, p. 38).

Over the next few years Donné continued to refine his photomicrography methods with the help of his assistant, Léon Foucault (who would go on to have a distinguished career as a physicist). Donné's and Foucault's work was the first biomedical textbook to be illustrated with images made from photomicrographs. Among its noteworthy images are the first microphotographs of human blood cells and platelets, and the first photographic illustration of Trichomonas vaginalis, the protozoon responsible for vaginal infections, which Donné had discovered in 1836. The text volume of the Cours contains the first description of the microscopic appearance of leukemia, which Donné had observed in blood taken from both an autopsy and a living patient. His observations mark the first time that leukemia was linked with abnormal blood pathology:

"There are conditions in which white cells seem to be in excess in the blood. I found this fact so many times, it is so evident in certain patients, that I cannot conceive the slightest doubt in this regard. One can find in some patients such a great number of these cells that even the least experienced observer is greatly impressed. I had an opportunity of seeing these in a patient under Dr. Rayer at the Hôpital de la Charité. . . . The blood of this patient showed such a number of white cells that I thought his blood was mixed with pus, but in the end, I was able to observe a clear-cut difference between these cells, and the white cells . . . "(p. 135; translation from Thorburn, pp. 379-80).

The following year this abnormal blood condition was recognized as a new disease by both John Hughes Bennett (a former student of Donné’s) and Rudolf Virchow.

Norman, Morton's Medical Bibliography (1991) nos.  267.1, 3060.1. Dreyfus, Some Milestones in the History of Hematology, pp. 38-40, 54-56, 76-78. Frizot, A New History of Photography, p. 275. Gernsheim & Gernsheim, The History of Photography 1685-1914, pp. 116, 539. Hannavy, Encyclopedia of Nineteenth-Century Photography, Vol. 1, p. 1120. Wintrobe, Hematology: The Blossoming of a Science, p. 12. Bernard, Histoire illustrée de l’hématologie, passim. Thorburn, “Alfred François Donné, 1801-1878, discoverer of Trichomonas vaginalis and of leukaemia,” British Journal of Venereal Disease 50 (1974) 377-380.


The First Photographically Illustrated Book Commercially Published. June 1844 – April 1846

From June 1844 to April 1846 British inventor William Henry Fox Talbot published The Pencil of Nature in six fascicules in London through the firm of Longman, Brown, Green & Longmans. This work was illustrated with 24 calotypes or talbotypes, a photographic process invented by Fox Talbot in 1841, in which salted paper prints were made from paper negatives. It was the "first photographically illustrated book to be commercially published," or "the first commercially published book illustrated with photographs."  

Because the work was a complete novelty to the book-buying public, Fox Talbot published a brief "Notice to the Reader" explaining the nature of the images:

"The plates of the present work are impressed by the agency of Light alone without any aid whatever from the artist's pencil. They are the sun-pictures themselves, and not, as some persons have imagined, engravings in imitation."

Fox Talbot originally intended to publish additional fascicules but discontinued publication after six because the work was a commercial failure. "The numbers of issues produced were not great in comparison to printed works for obvious reasons of technical difficulty, but were still considerable for such a pioneering endeavour. There is slight variance in the numbers quoted in different sources but it is certain over a thousand booklets of the six parts were manufactured. It is beyond dispute that 285 copies of the first pamphlet were created and, with encouraging sales figures 150 copies were produced of the second part. It seems probable that 150 copies of each of the final parts were manufactured. Fox Talbot himself sold the parts for 7/6d, 12/- and 21/-. Additionally, some of the completed series were bound together and a subscription list raised headed by Queen Victoria, while Fox Talbot also gifted a few to his family and close friends. A very few of these bound volumes still exist today" (http://special.lib.gla.ac.uk/exhibns/month/Feb2007.html, accessed 01-14-2015). Approximately 40 copies of the original edition of The Pencil of Nature have survived.

Two facsimiles were published in print in the 20th century, one in the 21st. The text and images are also available online. 

(This entry was last revised on 01-14-2015.)


The First Periodical Issued With a Mounted Paper Photograph 1846

Eager to show that paper photography was the equal of graphic media such as lithography, etching, steel and wood engraving, William Henry Fox Talbot, author of The Pencil of Nature, made a deal with Samuel Carter Hall, editor of the most important Victorian magazine on art, the Art Union Monthly Journal, to include one of his paper photographs in every copy of the June 1846 issue (Volume 8) of the journal.

To make the approximately 6,000 calotypes needed for the Art Union issue, Fox Talbot's assistant and printer, Nicolaas Henneman, used every negative he could find in the shop. More than half of the images published in The Pencil of Nature (15 different images) also turn up in copies of the Art-Union. However, Henneman's print staff was not capable of such mass production, resulting in poor print quality. The paper was not properly exposed, nor well fixed or washed, and prints were sometimes badly pasted onto the magazine leaves. These factors caused the images to fade almost as soon as they were created, resulting in poor publicity for Talbot. Nevertheless, as few copies of Fox Talbot's The Pencil of Nature were issued, Vol. 8 of the Art Union Monthly Journal was the first periodical to be illustrated with a mounted paper photograph, and the photographs it included were the first paper photographs seen by a wide audience.

Gernsheim, Incunabula of Photography, No. 620.

Goldschmidt & Naef, The Truthful Lens (1980) p. 15.


The Earliest Photographs of War 1847

According to Yale's Beinecke Library's online exhibition The Power of Pictures, which documents an exhibition held at the library from October to December 2013, the earliest photographs of war were made in 1847 by an unknown daguerreotype photographer in Saltillo, Mexico.

"Twelve daguerreotypes in a walnut case depict U.S. Army troops, General John Wool and his staff, Lieutenant Abner Doubleday, the Virginia Regiment, an artillery battalion, and scenes around town. The presence of the photographer when the image was made lent an eye-witness authenticity to war photographs that paintings or prints struggled to attain. Perhaps the photographer of the Civil War, Alexander Gardner, put it best when he suggested, 'Verbal representations of such places, or scenes, may or may not have the merit of accuracy; but photographic presentments of them will be accepted by posterity with an undoubting faith' " (http://beinecke.library.yale.edu/exhibitions/social-commemoration, accessed 10-27-2013).


1850 – 1875

James Glaisher Proposes Using Microphotography for Document Preservation 1851 – 1852

Impressed by the exhibition of photography at the Great Exhibition of 1851, English meteorologist and aeronaut James Glaisher proposed that microphotography be used as a method for document preservation. According to the Wikipedia article on Microform, astronomer and photography pioneer Sir John Herschel supported this view in 1853.

Great Exhibition of the Works of Industry of All Nations of 1851. Reports by the Juries (1852). Carter & Muir, Printing and the Mind of Man (1967) no. 331.


Paul Pretsch's "Photographic Art Treasures," the First Book of Printed Reproductions of Photographs 1854 – July 1857

In 1854 Viennese photographer resident in London Paul Pretsch patented a process called "photo-galvanography" for the printed reproduction of photographs. The first print that Pretsch issued was called "Scene in Gaeta after the Explosion." It was "the first relief half-tone and the first commercial use of half-tone" (Printing and the Mind of Man. Catalogue of the Exhibitions Held at the British Museum and at Earls Court, London [1963] No. 629).

In November 1856 Pretsch issued through his Patent-Photo-Galvano-Graphic Company the first fascicule of a book entitled in an oddly circular manner Photographic Art Treasures, or, Nature and Art Illustrated by Art and Nature. This fascicule, which also immodestly characterized itself as "A New Era in Art" on its printed cover, was the first part of the first book of printed reproductions of photographs, as distinct from books illustrated with pasted-in original photographs. A total of five fascicules were published between November 1856 and July 1857, each with 4 "photo-galvano-graphic" plates.

Pretsch's photo-galvanographic process began with a photographically exposed dichromated-gelatine mould which was made to reticulate, from which he produced a copper intaglio plate by galvanoplasty. His halftone method was not entirely original. Others had developed methods of engraving from photographs. As early as the 1830s William Fox Talbot had patented a method of using "photographic screens or veils" in connection with a photographic intaglio process.

"However, Pretsch's system achieved one thing that no others had previously managed— the inclusion of half-tones— the greys which make the photographic image unique. At the time, the half-tone dot screen had not yet been invented and all engravings from photographs such as those used in the Illustrated London News from Fenton's Crimea portraits, were hand-drawn impressions of the original photograph. Even the more advanced process which Pretsch was now attempting to market did not completely dispose of the need for long and careful hand-retouching on the part of the engraver and it took an average of six weeks hard work to prepare just one plate. After all that work, only about five hundred prints could be made before the image started to break up. As with all such processes, the first prints were of a far superior quality to the last— so a sliding scale of charges was evolved, the price depending on the state of the plate at the time the print was made. . . .

"Pretsch was no photographer, however, and he left it to others to provide the pictures for his patent process. Roger Fenton took up his appointment as manager of the Photographic Department and chief photographer, in August 1857. . . .In the short time Fenton had been employed at Holloway Place, the company's head office in Holloway Road, he had not had time to acquire prints by other photographers and so that the first publication of four prints [in the first fascicule of Photographic Art Treasures] was entirely his own work. . . ." (Hannaway, Roger Fenton of Crimble Hall [1976] 65-67).

Paul William Morgan, "Paul Pretsch, Photogalvanography and Photographic Art Treasures" (accessed 01-12-2015).

Goldschmidt & Naef, The Truthful Lens (1980) No. 131.

(This entry was last revised on 01-12-2015.)


François Willème Invents Photosculpture: Early 3D Imaging 1859

In 1859 a Frenchman in Paris, François Willème, who characterized himself as a painter, sculptor and photographer, and "inventeur de la photosculpture," began creating photosculptures of living people. To create a photosculpture Willème would arrange his subject on a circular platform surrounded by 24 cameras— one every 15 degrees. He would then photograph their silhouette simultaneously with each camera. This set of photographic profiles contained the data for a complete representation of his subject in 3 dimensions, although at relatively coarse resolution. 

Willème had now collected layer data for his subjects in the form of 24 different photographs of their profile. To create a 3D image of his subject he needed to make the information in each layer accessible by projecting each image onto a screen. Next, he translated each image into the movements required to fabricate each layer. This he accomplished using a pantograph attached to a cutter. He traced each profile with one end of the pantograph while the other end cut a sheet of wood with the exact same movement. The pantograph allowed the cuts to be smaller, larger, or the same size as the original projection. The layers of wood were then assembled to create the photosculpture. This was necessarily rough; if desired, an artist could smooth the sculpture and perhaps paint it, making it look more like a traditional sculpture.
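
The sequence described above, trace each profile, scale it with the pantograph, and mount the resulting slices at 15-degree intervals, can be sketched in code. The sketch below only illustrates that data flow; the silhouette representation and helper names are hypothetical, not a reconstruction of Willème's apparatus.

```python
# Illustrative sketch of the photosculpture workflow described above.
# Assumptions: each silhouette is a 2D grid of 0/1 values (1 = subject);
# the helper names are hypothetical.
import math

NUM_CAMERAS = 24
ANGLE_STEP_DEG = 360 / NUM_CAMERAS   # one camera every 15 degrees


def trace_profile(silhouette):
    """Trace the outline of one silhouette: for each row, keep the
    leftmost and rightmost pixels belonging to the subject."""
    outline = []
    for y, row in enumerate(silhouette):
        cols = [x for x, filled in enumerate(row) if filled]
        if cols:
            outline.append((min(cols), y))
            outline.append((max(cols), y))
    return outline


def pantograph(outline, ratio):
    """Scale a traced outline, as the pantograph arm allowed
    (smaller, larger, or the same size as the projection)."""
    return [(x * ratio, y * ratio) for x, y in outline]


def build_slats(silhouettes, ratio=1.0):
    """Pair each scaled outline with the angle at which its slice
    would be mounted around the central vertical axis."""
    slats = []
    for i, silhouette in enumerate(silhouettes):
        angle_rad = math.radians(i * ANGLE_STEP_DEG)
        slats.append((angle_rad, pantograph(trace_profile(silhouette), ratio)))
    return slats
```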

On January 4, 1864 French poet, dramatist, novelist, journalist, art critic and literary critic Théophile Gautier published an illustrated article entitled, appropriately, "Photosculpture," in the Moniteur universel newspaper. To advertise the process this was also issued as a separate pamphlet of 14 pp., of which the last two pages consisted of a price list. 

On August 9, 1864 Willème was granted U.S. patent 43,822 for Photographing Sculpture, &c.

Historian of photography Beaumont Newhall published an article on Willème's process entitled "Photosculpture," Image 7 no. 5 (1958) [99]-105.

Sobieszek, "Sculpture as the Sum of Its Profiles. François Willème and Photosculpture in France, 1859-1868," The Art Bulletin 62, no. 4 (1980) 617-30.

Walters & Thirkell, "New technologies for 3D realization in Art and Design practice," Artifact1 (2007) 232-245.


"Boston as the Eagle and Wild Goose See It": the First Clear Photographic Aerial View of a City October 13, 1860

In collaboration with balloon navigator Samuel A. King on King's hot-air balloon, the "Queen of the Air," on October 13, 1860 American photographer James Wallace Black photographed Boston from a tethered balloon at 1,200 feet, producing 8 plates of glass negatives, 10 1/16 x 7 15/16 in. One good print resulted, which Black titled "Boston as the Eagle and the Wild Goose See It." This was the first clear aerial image of a city.  The original photograph is preserved in the Boston Public Library. This photograph is especially significant because much of the area photographed was destroyed in the Great Boston Fire of 1872.

View Map + Bookmark Entry

James Clerk Maxwell Produces the First Color Photograph 1861

In 1861 Scottish mathematical physicist James Clerk Maxwell produced the earliest color photograph, an image of a tartan ribbon, by having it photographed three times through red, green, and blue filters, then recombining the images into one color composite. Because of this photograph Maxwell is credited as the founder of the theory of additive color.

"During an 1861 Royal Institution lecture on colour theory, Maxwell presented the world's first demonstration of colour photography by this principle of three-colour analysis and synthesis. Thomas Sutton, inventor of the single-lens reflex camera, did the actual picture-taking. He photographed a tartan ribbon three times, through red, green and blue filters, as well as a fourth exposure through a yellow filter, but according to Maxwell's account this was not used in the demonstration. Because Sutton's photographic plates were in fact insensitive to red and barely sensitive to green, the results of this pioneering experiment were far from perfect. It was remarked in the published account of the lecture that "if the red and green images had been as fully photographed as the blue," it "would have been a truly-coloured image of the riband. By finding photographic materials more sensitive to the less refrangible rays, the representation of the colours of objects might be greatly improved." Researchers in 1961 concluded that the seemingly impossible partial success of the red-filtered exposure was due to ultraviolet light. Some red dyes strongly reflect it, the red filter used does not entirely block it, and Sutton's plates were sensitive to it." (Wikipedia article on James Clerk Maxwell, accessed 10-24-2013).

View Map + Bookmark Entry

3-D Sonar Imaging Reveals Details of Sunken Civil-War Era Steamship January 11, 1863

On January 11, 1863 the USS Hatteras, an iron-hulled steamship converted into a gunboat by the U.S. Navy, was taken by surprise and sunk in an engagement  with the disguised Confederate commerce raider CSS Alabama, approximately 20 miles off the coast of Galveston, Texas.

The hull of Hatteras rests in approximately 60 ft (18 m) of water  and is buried under about 3 ft (0.91 m) of sand. Her steam engine and two iron paddle wheels remain on the ocean bottom. The wreck is monitored to ensure that it is not damaged by oil and gas development in the area.

On January 11, 2013, 150 years after the battle, NOAA's (the National Oceanic & Atmospheric Administration) Office of National Marine Sanctuaries, together with ExploreOcean, Teledyne Blueview, and Northwest Hydro, released a 3-D sonar map showing never-before-seen details of the USS Hatteras, the only Union warship sunk in combat in the Gulf of Mexico during the Civil War.

View Map + Bookmark Entry

The Pigeon Post into Paris: The First Important Application of Microfilm September 19, 1870 – January 28, 1871

During the four and a half months of the Siege of Paris by German armies in the Franco-Prussian War, from September 19, 1870 to January 28, 1871, normal channels of communication were interrupted, and the only way for the provincial government in Tours to communicate with Paris was by pigeon post.

During the Siege French photographer and inventor of microfilm René Dagron proposed using his microfilming process to carry messages by carrier pigeons. Dagron was not the first to produce microfilms, examples of which were shown by John Benjamin Dancer during the 1850s. The process was sufficiently well known that on July 9, 1853 John F. W. Herschel published a letter in the Athenaeum suggesting the microfilming of "reference materials." However, Dagron was the first to systematize and patent the process, publishing in 1864 a small illustrated booklet of 36 pages in 12mo entitled Traité de photographie microscopique, giving details of his process and a price list of his equipment and supplies. This was the world's first treatise on microfilming techniques.

Rampont, the man in charge of the carrier pigeon program, agreed to Dagron's proposal, and a contract was signed on November 11, 1870.

"According to the contract Dagron was to be paid 15 francs per 1000 characters photographed. A clause in the contract, signed by an official named Picard, gave Dagron the title of "chief of the photomicroscopic correspondence postal service" mentioning in French: 'M. Dagron a le titre de chef de service des correspondences postales photomicroscopiques. Il relève directement du Directeur Général des Postes,' which translates as 'Mr. Dagron has the title of the chief of the photomicroscopic correspondence postal service. He reports directly to the Director General of the Post Office.'

"After a period of difficulties and through hardships brought on by the war and the lack of equipment, Dagron finally achieved a photographic reduction of more than 40 diameters. The microfilms so produced weighed approximately 0.05 grams each and a pigeon was able to carry up to 20 at a time. Up to that point a page of a message could be copied in a microfilm approximately measuring 37 mm by 23 mm but Dagron was able to reduce this to a size of approximately 11 mm by 6 mm which was a significant reduction in the area of the microphotograph.

"Dagron photographed pages of newspapers in their entirety which he then converted into miniature photographs. He subsequently removed the collodion film from the glass base and rolled it tightly into a cylindrical shape which he then inserted into miniature tubes that were transported fastened on the wings of pigeons. Upon receipt the microphotograph was reattached to a glass frame and was then projected by magic lantern on the wall. The message contained in the microfilm could then be transcribed or copied. By 28 January 1871, when Paris and the Government of National Defense surrendered, Dagron had delivered 115,000 messages to Paris by carrier pigeon" (Wikipedia article on René Dagron, accessed 04-26-2009).

After the siege was over Dagron issued from Paris in 1871 a very small 24-page pamphlet in 12mo format describing the achievements of the Pigeon Post, La poste par pigeons voyageurs. Souvenir du Siège de Paris. Spécimen identique d'une des pellicules de dépêches portées à Paris par pigeons voyageurs. When issued the pamphlets contained actual samples of two pieces of microfilm presented in a glassine envelope inserted in a small printed folder inside the pamphlet. Most of the surviving copies of the pamphlet no longer contain the microfilms.

J. D. Hayhurst, The Pigeon Post into Paris 1870-1871 (1970) provides a comprehensive account, and reproduces a number of original documents including photomicrographs.

(This entry was last revised on 01-12-2015.)

View Map + Bookmark Entry

Darwin Founds Ethology, Studies the Conveyance of Information, and Contributes to Psychology 1872

In 1872 Charles Darwin issued The Expression of the Emotions in Man and Animals through his publisher, John Murray. This book, which contained numerous wood-engraved text illustrations, was also illustrated with seven heliotype plates of photographs by pioneering art photographer Oscar Gustave Rejlander, and was the only book by Darwin illustrated with photographs.

“With this book Darwin founded the study of ethology (animal behavior) and conveyance of information (communication theory) and made a major contribution to psychology” (DSB). Written as a rebuttal to the idea that the facial muscles of expression in humans were a special endowment, the work contained studies of facial and other types of expression (sounds, erection of hair, etc.) in man and mammals, and their correlation with various emotions such as grief, love, anger, fear and shame. The results of Darwin’s investigations showed that in many cases expression is not learned but innate, and enabled Darwin to formulate three principles governing the expression of emotions—relief of sensation or desire, antithesis, and reflex action.

View Map + Bookmark Entry

Eadweard Muybridge Produces the First Photographs of Motion 1872 – June 15, 1878

In 1872 former governor of California and railroad tycoon Leland Stanford, founder of Stanford University, hired the English photographer Eadweard Muybridge to settle a debate whether, during its gait, all four of a horse's hooves are simultaneously off the ground. This challenged Muybridge to look for a way to capture the sequence of movement. In 1878, after six years of work on the project, Muybridge succeeded. He arranged 12 trip-wire cameras along a racetrack in the path of a galloping horse. The resulting photo sequence proved that there is a point when no hooves touch the ground and set the stage for the first motion pictures.

"In 1872, Muybridge settled Stanford's question with a single photographic negative showing his Standardbred trotting horse Occident airborne at the trot. This negative was lost, but the image survives through woodcuts made at the time (the technology for printed reproductions of photographs was still being developed). He later did additional studies, as well as improving his camera for quicker shutter speed and faster film emulsions. By 1878, spurred on by Stanford to expand the experiments, Muybridge had successfully photographed a horse at a trot; lantern slides have survived of this later work. . . .

"Stanford also wanted a study of the horse at a gallop. Muybridge planned to take a series of photos on 15 June 1878 at Stanford's Palo Alto Stock Farm. He placed numerous large glass-plate cameras in a line along the edge of the track; the shutter of each was triggered by a thread as the horse passed (in later studies he used a clockwork device to set off the shutters and capture the images). The path was lined with cloth sheets to reflect as much light as possible. He copied the images in the form of silhouettes onto a disc to be viewed in a machine he had invented, which he called a zoopraxiscope. This device was later regarded as an early movie projector, and the process as an intermediate stage toward motion pictures or cinematography.

"The study is called Sallie Gardner at a Gallop or The Horse in Motion; it shows images of the horse with all feet off the ground. This did not take place when the horse's legs were extended to the front and back, as imagined by contemporary illustrators, but when its legs were collected beneath its body as it switched from "pulling" with the front legs to "pushing" with the back legs" (Wikipedia article on Eadweard Muybridge, accessed 10-24-2013).

View Map + Bookmark Entry

Henry Stevens Calls for a Central Bibliographical Bureau Which Would Also Store Images July 25 – November 29, 1872

American antiquarian bookseller and bibliographer Henry Stevens  published an auction catalogue of books, manuscripts, maps, and charts verbosely titled as follows:

Bibliotheca geographica & historica or a catalogue of a nine days sale of rare & valuable ancient and modern books maps charts manuscripts autograph letters et cetera illustrative of historical geography & geographical history general and local. . . collected used and described. With an introduction on the progress of geography and notes annotatiunculae [sic] on sundry subjects together with an essay upon the Stevens system of photobibliography. Part I. To be dispersed by auction . . . [in] London the 19th to 29th November 1872.

In his essay introductory to the catalogue entitled Photobibliography. A Word on Catalogues and How to Make Them, Stevens calls for "A Central Bibliographical Bureau" which would produce standard bibliographical descriptions of items that could be used by other cataloguers and bibliographers. Analogous to what later became national union catalogues of books, Stevens imagined that this could "be made self-supporting or even remunerative, like the Post Office." He also called for a standardized system of recording reduced-size images called "photograms" of books according to "one uniform scale." This would reduce "all the titles, maps, woodcuts, or whatever is desired to copy" to fit the images onto standardized filing cards on which bibliographical details could be written by hand, to spare the bibliographer the time and effort of transcribing title pages. Negatives would be stored compactly, and prints made for reproduction in printed catalogues, etc. As examples Stevens had an albumen print of a title page pasted in as the frontispiece of the auction catalogue, plus a small circular photograph of "Ptolemy's World by Mercator" pasted onto the title page. Stevens noted that he also made available a few copies of the auction catalogue on thicker paper with about 400 pasted-on "photograms."

Stevens later expanded on this idea in a paper entitled "Photobibliography, or a Central Bibliographical Clearing-House" presented to the 1877 Conference of Librarians held in London (see "Transactions and Proceedings of the Conference", pp. 70-81). In 1878 Stevens published privately a 16mo pamphlet of 49pp. entitled, Photo-Bibliography; or, a Word on Printed Card Catalogues of old, rare, beautiful, and costly books, and how to make them on a Co-operative System; and Two Words on the Establishment of a Central Bibliographical Bureau, or Clearing-house, for Librarians.  Bigmore & Wyman, A Bibliography of Printing (1880) III, 401.

View Map + Bookmark Entry

The "Daily Graphic" of New York, Probably the First Illustrated Daily Newspaper, Begins Publication March 4, 1873 – September 23, 1889

On March 4, 1873 the Daily Graphic of New York was founded. This tabloid, which was probably the first illustrated daily newspaper, remained in operation until September 23, 1889. 

On March 4, 1880 the Daily Graphic published the first halftone (rather than engraved) reproduction of a news photograph.

View Map + Bookmark Entry

1875 – 1900

"Street Life in London": Pioneering Social Documentary Photography as a Form of Photojournalism 1876 – 1877

From 1876 to 1877 Scottish photographer, geographer and traveler John Thomson, in collaboration with the radical journalist Adolphe Smith, published a monthly magazine, Street Life in London, illustrated with Woodburytype photomechanical reproductions of photographs. The twelve parts were collected and issued in book form by Sampson Low, Marston, Searle and Rivington in 1877. The project documented in photographs and text the lives of street people of London. Smith's short essays were based on interviews with a range of men and women who eked out a precarious and marginal existence working on the streets, including flower-sellers, chimney-sweeps, shoe-blacks, chair-caners, musicians, dustmen, locksmiths, beggars and petty criminals. However, Thomson's photographs conveyed even more information. Out of a genuine concern for their welfare and living conditions, Thomson introduced social documentary photography as a form of photojournalism. Instead of the images acting as a supplement to the text, his photomechanically reproduced photographs became the predominant medium for the imparting of information, successfully combining photography with the printed word.

View Map + Bookmark Entry

The First Supersonic Image; The Mach Angle and Mach Number 1887

In 1887 Austrian physicists Ernst Mach (then in Prague) and P. Salcher published "Photographische Fixirung der durch Projectile in der Luft eingeleiteten Vorgänge," Sitzungsber. k. Akad. Wiss., math.-naturwiss. Classe, 95 (1887) 764-80. The paper reproduced the first photograph of a shock wave in front of an object (in this case a bullet) moving at supersonic speed, and the first mathematical formula describing the physics of the shock wave.

“The angle α, which the shock wave surrounding the envelope of an advancing gas cone makes with the direction of its motion, was shown to be related to the velocity of sound ν and the velocity of the projectile ω as sin α = ν/ω when ω > ν. After 1907, following the work of Ludwig Prandtl at the Kaiser Wilhelm Institut für Strömungsforschung in Göttingen, the angle α was called the Mach angle. Recognizing that the value of ω/ν (the ratio of the speed of an object to the speed of sound in the undisturbed medium in which the object is traveling) was becoming increasingly significant in aerodynamics for high-speed projectile studies, J. Ackeret in his inaugural lecture in 1929 as Privatdozent at the Eidgenössischen Technische Hochschule, Zürich, suggested the term ‘Mach number’ for this ratio" (Dictionary of Scientific Biography).
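As a quick numerical illustration of the relation quoted above (the speeds below are arbitrary example values, not figures from Mach's paper), the Mach number M = ω/ν and the Mach angle α = arcsin(ν/ω) can be computed directly:

    import math

    def mach_number(projectile_speed, sound_speed=340.0):
        # Ratio of projectile speed to the speed of sound (both in m/s).
        return projectile_speed / sound_speed

    def mach_angle_deg(projectile_speed, sound_speed=340.0):
        # Half-angle of the shock cone; defined only for supersonic motion.
        if projectile_speed <= sound_speed:
            raise ValueError("Mach angle is defined only for supersonic speeds")
        return math.degrees(math.asin(sound_speed / projectile_speed))

    # Example: a bullet at 530 m/s in air where sound travels at about 340 m/s.
    print(mach_number(530.0))     # ~1.56
    print(mach_angle_deg(530.0))  # ~39.9 degrees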

Anderson, History of Aerodynamics (1999) 376. 

View Map + Bookmark Entry

Invention of Photogravure 1878

In 1878 Czech painter, photographer and illustrator Karel Václav Klíč (Karl Klietsch) became one of the inventors of photogravure.

"The earliest forms of photogravure were developed in the 1830s by the original pioneers of photography itself, Henry Fox Talbot in England and Nicéphore Niépce in France. They were seeking a means to make prints that would not fade, by creating photographic images on plates that could then be etched. The etched plates could then be printed using a traditional printing press. These early images were among the first photographs, pre-dating daguerreotypes and the later wet-collodion photographic processes. Fox Talbot worked on extending the process in the 1850s and patented it in 1852 ('photographic engraving') and 1858 ('photoglyphic engraving'). Photogravure in its mature form was developed in 1878 by Czech painter Karel Klíč, who built on Talbot's research. This process, the one still in use today, is called the Talbot-Klič process" (Wikipedia article on photogravure, accessed 02-05-2012).

View Map + Bookmark Entry

Could Life From Other Planets Have Been Carried to Earth by Meteorites? 1880

In 1880 lawyer, Swedenborgian, poet, agent for Canadian emigration, economist, and amateur petrologist Otto Hahn of Reutlingen, Baden-Württemberg, Germany, published Die Meteorite (Chondrite) und ihre Organismen, with 32 plates containing 144 images of photomicrographs of cross-sections of meteorites. Hahn claimed that the mysterious structures shown in his photographs were evidence of fossilized plants and simple animals, carried within meteorites from extra-terrestrial origins.

Though other scientists realized that Hahn had confused mineral structures with organic structures, it was claimed, without concrete substantiation, that Darwin enthusiastically endorsed Hahn's interpretation, even making an uncharacteristic reference to God in the context. See The Complete Works of Charles Darwin Online (accessed 05-28-2009). Darwin did own copies of Hahn's works and may also have visited with Hahn at Down House.

My thanks to Jörn Koblitz of MetBase for this reference.

View Map + Bookmark Entry

Foundation of Brain Imaging 1880

In "Sulla circolazione del sangue nel cervello dell’uomo. Ricerche sfigmografiche," Reale Accademia dei Lincei. Memorie, 3rd series, 5 (1879-80) published in Rome in 1880 Italian physiologist Angelo Mosso reported his discovery that blood circulation in the brain increases in certain discrete areas during mental activity, and published the records of this activity produced by the machine he invented to record these changes. As the first method of imaging brain function, Mosso's work paved the way for modern-day brain imaging techniques such as CT scans, PET scans and magnetic resonance imaging.

“Italian physiologist Angelo Mosso was the first to experiment with the idea that changes in the flow of blood in the brain might provide a way of assessing brain function during mental activity. Mosso knew that, in newborn children, the fontanelles—the soft areas on a baby’s head where the bones of the skull are not yet fused—can be seen to pulsate with the rhythm of the heartbeat. He noticed similar pulsations in two adults who had suffered head injuries that left them with defects of the skull, and observed, in particular, a sudden increase in the magnitude of those pulsations when the subjects engaged in mental activities” (Kolb & Whishaw, Fundamentals of Human Neuropsychology, 132).

Mosso devised a graphic recorder to document these pulsations, demonstrating that blood pressure changes in the brain caused by mental exertion occur independently of any pressure changes in the rest of the body. Mosso concluded that brain circulation changes selectively in accordance with mental activity, stating that “we must suppose a very delicate adjustment whereby the circulation follows the needs of the cerebral activity. Blood very likely may rush to each region of the cortex according as it is most active” (quoted in Shepherd, Creating Modern Neuroscience, 185).

View Map + Bookmark Entry

Gaston Tissandier Issues The First Book on Aerial Photography 1886

In 1886 French chemist, meteorologist, aviator and editor Gaston Tissandier published La photographie en ballon. This pamphlet included a frontispiece consisting of an original photographic print by Jacques Ducom mounted on stiff card with a tissue overlay key. The key was thought necessary to explain the photograph because people were completely unaccustomed to looking at images from an aerial point of view.

The history of aerial photography began in 1858, when the photographer Nadar took the first photographs from a balloon. His results were only partially successful, as were those of other experimenters who followed him, and it was not until 1878, when factory-made gelatin dry plates were introduced, that aerial photography came into its own. Using gelatin plates, which were twenty times faster than the old wet-collodion plates, the photographer Paul Desmarets obtained two birds-eye views of Rouen in 1880 from a balloon at 4,200 feet. However, Desmarets' results were surpassed five years later by Jacques Ducom, who, in a balloon navigated by Gaston Tissandier, was able to take superb aerial photographs of Paris from a height of 1,800 feet.

"Ducom's view of the Ile Saint-Louis, Paris from 1,800 ft leaves absolutely nothing to be desired. Through a magnifying glass people can be counted on the bridge. The exposure of this and the other photographs taken on this flight was 1/50 second, using a specially constructed guillotine shutter which was opened pneumatically and closed automatically with a rubber spring" (Gernsheim & Gernsheim, The History of Photography 1685-1914 p. 508). Tissandier's La photographie en ballon records his and Ducom's achievements in aerial photography, and also surveys the work of Nadar, Desmarets, Shadbolt, Triboulet, Pinard, Weddel and other aerial photographers. The preface mentions the pioneering aerial photograph of Boston taken in 1860 by J. W. Black from a tethered balloon at 1,200 feet. Tissandier, who saw a print of Black's photograph, described it as "assurément fort curieuse, mais comme les précédentes elle manque de netteté et semble en outre avoir été prise  très faible hauteur" (p. vi). Gernsheim & Gernsheim, pp. 507-8. Frizot, A New History of Photography, p. 391.

View Map + Bookmark Entry

"Le Journal Illustré" Publishes the First Photo-Interview September 5, 1886

On September 5, 1886 Le Journal Illustré in Paris published on pp. 284-88 "L'Art de vivre cent ans. Trois entretiens avec Monsieur Chevreul." This appeared in Vol. 23, No. 36 of the periodical.  Besides the portrait of Chevreul on the cover, the article included  half-tone reproductions of a series of twelve unposed photographs taken on August 18, 1886 by photographer Paul Nadar of his father, the photographer and aeronaut Félix Nadar, interviewing the chemist and sceptic Michel Eugène Chevreul on Chevreul's 100th birthday. This was the first photographic interview, sometimes called the first media interview. 

In front of the camera, Nadar and Chevreul discussed photography, color theory, Molière and Pasteur, the scientific method, the crazy ideas of balloonists, and – of course – how to live for 100 years. It was a lively and interesting conversation between two legends of the 19th century: one born before the French revolution; the other destined to see the marvels of the airplane and motion pictures.  

In 2012 ABC Australia made a commercial documentary film re-creating the interview in the style of an early motion picture.  

Auer, Paul Nadar. Le premier interview photographique. Chevreul. Félix Nadar. Paul Nadar (1999), included a reduced-size fold-out reproduction of the issue of Le Journal Illustré in which the photo-interview was published so that the images could be viewed side-by-side in sequence.

View Map + Bookmark Entry

The Telautograph July 31, 1888

On July 31, 1888 inventor Elisha Gray of Highland Park, Illinois received the first of six patents for the Telautograph, an early precursor of the fax machine.

The telautograph transmitted electrical impulses recorded by potentiometers at the sending station to servomechanisms attached to a pen at the receiving station, reproducing a drawing or signature made by the sender at the receiving station.  It was the first device to transmit drawings to a stationary sheet of paper; previous inventions in Europe had used rotating drums to record these transmissions.
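As a loose modern analogy for the arrangement just described (an illustrative sketch, not a model of Gray's circuit), each pen stroke at the sending station can be pictured as a stream of sampled (x, y) positions, standing in for the potentiometer readings, which the receiving station replays to drive its own pen:

    from typing import Iterator, List, Tuple

    Point = Tuple[float, float]

    def sending_station(stroke: List[Point]) -> Iterator[Point]:
        # Emit pen positions as they are sampled (stand-in for the two potentiometers).
        for position in stroke:
            yield position

    def receiving_station(samples: Iterator[Point]) -> List[Point]:
        # Reproduce the stroke by moving the remote pen to each received position
        # (stand-in for the servomechanisms driving the pen over stationary paper).
        reproduced = []
        for x, y in samples:
            reproduced.append((x, y))
        return reproduced

    # A short handwritten stroke, transmitted and reproduced in facsimile.
    stroke = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5), (3.0, 1.0)]
    assert receiving_station(sending_station(stroke)) == stroke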

In an interview in The Manufacturer & Builder (Vol. 24: No. 4 (1888) 5–86) Gray made this statement:

"By my invention you can sit down in your office in Chicago, take a pencil in your hand, write a message to me, and as your pencil moves, a pencil here in my laboratory moves simultaneously, and forms the same letters and words in the same way. What you write in Chicago is instantly reproduced here in fac-simile. You may write in any language, use a code or cipher, no matter, a fac-simile is produced here. If you want to draw a picture it is the same, the picture is reproduced here. The artist of your newspaper can, by this device, telegraph his pictures of a railway wreck or other occurrences just as a reporter telegraphs his description in words. The telautograph became very popular for the transmission of signatures over a distance, and in banks and large hospitals to ensure that doctors' orders and patient information were transmitted quickly and accurately" (quoted in Wikipedia article on Telautograph, accessed 03-02-2011).

Gray's patents on the telautograph are:

Gray, Elisha. "Art of Telegraphy", United States Patent 386,814, July 31, 1888.

Gray, Elisha. "Telautograph", United States Patent 386,815, July 31, 1888.

Gray, Elisha. "Telautograph", United States Patent 461,470, October 20, 1891.

Gray, Elisha. "Art of and Apparatus for Telautographic Communication", United States Patent 461,472, October 20, 1891.

Gray, Elisha. "Telautograph", United States Patent 491,347, February 7, 1893.

Gray, Elisha. "Telautograph", United States Patent 494,562, April 4, 1893.

Jean Renard Ward, History of Pen and Gesture Computing http://rwservices.no-ip.info:81/pens/biblio70.html#Gray1888b, accessed 03-02-2011

View Map + Bookmark Entry

One of the Most Dramatic Problems in the Preservation of Media 1889 – 1955

In 1889 inventor and entrepreneur George Eastman of Rochester, New York used cellulose nitrate as a base for photographic roll film. Cellulose nitrate was used for photographic and professional 35mm motion picture film until the 1950s, eventually creating one of the most dramatic problems in the preservation of media.

"It is highly inflammable and also decomposes to a dangerous condition with age. When new, nitrate film could be ignited with the heat of a cigarette; partially decomposed, it can ignite spontaneously at temperatures as low as 120 F (49C). Nitrate film burns rapidly, fuelled by its own oxygen, and releases toxic fumes.

"Decomposition: There are five stages in the decomposition of nitrate film:

"(i) Amber discolouration with fading of picture.
"(ii) The emulsion becomes adhesive and films stick together; film becomes brittle.
"(iii) The film contains gas bubbles and gives off a noxious odour
"(iv) The film is soft, welded to adjacent film and frequently covered with a viscous froth
"(v) The film mass degenerates into a brownish acrid powder.

"Film in the first and second stages can be copied, as may parts of films at the third stage of decomposition. Film at the fourth or fifth stages is useless and should be immediately destroyed by your local fire brigade because of the dangers of spontaneous combustion and chemical attack on other films. Contact your local environmental health officer about this.

"It has been estimated that the majority of nitrate film will have decomposed to an uncopiable state by the year 2000, though archives are now deep-freezing film."

View Map + Bookmark Entry

"How the Other Half Lives": Pioneering Photojournalistic Muckraking 1890

Camera lenses of the 1880s were slow, as was the emulsion of photographic plates, the technology used before film negatives. Thus photographers could not take pictures in the dark or in most interior scenes. However, in early 1887 Danish American social reformer, journalist and photographer Jacob Riis of New York learned that flash powder, a mixture of magnesium with potassium chlorate and antimony sulfide for added stability, could be used in a pistol-like device that fired cartridges, for flash photography. Using this technology Riis illustrated his book How the Other Half Lives: Studies among the Tenements of New York, published in 1890.

"The title of the book is a reference to a sentence by French writer François Rabelais, who famously wrote in Pantagruel: "one half of the world does not know how the other half lives" ("la moitié du monde ne sait pas comment l'autre vit").

"In How the Other Half Lives Riis describes the system of tenement housing that had failed, as he claims, due to greed and neglect from wealthier people. He claims a correlation between the high crime rate, drunkenness and reckless behaviour of the poor and their lack of a proper home. Chapter by chapter he uses his words and photographs to expose the conditions inhabited by the poor in a manner that “spoke directly to people's hearts”.

"He ends How the Other Half Lives with a plan of how to fix the problem. He asserts that the plan is achievable and that the upper classes will not only profit financially from such ventures, but have a moral obligation to tend to them as well.

"How the Other Half Lives: Studies among the Tenements of New York explained not only the living conditions in New York slums, but also the sweatshops in some tenements which paid workers only a few cents a day. The book explains the plight of working children; they would work in factories and at other jobs. Some children became garment workers and newsies (newsboys).

"The effect was the tearing down of New York's worst tenements, sweatshops, and the reformation of the city's schools. The book led to a decade of improvements in Lower East Side conditions, with sewers, garbage collection, and indoor plumbing all following soon after, thanks to public reaction" (Wikipedia article on How the Other Half Lives, accessed 01-12-2013).

My attention to Riis's book was drawn by an article published in The New York Times on January 11, 2014 by journalist Ted Gup, entitled "The 1890 Book I Had to Have." It described Gup's experience in 2009-2010 buying the author's annotated copy of Riis's How the Other Half Lives, his appreciation of the unique copy, and the book's relevance to socio-economic problems today.

In January 2014 an audio version of Riis's complete book was available from LibriVox.org.

View Map + Bookmark Entry

The First Illustrated Song: Precursor of the Music Video 1894

In 1894 sheet music publishers Edward B. Marks and Joe Stern hired electrician George Thomas and various performers to promote sales of their song "The Little Lost Child." Using a magic lantern, Thomas projected a series of still images on a screen during live performances of the song. As a result of the illustrated song performances, "The Little Lost Child" became a nationwide hit, selling more than two million copies of its sheet music. The illustrated song became a popular form of entertainment, and is considered the first step toward the music video.

"The Edward B. Marks Music Company was founded in 1894 by Mr. E. B. Marks, a traveling salesman of hooks, eyes, and whalebones who teamed up with a necktie salesman, Joseph W. Stern. Originally called Joseph W. Stern & Co., because Marks did not want to risk losing his regular job, it was among the first firms to usher in the modern era in pop music, which it did from a 100-square-foot basement space at 304 E. 14th Street near Second Avenue in Manhattan. Their success was launched with a song they penned themselves (Marks as lyricist and Stern as composer), a tear jerker in the popular current of the day called “The Lost Child.” This was followed up with their own first publication, another  “weeper” called “Mother Was a Lady.” (Among the many firsts accredited to Marks is the first-ever music video, when he accompanied performances of “The Lost Child” with graphic colored-lantern slides which were screened opposite the performer.)" (http://www.ebmarks.com/about/, accessed 01-23-2014).

View Map + Bookmark Entry

Röntgen Discovers X-Rays November 8, 1895

Because physicist Wilhelm Conrad Röntgen had his lab notes burned after his death, there are conflicting accounts of the discovery of X-rays, but this is a likely reconstruction: while investigating cathode rays with a fluorescent screen painted with barium platinocyanide and a Crookes tube, which he had wrapped in black cardboard so the visible light from the tube wouldn't interfere, Röntgen, then teaching at the University of Würzburg, noticed a faint green glow from the screen, about one meter away. The invisible rays coming from the tube to make the screen glow were passing through the cardboard. He found they could also pass through books and papers on his desk. These events probably occurred on November 8, 1895.

Upon investigation Röntgen found that the fluorescence was caused by unknown rays, originating from the spot where cathode rays hit the glass wall of the vacuum tube. These unknown rays he temporarily designated X-rays.

Röntgen discovered the medical use of X-rays on December 22, 1895, when he saw a picture of his wife's hand formed by X-rays on a photographic plate. This photograph of his wife's hand was the first X-ray photograph of a part of the human body.

In his initial report on the discovery Röntgen described the rays' photographic properties and their amazing ability to penetrate all substances, even living flesh. Although he was unable to determine the true physical nature of the rays, Röntgen was certain that he had discovered something entirely new. He published his initial report, "Eine neue Art von Strahlen," in the relatively obscure Sitzungs-Bericht der physikalisch-medicinischen Gesellschaft zu Würzburg at the end of December 1895. The advantage of publishing in this obscure journal was that Röntgen obtained extremely rapid publication. The publishers of the journal issued offprints of the paper for commercial sale. These offprints went through several printings, reflecting unusually wide interest in the discovery from the international scientific and medical community. X-rays were among the most rapidly adopted and exploited scientific discoveries. Within a year roughly 1000 publications appeared on the subject.

For this discovery Röntgen received the first Nobel Prize in Physics in 1901.

Hook & Norman, The Haskell F. Norman Library of Science and Medicine (1991) no. 1841.

View Map + Bookmark Entry

1900 – 1910

Early Facsimile Transmission Circa 1901 – 1907

From 1901 to 1907 electrical engineer Arthur Korn of Munich invented an effective system of telephotography, or fax, called the Bildtelegraph.

Bildtelegraph became "widespread in continental Europe especially since a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, used until the wider distribution of the radiofax. Its main competitors were the Bélinograf by Édouard Belin first, then since the 1930s the Hellschreiber, invented in 1929 by Rudolf Hell, a pioneer in mechanical image scanning and transmission" (Wikipedia article on Fax, accessed 04-22-2009).

View Map + Bookmark Entry

"Berliner Illustrirte Zeitung", the First Photographically Illustrated News Magazine 1901

In 1901, when it became technically feasible to print halftones of photographs inside a magazine, publisher Leopold Ullstein introduced this innovation into the Berliner Illustrirte Zeitung, developing it into the prototype of the modern news magazine. The magazine pioneered the photo-essay, and maintained a specialized staff and production unit for photographs as well as a photo library.

View Map + Bookmark Entry

The Photomicrographic Book 1907

In 1907 engineer Robert Goldschmidt and Belgian author, entrepreneur, visionary, lawyer and peace activist Paul Otlet published "Sur une forme nouvelle du livre: le livre microphotographique" in the Bulletin de l'Institut international de bibliographie. In this paper they "proposed the livre microphotographique as a way to alleviate the cost and space limitations imposed by the codex format. Otlet’s overarching goal was to create a World Center Library of Juridical, Social and Cultural Documentation, and he saw microfiche as way to offer a stable and durable format that was inexpensive, easy to use, easy to reproduce, and extremely compact" (Wikipedia article on Microform, accessed 04-26-2009).

View Map + Bookmark Entry

Curtis's The North American Indian 1907 – 1930

In 1907, using funds supplied by J. Pierpont Morgan, entrepreneur and photographer Edward S. Curtis began publication and sale by subscription in Seattle, Washington, of The North American Indian, Being a Series of Volumes Picturing and Describing the Indians of the United States and Alaska.

The massive work was written and illustrated by Curtis, and edited by anthropologist Frederick Webb Hodge. Volume one contained an introduction by Theodore Roosevelt. The original publication project was intended to take five years. Twenty-three years later the work was finally complete, in 20 volumes of text and illustrations and 20 large portfolios, including 723 leaves of photogravure reproductions of photographs.

"This publication follows the nineteenth-century Euro-American tradition of capturing the 'otherness' of indigenous American Indian life in photography and narrative chronicles. It is set apart by its ambitious scale, and by the striking effect of its images, which are essentially contrived reconstructions rather than true documentation.

"Originally planned for five years, the complicated project was slowed by prohibitive expenses. Public reception was mixed. Less than half of 500 projected sets were printed. Scholars, while interested in staff notes on vocabulary and lore, were dubious of Curtis’s methods of observation. In the 1970s the photographs began to enjoy a nostalgic revival in reprints, and have had a lasting, if controversial, influence on views of the American Indian" (http://curtis.library.northwestern.edu/curtis/aboutwork.html).

"The lavishly illustrated volumes were printed on the finest paper (Dutch etching stock or Japanese tissue paper) and bound in expensive leather, making the price prohibitive for all but the most avid collectors and libraries.

"Subscriptions started at $3000 on the Van Gelder paper in 1907; by 1924 the base price had risen to $4200.

"Although the plan was to sell 500 sets, it appears that Curtis secured just over 220 subscriptions over the course of the project, and printed less than 300 sets.

"In 1935 the assets of the project were liquidated, and the remaining materials were sold to the Charles Lauriat Company, a rare book dealer in Boston. Lauriat acquired nineteen unsold sets of The North American Indian, thousands of individual prints, sheets of unbound paper, and the handmade copper photogravure plates. The book dealer printed a sales brochure and sold nearly seventy more sets at the reduced price of $1245 each. The sets sold apparently included the nineteen remaining original sets plus additional ones made up from loose sheets and newly printed plates" (http://curtis.library.northwestern.edu/curtis/description.html).

View Map + Bookmark Entry

1910 – 1920

The Basis for Computed Tomography 1917

In 1917 Austrian mathematician Johann Radon, professor at Technische Universität Wien, introduced the Radon transform. He also demonstrated that the image of a three-dimensional object can be constructed from an infinite number of two-dimensional images of the object.

More than half a century later Radon's work was applied in the invention of computed tomography.
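In the two-dimensional form used in computed tomography, the Radon transform of a density function f collects its integrals along every straight line; with lines parametrized by their angle θ and signed distance s from the origin, it can be written as

\[
\mathcal{R}f(\theta, s) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\,\delta(x\cos\theta + y\sin\theta - s)\,dx\,dy,
\]

and Radon's inversion result shows that f can be recovered from the complete set of these line integrals, which is exactly the data that a CT scanner's projections supply.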

View Map + Bookmark Entry

The First Experimental Proof of General Relativity November 6, 1919

Among the experimental results predicted by Albert Einstein’s 1916 theory of general relativity was the bending of light by massive bodies due to the curvature of spacetime (space-time) in their vicinity. To test this prediction, Astronomer Royal Frank Watson Dyson and astronomer Arthur Stanley Eddington organized two expeditions—one to Principe Island off West Africa, and the other to Sobral in Brazil—for the purpose of observing the solar eclipse on May 29, 1919; the sun served as the “massive body,” and an eclipse was necessary in order to observe the light coming from other stars.

“The results were in agreement with Einstein’s prediction, the Sobral result being 1.98 ± 0.12 arcsec and the Principe result 1.61 ± 0.3 arcsec [about twice the amounts predicted by Newtonian theory]. Because of the technical difficulty of these observations, the precise value of the deflection remained a controversial issue, which was not laid to rest until the development of radio interferometric techniques in the 1970s” (Twentieth Century Physics III, 1722-23).
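For context (this is the standard general-relativistic result, not a quotation from the 1920 paper): the predicted deflection of a light ray grazing the solar limb is

\[
\delta = \frac{4GM_{\odot}}{c^{2}R_{\odot}} \approx 1.75\ \text{arcseconds},
\]

exactly twice the roughly 0.87 arcseconds obtained from the Newtonian corpuscular calculation, which is why measured deflections near 1.6 to 2.0 arcseconds told in Einstein's favor.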

On November 6, 1919  Dyson reported to a joint meeting of the Royal Society and the Royal Astronomical Society concerning A Determination of the Deflection of Light by the Sun’s Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919. The paper, reproducing photographs of the eclipse made by Eddington, was published in the Philosophical Transactions of the Royal Society in 1920.

In response to the paper, the president of the Royal Society, Sir J. J. Thomson, said,

“This is the most important result obtained in connection with the theory of gravitation since Newton’s day, and it is fitting that it should be announced at a meeting of the society so closely connected with him. . . . The result [is] one of the highest achievements of human thought” (quoted by Pais, Subtle is the Lord, 305). 

On November 7 confirmation of Einstein’s discovery was headlined in The Times of London, and on November 9 in The New York Times. This article was copied or adapted by newspapers all over the world, and it had the effect of turning Einstein, whose fame had previously been limited to the theoretical physics community, into a world-famous celebrity.  For the rest of his life Einstein remained the world’s most famous scientist, and relativity remained the puzzling, but fascinating subject that most people did not believe they could understand.

View Map + Bookmark Entry

1920 – 1930

Invention of the Iconoscope, the First Electronic Television Camera 1923

In 1923 Vladimir Zworykin, a Russian immigrant to the United States, working at Westinghouse Laboratories in Pittsburgh, patented the iconoscope, the first electronic television camera. His design, however, was incomplete:

"Vladimir Zworykin is also sometimes cited as the father of electronic television because of his invention of the iconoscope in 1923 and his invention of the kinescope in 1929. His design was one of the first to demonstrate a television system with all the features of modern picture tubes. His previous work with Rosing on electromechanical television gave him key insights into how to produce such a system, but his (and RCA's) claim to being its original inventor was largely invalidated by three facts: a) Zworykin's 1923 patent presented an incomplete design, incapable of working in its given form (it was not until 1933 that Zworykin achieved a working implementation), b) the 1923 patent application was not granted until 1938, and not until it had been seriously revised, and c) courts eventually found that RCA was in violation of the television design patented by Philo Taylor Farnsworth, whose lab Zworykin had visited while working on his designs for RCA. 

"The controversy over whether it was first Farnsworth or Zworykin who invented modern television is still hotly debated today. Some of this debate stems from the fact that while Farnsworth appears to have gotten there first, it was RCA that first marketed working television sets, and it was RCA employees who first wrote the history of television. Even though Farnsworth eventually won the legal battle over this issue, he was never able to fully capitalize financially on his invention" (http://www.statemaster.com/encyclopedia/Colour-television, accessed 12-22-2009).

View Map + Bookmark Entry

A Massive Central Library on Microform for Printing on Demand 1925

In 1925 Robert B. Goldschmidt and Paul Otlet published "La Conservation et la diffusion internationale de la pensée" as publication no. 144 of the Institut international de bibliographie (Brussels). This work described their plans for a massive library where each volume existed as master negatives and positives on microform, and where items were printed on demand for interested patrons.

View Map + Bookmark Entry

1930 – 1940

Kodachrome, the First Color Transparency Film for Cinematography and Still Photography, is Developed 1935 – December 30, 2010

Kodachrome, the first color transparency film, was invented by musicians Leopold Godowsky, Jr. and Leopold Mannes. The project began even before the two young men graduated from high school. After viewing the 1917 film Our Navy in the early two-color additive color system, Prizma Color, Mannes and his friend Godowsky began experimenting with the use of colored filters and film, patenting a new process even before their high school graduation. They continued their experimentation and research while Mannes was studying physics and piano at Harvard and Godowsky was studying violin at UCLA. Eventually, with backing from an investor, the pair was able to convince Kodak of the value of their discoveries. In 1930, they moved to Kodak's Rochester headquarters, and within three years they developed the technique of three-color emulsion on which Kodachrome was based.

Kodachrome 16mm movie film was released for sale in 1935, and in 1936 Kodachrome 35mm still and 8mm movie film were released. To some Kodachrome was the best slide and movie film ever produced. Kodak produced the film and the chemicals required to develop Kodachrome from 1935 to 2009, by which time digital photography had, for the most part, replaced film photography.

According to The New York Times, the last remaining roll of Kodachrome was developed at Dwayne's Photo in Parsons, Kansas on December 30, 2010.

(This entry was last revised on 07-10-2014.)

View Map + Bookmark Entry

Chester Carlson invents Xerography; It Becomes Successful About 20 Years Later 1938 – 1949

In 1938 American physicist, inventor, and patent attorney Chester F. Carlson of Astoria, Queens, New York invented xerography. Originally called electrophotography, xerography did not become a commercial success until the wide adoption of the xerographic copier during the late 1950s.

In 1949 the Haloid Company of Rochester, New York introduced the Model A, the first commercial xerographic copier. Manually operated, it was also known as the Ox Box. An improved version, Camera #1, was introduced in 1950. The company renamed itself Haloid Xerox in 1958, and shortened its name to Xerox Corporation in 1961.

(This entry was last revised on 01-17-2015.)

View Map + Bookmark Entry

Otto Bettmann Founds The Bettmann Archive: the Beginning of "The Visual Age" 1938

The Bettmann Archive, founded in New York in 1936 by Otto Bettmann, a refugee from Nazi Germany, contained 15,000 images by 1938.  Bettmann later characterized this period of time as "the beginning of the visual age." By 1980, the year before Bettmann sold the archive to the Kraus-Thomson Organization, the archive contained 2,000,000 images, carefully selected for their historical value, mainly under the five categories of world events, personalities, lifestyles, advertising art, and art and illustrations.

In 1984 the Kraus-Thomson Organization acquired the extensive United Press International (UPI) collection, containing millions of worldwide news and lifestyle photographs taken by photographers working for United Press International, International News Photos, Acme Newspictures, and Pacific and Atlantic.

In 1995 Corbis, a company controlled by Bill Gates, bought the Bettmann Archive.

"Beginning in 1997, Corbis spent five years selecting images of maximum historical value and saleability for digitization. More than 1.3 million images (26% of the collection) have been edited and 225,000 have been digitized. Because of this effort, more images from the Bettmann Archive are available now than ever before.

"In 2002, the Archive was moved to a state-of-the-art, sub-zero film preservation facility in western Pennsylvania. The 10,000-square-foot underground storage facility is environmentally-controlled, with specific conditions (minus -20°C, relative humidity of 35%) calculated to preserve prints, color transparencies, negatives, photographs, enclosures, and indexing systems" (http://www.corbis.com/BettMann100/Archive/Preservation.asp, accessed 01-17-2010).

View Map + Bookmark Entry

1940 – 1950

Sealing of the Crypt of Civilization May 25, 1940

On May 25, 1940 Presbyterian minister and president of Oglethorpe University in Brookhaven, Georgia, Thornwell Jacobs sealed the Oglethorpe Atlanta Crypt of Civilization in a ceremony broadcast on Atlanta's WSB radio. It was intended to be opened on May 28, 8113 CE.

Modelled after a chamber in an Egyptian pyramid, the Crypt of Civilization was a subterranean chamber, twenty feet long, ten feet wide, and ten feet high. Among the many elements of the time capsule were microfilm media (film and thin metal) used to store written information, recorded sound, and moving pictures. Apparently little or no print-on-paper material was included, even though by the time of the creation of the capsule there was already sufficient evidence that print on paper, or writing on parchment, had survived for several thousand years, while microfilm and microform media were new and untested for durability.

"In this room had been a swimming pool, the foundation of which was impervious to water. The floor was raised with concrete with a heavy layer of damp proofing applied. The gallery's extended granite walls were lined with vitreous porcelain enamel embedded in pitch. The crypt had a two-foot thick stone floor and a stone roof seven feet thick. Jacobs consulted the Bureau of Standards in Washington for technical advice for storing the contents of the crypt. Inside would be sealed stainless steel receptacles with glass linings, filled with the inert gas of nitrogen to prevent oxidation or the aging process. A stainless steel door would seal the crypt."

"Articles on the crypt in the New York Times caught the attention of Thomas Kimmwood Peters (1884-1973), an inventor and photographer of versatile experience. Peters had been the only newsreel photographer to film the San Francisco earthquake of 1906. He had worked at Karnak and Luxor, Peters was also the inventor of the first microfilm camera using 35 millimeter film to photograph documents. In 1937 Jacobs appointed Peters as archivist of the crypt." 

"From 1937 to 1940, Peters and a staff of student assistants conducted an ambitious microfilming project. The cellulose acetate base film would be placed in hermetically sealed receptacles. Peters believed, based on the Bureau of Standards testing, that the scientifically stored film would last for six centuries; he took however, as a method of precaution, a duplicate metal film, thin as paper. Inside the crypt are microfilms of the greatest classics, including the Bible, the Koran, the Iliad, and Dante's Inferno. Producer David O. Selznick donated an original copy of the script of 'Gone With the Wind.' There are more than 640,000 pages of microfilm from over eight hundred works on the arts and sciences. Peters also used similar methods for capturing and for storing still and motion pictures. Voice recordings of political leaders such as Hitler, Stalin, Mussolini, Chamberlain, and Roosevelt were included, as were voice recordings of Popeye the Sailor and a champion hog caller. To view and to hear these picture and sound records, Peters placed in the vault electric machines, microreaders, and projectors. In the event that electricity would not be in use in 8113 A.D., there is in the crypt a generator operated by a windmill to drive the apparatus as well as a seven power magnifier to read the microbook records by hand. The first item one would see upon entering the chamber is a thoughtful precaution-a machine to teach the English language so that the works would be more readily decipherable if found by people of a strange tongue.

"Thornwell Jacobs envisioned the crypt as a synoptic compilation and thus aimed for a whole 'museum' of not only accumulated formal knowledge of over six thousand years, but also 1930s popular culture. The list of items in the crypt is seemingly endless. All of the items were donated, with contributors as diverse as King Gustav V of Sweden and the Eastman Kodak Company. Some of the more curious items Peters included in the crypt were plastic toys - a Donald Duck, the Lone Ranger, and a Negro doll, as well as a set of Lincoln Logs. Peters also arranged with Anheuser Busch for a specially sealed ampule of Budweiser beer. The chamber of the crypt when finally finished in the spring of 1940, resembled a cell of an Egyptian pyramid, cluttered with artifacts on shelves and on the floor" (http://www.oglethorpe.edu/about_us/crypt_of_civilization/history_of_the_crypt.asp, accessed 04-22-2011). 

View Map + Bookmark Entry

Using Microforms to Conserve Library Space 1944

In 1944 American writer, poet, editor, inventor, genealogist, librarian and director of Wesleyan's Olin Memorial Library Fremont Rider published The Scholar and the Future of the Research Library.

In this book, unusually well designed and produced for its time, Rider detailed the increasing shortage of space in research libraries, and described how his invention of the microcard, an opaque microform, would help to solve this problem. He also claimed that American research libraries were doubling in size every sixteen years—an assertion later proved incorrect.

View Map + Bookmark Entry

The First Phototypesetter 1947

In 1947 the Fotosetter, the first phototypesetter, was invented. The first phototypesetters were mechanical devices that replaced the metal type matrices with matrices carrying the image of the letters. They replaced the caster of hot metal typesetting machines with a photographic unit.

View Map + Bookmark Entry

Dennis Gabor Invents Holography 1947

In 1947 Hungarian electrical engineer and physicist Dennis Gabor, working at British Thomson-Houston in Rugby, England, invented holography.

"Holography is a technique that allows the light scattered from an object to be recorded and later reconstructed so that it appears as if the object is in the same position relative to the recording medium as it was when recorded. The image changes as the position and orientation of the viewing system changes in exactly the same way as if the object was still present, thus making the recorded image (hologram) appear three dimensional. Holograms can also be made using other types of waves. The technique of holography can also be used to optically store, retrieve, and process information. While holography is commonly used to display static 3-D pictures, it is not yet possible to generate arbitrary scenes by a holographic volumetric display" (Wikipedia article on holography, accessed 04-26-2009).

View Map + Bookmark Entry

Edwin Land Demonstrates the Polaroid Land Camera Model 95, the First "Instant" Film Camera February 21, 1947

On February 21, 1947 American inventor Edwin H. Land, founder of Polaroid Corporation in Cambridge, Massachusetts, demonstrated an instant camera and associated film, called the Land Camera. Polaroid originally manufactured sixty units of this first camera, named the Polaroid Land Camera Model 95. It produced prints in about 1 minute. Fifty-seven were offered for sale at the Jordan Marsh department store in Boston before the 1948 Christmas holiday. Polaroid marketers guessed that the camera and film would remain in stock long enough to manufacture a second run based on customer demand, but all 57 cameras and all of the film were sold on the first day.

As I recall, my mother purchased a Model 95 and used it to take pictures of our young family. It was very exciting and convenient to see the image almost instantly after it was taken, compared to waiting several days or weeks to have film developed and printed. Over the years I owned and used several different later models of the camera. The technology was, of course, superseded by digital photography, but, like its larger cousin Kodak, Polaroid was slow to realize the extent of the disruption, and the final Polaroid "instant" film camera, the Polaroid One 600, was designed as late as 2004, before Polaroid Corporation folded in 2007.

An early, and funny, television commercial for the camera survives, as does a 1970 film in which Land speaks about his portable camera and his philosophy of instant imaging, a vision fully realized only after his death, with the invention of digital photography and the incorporation of digital cameras into cell phones.


The First Published Photographs of the Earth Taken From Space April 1947

Photography from the V-2 rocket at altitudes ranging up to 160 kilometers, by T. A. Bergstrahl,  N. R. L. report no. R-3083, issued by the Naval Research Laboratory, Washington, D.C. in April 1947 in an edition of only 47 copies, contains the first published photographs of the earth taken from space. The photographs, which show a large portion of the American southwest, were taken from cameras mounted on a V-2 (V2) rocket launched from the proving ground at White Sands, New Mexico. The rocket, which bore the number 21 but was the 20th V-2 launched at White Sands after number 1 misfired, was one of over 60 V-2 rockets captured from the Germans at the end of World War II in 1945. At that time the German rocketry program at Peenemunde was at least 20 years ahead of any other program. As part of Project Paperclip, the United States government brought both the captured V-2s and over 100 German rocketry experts (headed by Wernher von Braun) to America, where they began what became the U. S. space program. In 1946 the Upper Atmosphere Research Panel (also known as the V-2 panel) was formed to oversee a program of high-altitude experiments conducted using the V-2 rockets. On October 24, 1946 the research team was able to obtain photographs of the Earth taken from 65 miles above the surface; however, perhaps for quality reasons, these photographs were not published until 1950 (see Newell, High Altitude Rocket Research p. 288).

Bergstrahl's report announced that photographs were taken from more than 100 miles above the earth. “On 7 March 1947 the twentieth V-2 to be launched in America took to the air from the Army Ordnance Proving Ground at White Sands, New Mexico. As on several of the previous flights, an attempt was made to obtain photographs of the features of interest on the rocket and, of course, of the earth. In this attempt the effort met with considerable success. Included among the group of pictures obtained are the first ever to be taken from altitudes greater than 160 kilometers (100 miles). The quality of the photographs is fairly good. For the first time, in pictures taken at such high altitudes, it is possible to recognize clearly many geographical features. In addition a large number and variety of cloud formations were recorded by the cameras and other information of meteorological value” (p. 1).

Photographs 11 and 12 are especially notable. Number 11 includes an overlay showing landmarks in New Mexico, Arizona and the Gulf of California. The caption to number 12 states that “this picture covers approximately 500,000 square miles of southwestern United States and northern Mexico. The photographs [making up the composite] do not match exactly due to the varying camera angles.” Newell, High Altitude Rocket Research (1953), pp. 284-288. Krause, “High altitude research with V-2 rockets,” Proceedings of the American Philosophical Society 91 (1947) 430-446. Reichhart, “The first photo from space,” Air & Space Magazine, Smithsonian Institution, 1 Nov. 2006.

According to Reichhart, photography from V-2s launched at White Sands began on October 24, 1946, and there was a Universal newsreel on the topic issued in November 1946. 

In 1947 the U.S. War Department produced a documentary film on the launching of V-2 rockets from White Sands. The documentary made no reference to the photography done from the rockets.


1950 – 1960

Rosalind Franklin's Photo #51 of Crystalline DNA May 2 – May 6, 1952

Between May 2 and May 6, 1952 the English chemist and X-ray crystallographer Rosalind Franklin, working at King's College London, took photograph No. 51 of the B-form of crystalline DNA. This was her finest photograph of the substance, showing the characteristic X-shaped "Maltese cross" pattern more clearly than before.

About eight months later, on January 26, 1953, Franklin showed this photograph to physicist and molecular biologist Maurice Wilkins. Four days later, on January 30, 1953 Wilkins showed the photograph to James Watson. 

The following day Watson asked laboratory director Lawrence Bragg if he could order model components from the Cavendish Laboratory machine shop. Bragg agreed. Watson's account of Franklin's photo 51 to Francis Crick confirmed that they had the vital statistics to build a B-form model: the photo confirmed the 20Å diameter, with a 3.4Å distance between bases. This, plus the repeat distance of 34Å, a helix slope of about 40°, and the likelihood of two chains, not three, seemed to be sufficient to build a model.
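These figures fit together arithmetically: the 34Å repeat divided by the 3.4Å rise between bases gives ten base pairs per helical turn. A trivial check of the numbers cited above, in Python, purely for illustration:

    # Helical parameters of B-form DNA as cited in the entry above
    rise_per_base_angstrom = 3.4     # distance between successive bases
    repeat_angstrom = 34.0           # repeat distance along the helix axis
    diameter_angstrom = 20.0         # helix diameter

    bases_per_turn = repeat_angstrom / rise_per_base_angstrom
    print(f"base pairs per helical turn: {bases_per_turn:.0f}")   # prints 10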

Franklin's file copy of Photograph 51, labeled in her handwriting, is preserved at the J. Craig Venter Institute.


The Beginning of Positron Emission Tomography (PET) 1953

In 1953 William H. Sweet and Gordon L. Brownell at Massachusetts General Hospital, Boston, described the first positron imaging device and the first attempt to record three-dimensional data in positron detection in their paper entitled "Localization of brain tumors with positron emitters," Nucleonics XI (1953) 40-45. This was the beginning of positron emission tomography (PET).

"Despite the relatively crude nature of this imaging instrument, the brain images were markedly better than those obtained by other imaging devices. It also contained several features that were incorporated into future positron imaging devices. Data were obtained by translation of two opposed detectors using coincidence detection with mechanical motion in two dimensions and a printing mechanism to form a two-dimensional image of the positron source. This was our first attempt to record three-dimensional data in positron detection" (Brownell, A History of Positron Imaging [1999], accessed 12-25-2008)


The Beginning of Medical Ultrasonography October 29, 1953

On October 29, 1953 Inge Edler and Carl Hellmuth Hertz at Lund University in Sweden obtained the first recording of the ultrasound echo from the heart. This was the beginning of echocardiography from which diagnostic sonography, or medical ultrasonography, evolved.

"The principle for echocardiography is as follows. The vibrations in a piezoelectric crystal create a beam of high frequency sound waves that are transmitted into the chest. When the waves pass an interface, such as between the heart wall and the surrounding area or the surface of a cardiac valve, some of the sound is reflected, creating an echo. The crystal is reset, enabling it to receive the echo. The longer it took for the echo to return to the crystal, the longer the distance between the crystal and the surface that was the source of the echo. The principle was the same as for sonar, used to measure the depth of water under a vessel, only in this case you measure the distance from the structure that is the source of the echo to the chest wall."

Edler, Inge & Hertz, Carl Hellmuth. The Use of the Ultrasonic Reflectoscope for Continuous Recording of the Movements of Heart Walls. Kungl. Fysiogr. Sällsk. i Lund Förhandl. 24 (1954) 1-19.
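The pulse-echo principle quoted above reduces to a time-of-flight calculation: the round-trip delay of the echo, multiplied by the speed of sound in tissue and halved, gives the depth of the reflecting interface. A minimal sketch, assuming the conventional soft-tissue value of about 1540 m/s (an illustrative figure, not one taken from Edler and Hertz's paper):

    SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, conventional average for soft tissue (assumed)

    def echo_depth_mm(round_trip_time_s):
        """Depth of a reflecting interface from the round-trip echo delay."""
        return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2 * 1000  # metres -> mm

    # An echo returning after 130 microseconds corresponds to a depth of about 100 mm.
    print(f"{echo_depth_mm(130e-6):.1f} mm")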


Changes in Tissue Density Can be Computed 1956 – 1964

In work initiated at the University of Cape Town and Groote Schuur Hospital in early 1956, and continued briefly in mid-1957, South African-born American physicist Allen M. Cormack showed that changes in tissue density could be computed from x-ray data. His results were subsequently published in two papers:

"Representation of a Function by its Line Integrals, with Some Radiological Applications," Journal of Applied Physics 34 (1963) 2722-27; "Representation of a Function by its Line Integrals, with Some Radiological Applications. II," Journal of Applied Physics 35 (1964) 2908-13.  

Because of limitations in computing power, no machine was constructed during the 1960s. Cormack's papers generated little interest until Godfrey Hounsfield and colleagues invented computed tomography and built the first CT scanner in 1971, providing a practical application of Cormack's theories.
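Cormack's result is, in modern terms, the inversion of line-integral (projection) data, the mathematics behind filtered back-projection. The sketch below illustrates the idea with the scikit-image library and a synthetic phantom; both are modern stand-ins chosen for illustration, not Cormack's own method or data:

    # Recover a density map from its line integrals (projections).
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, resize

    image = resize(shepp_logan_phantom(), (128, 128))      # synthetic "tissue density"
    theta = np.linspace(0.0, 180.0, 120, endpoint=False)   # projection angles in degrees

    sinogram = radon(image, theta=theta)                    # line integrals (the x-ray data)
    reconstruction = iradon(sinogram, theta=theta)          # filtered back-projection

    error = np.sqrt(np.mean((reconstruction - image) ** 2))
    print(f"RMS reconstruction error: {error:.3f}")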


Beginning of Doppler Ultrasound 1957

In 1957 Shigeo Satomura  of the Institute of Scientific and Industrial Research, Osaka University, demonstrated the application of the Doppler shift in the frequency of ultrasound backscattered by moving cardiac structures.

This was the beginning of Doppler ultrasound for evaluating blood flow and pressure by bouncing high-frequency sound waves (ultrasound) off red blood cells.

S. Satomura, "Ultrasonic Doppler Method for the Inspection of Cardiac Functions," J. Acoust. Soc. Amer. 29 (1957) 1181-85.
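The relationship Satomura exploited is the classical Doppler equation: for a transmit frequency f0, reflector speed v, insonation angle θ and sound speed c, the backscattered frequency is shifted by fd = 2 f0 v cos θ / c. A small sketch with illustrative numbers (the values are assumptions, not Satomura's data):

    import math

    def doppler_shift_hz(f0_hz, velocity_m_s, angle_deg, c_m_s=1540.0):
        """Doppler shift of ultrasound backscattered by a moving reflector."""
        return 2.0 * f0_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / c_m_s

    # A 2 MHz beam, blood moving at 0.5 m/s, insonated at 60 degrees:
    print(f"{doppler_shift_hz(2e6, 0.5, 60):.0f} Hz")   # roughly 650 Hz, an audible shift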


Invention of the Image Scanner; Creation of the First Digital Image 1957

In 1957 Russell A. Kirsch and a team at the U.S. National Bureau of Standards, using the SEAC computer, built the first image scanner, a drum scanner. Using that device they scanned a photograph to create the first digital image:

"The first image ever scanned on this machine was a 5 cm square photograph of Kirsch's then-three-month-old son, Walden. The black and white image had a resolution of 176 pixels on a side" (Wikipedia article on Image Scanner, accessed 04-01-2009).


The First Obstetrical or Gynecological Sonograms 1958

In 1958 Ian Donald, Regius Professor of Midwifery at the University of Glasgow, and his colleagues John MacVicar, an obstetrician, and Tom Brown, an engineer, published a paper in The Lancet entitled "Investigation of Abdominal Masses by Pulsed Ultrasound." This article described their experience using an ultrasound scanner on 100 patients, and included 12 illustrations of various gynecologic disorders (e.g., ovarian cysts, fibroids) as well as demonstrations of obstetric findings such as the fetal skull at 34 weeks' gestation, "hydramnios" (polyhydramnios), and twins in breech presentation. The somewhat grainy and indistinct "Compound B-mode contact scanner" images were the first published obstetrical or gynecological sonograms.

J. M. Norman (ed.), Morton's Medical Bibliography, 5th ed. (1991) no. 2682.


The Corona Satellite Series: America's First Imaging Satellite Program June 1959 – May 31, 1972

In June 1959 KH-1, the first of the Corona series of American strategic imaging reconnaissance satellites was launched. Produced and operated by the Central Intelligence Agency Directorate of Science and Technology with assistance from the U.S. Air Force, the Corona satellites were used for photographic surveillance of the Soviet Union, the People's Republic of China and other areas. The 145th and last Corona satellite was launched on May 25, 1972 with its film recovered on May 31, 1972. Over its lifetime, CORONA provided photographic coverage totaling approximately 750,000,000 square miles of the earth’s surface.

"The Corona satellites used 31,500 feet (9,600 meters) of special 70 millimeter film with 24 inch (60 centimeter) focal length cameras. Initially orbiting at altitudes from 165 to 460 kilometers above the surface of the Earth, the cameras could resolve images on the ground down to 7.5 meters in diameter. The two KH-4 systems improved this resolution to 2.75 meters and 1.8 meters respectively, because they operated at lower orbital altitudes. . . .

"The first dozen or more Corona satellites and their launches were cloaked with disinformation as being part of a space technology development program called the Discoverer program. The first test launches for the Corona/Discoverer were carried out early in 1959. The first Corona launch containing a camera was carried out in June 1959 with the cover name Discoverer 4. This was a 750 kilogram satellite launched by a Thor-Agena rocket.

"The plan for the Corona program was for its satellites to return canisters of exposed film to the Earth in re-entry capsules, called by the slang term "film buckets", which were to be recovered in mid-air by a specially-equipped U.S. Air Force planes during their parachute descent. (The buckets were designed to float on the water for a short period of time for possible recovery by U.S. Navy ships, and then to sink if the recovery failed, via a water-dissolvable plug made of salt at the base of the capsule. This was for secrecy purposes.)" (Wikipedia article on Corona (satellite) accessed 11-29-2010).

"The return capsule of the Discoverer 13 mission, which launched August 10, 1960, was successfully recovered the next day. This was the first time that any object had been recovered successfully from orbit. After the mission of Discoverer 14, launch on August 18, 1960, its film bucket was successfully retrieved two days later by a C-119 Flying Boxcar transport plane. This was the first successful return of photographic film from orbit."

"CORONA enabled the US to specify verifiable terms of the Strategic Arms Limitation Treaty (SALT) with the Soviet Union in 1971. US negotiators confidently knew that photointerpreters could monitor changes in the size and characteristics of missile launchers, bombers, and submarines. Satellite imagery became the mainstay of the US arms-control verification process" (Central Intelligence Agency, CORONA: America's First Imaging Satellite Program, accessed 11-08-2014).

 


The First Photograph of Earth from an Orbiting Satellite August 14, 1959

The first photograph of the earth from an orbiting satellite was taken by the U.S. Explorer 6 on August 14, 1959. The crude image shows a sun-lit area of the central Pacific Ocean and its cloud cover. The picture was made when the satellite was about 17,000 miles above the surface of the earth, as it was crossing Mexico. The signals were received at the tracking station at South Point, Hawaii (also known as Ka Lae).

(This entry was last revised on 11-08-2014.)


The Xerox 914 September 16, 1959

Xerox 914.

On September 16, 1959 Haloid Xerox, Rochester, New York, introduced the Xerox 914, the first successful commercial plain paper xerographic copier, roughly the size of a desk.

". . .  commercial models were not available until March 1960. The first machine, delivered to a Pennsylvania metal-fastener maker, weighed nearly 650 pounds. It needed a carpenter to uncrate it, an employee with 'key operator' training, and its own 20-amp circuit. In an episode of Mad Men, set in 1962, the arrival of the hulking 914 helps get Peggy Olson her own office, after she tells her boss, 'It’s hard to do business and be credible when I’m sharing with a Xerox machine' " (http://www.theatlantic.com/magazine/archive/2010/07/the-mother-of-all-invention/8123/, accessed 06-11-2010).


1960 – 1970

The TIROS 1 Satellite Transmits the First Television Picture from Space April 1, 1960

On April 1, 1960 the first Television InfraRed Observation Satellite (TIROS 1), the first successful low-Earth orbital weather satellite, was launched by NASA from Cape Canaveral, Florida. That day the satellite transmitted the first television picture of the earth from space.


The First to Create Three-Dimensional Images of the Human Body Using a Computer 1964

"Boeing Man" or "Human Figure," a wireframe drawing printed on a Gerber Plotter.  It was used as a standard figure of a pilot.

In 1964 William A. Fetter, an art director at The Boeing Company in Seattle, Washington, supervised development of a computer program that allowed him to create the first three-dimensional images of the human body through computer graphics. Using this program Fetter and his team produced the first computer model of a human figure, for use in the study of aircraft cockpit design; it was called the "First Man" or "Boeing Man." Though Fetter's wireframe drawings could be called commercial art, they were of a high aesthetic standard.

Herzogenrath & Nierhoff-Wielk, Ex Machina – Frühe Computergrafik bis 1979. Die Sammlung Franke . . . / Ex Machina – Early Computer Graphics up to 1979 (2007) 239.


Bitzer & Willson Invent the First Plasma Video Display (Neon Orange) 1964

In 1964 Donald Bitzer, H. Gene Slottow, and Robert Willson at the University of Illinois at Urbana-Champaign invented the first plasma video display for the PLATO Computer System.

The display was monochrome neon orange and incorporated both memory and bitmapped graphics. Built by the glass manufacturer Owens-Illinois, the flat panels were marketed under the name "Digivue."


Woodrow Bledsoe Originates Automated Facial Recognition 1964 – 1966

From 1964 to 1966 Woodrow W. "Bledsoe, along with Helen Chan and Charles Bisson of Panoramic Research, Palo Alto, California, researched programming computers to recognize human faces (Bledsoe 1966a, 1966b; Bledsoe and Chan 1965). Because the funding was provided by an unnamed intelligence agency, little of the work was published. Given a large database of images—in effect, a book of mug shots—and a photograph, the problem was to select from the database a small set of records such that one of the image records matched the photograph. The success of the program could be measured in terms of the ratio of the answer list to the number of records in the database. Bledsoe (1966a) described the following difficulties:

" 'This recognition problem is made difficult by the great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc. Some other attempts at facial recognition by machine have allowed for little or no variability in these quantities. Yet the method of correlation (or pattern matching) of unprocessed optical data, which is often used by some researchers, is certain to fail in cases where the variability is great. In particular, the correlation is very low between two pictures of the same person with two different head rotations.'

"This project was labeled man-machine because the human extracted the coordinates of a set of features from the photographs, which were then used by the computer for recognition. Using a GRAFACON, or RAND TABLET, the operator would extract the coordinates of features such as the center of pupils, the inside corner of eyes, the outside corner of eyes, point of widows peak, and so on. From these coordinates, a list of 20 distances, such as width of mouth and width of eyes, pupil to pupil, were computed. These operators could process about 40 pictures an hour. When building the database, the name of the person in the photograph was associated with the list of computed distances and stored in the computer. In the recognition phase, the set of distances was compared with the corresponding distance for each photograph, yielding a distance between the photograph and the database record. The closest records are returned.

"This brief description is an oversimplification that fails in general because it is unlikely that any two pictures would match in head rotation, lean, tilt, and scale (distance from the camera). Thus, each set of distances is normalized to represent the face in a frontal orientation. To accomplish this normalization, the program first tries to determine the tilt, the lean, and the rotation. Then, using these angles, the computer undoes the effect of these transformations on the computed distances. To compute these angles, the computer must know the three-dimensional geometry of the head. Because the actual heads were unavailable, Bledsoe (1964) used a standard head derived from measurements on seven heads.

"After Bledsoe left PRI [Panoramic Research, Inc.] in 1966, this work was continued at the Stanford Research Institute, primarily by Peter Hart. In experiments performed on a database of over 2000 photographs, the computer consistently outperformed humans when presented with the same recognition tasks (Bledsoe 1968). Peter Hart (1996) enthusiastically recalled the project with the exclamation, 'It really worked!' " (Faculty Council, University of Texas at Austin, In Memoriam Woodrow W. Bledsoe, accessed 05-15-2009).

Bledsoe, W. W. 1964. The Model Method in Facial Recognition, Technical Report PRI 15, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W., and Chan, H. 1965. A Man-Machine Facial Recognition System-Some Preliminary Results, Technical Report PRI 19A, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W. 1966a. Man-Machine Facial Recognition: Report on a Large-Scale Experiment, Technical Report PRI 22, Panoramic Research, Inc., Palo Alto, California.

Bledsoe, W. W. 1966b. Some Results on Multicategory Pattern Recognition. Journal of the Association for Computing Machinery 13(2):304-316.

Bledsoe, W. W. 1968. Semiautomatic Facial Recognition, Technical Report SRI Project 6693, Stanford Research Institute, Menlo Park, California.
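The man-machine procedure described in the passage above, reducing each face to a vector of normalized distances between hand-marked landmarks and returning the closest database records, amounts to nearest-neighbour search in a small feature space. A minimal sketch under that reading; the feature values and names are invented, and this is in no way Bledsoe's own code:

    import numpy as np

    # Each face is summarized by a vector of distances between hand-marked landmarks
    # (width of mouth, pupil-to-pupil distance, and so on), as in the description above.
    database = {
        "subject_a": np.array([62.0, 38.5, 51.2, 29.8]),   # invented measurements
        "subject_b": np.array([58.4, 41.0, 47.9, 31.5]),
        "subject_c": np.array([65.1, 36.2, 53.0, 28.4]),
    }

    def closest_records(query, k=2):
        """Return the k database entries whose distance vectors best match the query."""
        scores = {name: float(np.linalg.norm(query - vec)) for name, vec in database.items()}
        return sorted(scores, key=scores.get)[:k]

    print(closest_records(np.array([61.0, 39.0, 50.5, 30.0])))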


Aaron Klug Invents Digital Image Processing 1966

In 1966 English molecular biologist Aaron Klug at the University of Cambridge formulated a method for digital image processing of two-dimensional images.

A. Klug and D. J. de Rosier, “Optical filtering of electron micrographs: Reconstruction of one-sided images,” Nature 212 (1966): 29-32.


Cyrus Levinthal Builds the First System for Interactive Display of Molecular Structures 1966

In 1966, using Project MAC, an early time-sharing system at MIT, Cyrus Levinthal built the first system for the interactive display of molecular structures.

"This program allowed the study of short-range interaction between atoms and the "online manipulation" of molecular structures. The display terminal (nicknamed Kluge) was a monochrome oscilloscope (figures 1 and 2), showing the structures in wireframe fashion (figures 3 and 4). Three-dimensional effect was achieved by having the structure rotate constantly on the screen. To compensate for any ambiguity as to the actual sense of the rotation, the rate of rotation could be controlled by globe-shaped device on which the user rested his/her hand (an ancestor of today's trackball). Technical details of this system were published in 1968 (Levinthal et al.). What could be the full potential of such a set-up was not completely settled at the time, but there was no doubt that it was paving the way for the future. Thus, this is the conclusion of Cyrus Levinthal's description of the system in Scientific American (p. 52):

It is too early to evaluate the usefulness of the man-computer combination in solving real problems of molecular biology. It does seem likely, however, that only with this combination can the investigator use his "chemical insight" in an effective way. We already know that we can use the computer to build and display models of large molecules and that this procedure can be very useful in helping us to understand how such molecules function. But it may still be a few years before we have learned just how useful it is for the investigator to be able to interact with the computer while the molecular model is being constructed.

"Shortly before his death in 1990, Cyrus Levinthal penned a short biographical account of his early work in molecular graphics. The text of this account can be found here."

In January 2014 two short films produced with the interactive molecular graphics and modeling system devised by Cyrus Levinthal and his collaborators in the mid-1960s were available at this link.
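The constant rotation that gave Levinthal's wireframe displays their sense of depth is, in essence, repeated application of a rotation matrix to the atomic coordinates before projecting them onto the screen plane. A minimal sketch of that idea; the coordinates and step size are invented, and this is not the Project MAC program:

    import numpy as np

    atoms = np.array([[1.0, 0.0, 0.0],      # invented 3-D coordinates of a few atoms
                      [0.0, 1.2, 0.5],
                      [-0.8, 0.3, -1.1]])

    def rotate_about_y(points, angle_rad):
        """Rotate a set of 3-D points about the vertical (y) axis."""
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        return points @ rot.T

    # One frame per step: rotate a little, then drop the depth coordinate to "display".
    for step in range(4):
        frame = rotate_about_y(atoms, step * 0.1)
        screen_xy = frame[:, :2]          # orthographic projection onto the screen plane
        print(screen_xy.round(2))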


NCR Issues the Smallest Published Edition of the Bible, and the First to Reach the Moon 1966

In 1966 the Research and Development department of National Cash Register (NCR) of Dayton, Ohio produced an edition of all 1245 pages of the World Publishing Company's No. 715 Bible on a single 2" x 1-3/4" photochromatic microform (PCMI). The microform contained both the Old Testament on 773 pages and the New Testament on 746 pages, and was issued in a paper sleeve with the title on the cover and information about the process inside and on the back.

On the microform each page of double-column Bible text was about 0.5 mm wide and 1 mm high. Each text character was 8 µm high (i.e., 8/1000ths of a millimeter). NCR noted on the paper wallet provided with the microform that this represented a linear reduction of about 250:1, or an area reduction of 62,500:1. This would correspond to the original text being circa 2 mm high. To put this into perspective, NCR also noted that if this reduction were applied to the millions of books on the 270+ miles of shelving in the Library of Congress, the entire Library of Congress as it existed in 1966 could be stored in six standard filing cabinets.
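NCR's reduction figures are internally consistent, as a quick calculation shows: a linear reduction of 250:1 squares to an area reduction of 62,500:1, and an 8 µm character height scales back to an original of about 2 mm:

    linear_reduction = 250
    print(linear_reduction ** 2)                 # 62500 -> the quoted area reduction

    character_height_um = 8                      # height of a character on the microform
    original_height_mm = character_height_um * linear_reduction / 1000
    print(original_height_mm)                    # 2.0 mm, the original type size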

♦ In 1971 Apollo 14 lunar module pilot Edgar D. Mitchell carried 100 of the microform bibles aboard the lunar module Antares, as confirmed by NASA's official manifest. Launched January 31, 1971, Mitchell and the bibles reached the Fra Mauro formation of the Moon on February 5 aboard the Antares before returning to the command module for the voyage back to Earth. This was the first edition of the Bible to reach the Moon, and probably the first book of any kind to reach the Moon and return. A second parcel containing 200 microform Bibles flew in Edgar Mitchell's command module "PPK" bag in lunar orbit, and did not land. These 200 copies represented extra Bibles to be used if something happened to the lunar module copies.


Stephen A. Benton Invents the Rainbow Hologram or Benton Hologram 1968

In 1968 Stephen A. Benton, then of Polaroid Corporation, and later at MIT's Media Lab, invented the Benton hologram or rainbow hologram, a hologram designed to be viewed under white light illumination rather than the laser light that was required to view holograms before this invention.

"The rainbow holography recording process uses a horizontal slit to eliminate vertical parallax in the output image, greatly reducing spectral blur while preserving three-dimensionality for most observers. A viewer moving up or down in front of a rainbow hologram sees changing spectral colors rather than different vertical perspectives. Stereopsis and horizontal motion parallax, two relatively powerful cues to depth, are preserved. The holograms found on credit cards are examples of rainbow holograms" (Wikipedia article on rainbow hologram, accessed 11-23-2012).


Aaron Klug Invents Three-Dimensional Image Processing January 1968

In January 1968 English molecular biologist Aaron Klug described techniques for the reconstruction of three-dimensional structures from electron micrographs, thus founding the processing of three-dimensional digital images.

D. J. de Rosier and A. Klug, “Reconstruction of three dimensional structures from electron micrographs,” Nature 217 (1968) 130-34.


Cybernetic Serendipity: The First Widely-Attended International Exhibition of Computer Art August 2 – October 20, 1968

From August 2  to October 20, 1968 Cybernetic Serendipity: The Computer and the Arts was exhibited at the Institute of Contemporary Arts in London, curated by British art critic, editor, and Assistant Director of the Institute of Contemporary Arts Jasia Reichardt, at the suggestion of Max Bense. This was the first widely attended international exhibition of computer art, and the first exhibition to attempt to demonstrate all aspects of computer-aided creative activity: art, music, poetry, dance, sculpture, animation.


"It drew together 325 participants from many countries; attendance figures reached somewhere between 45,000 and 60,000 (accounts differ) and it received wide and generally positive press coverage ranging from the Daily Mirror newspaper to the fashion magazine Vogue. A scaled-down version toured to the Corcoran Gallery in Washington DC and then the Exploratorium, the museum of science, art and human perception in San Francisco. It took Reichardt three years of fundraising, travelling and planning" (Mason, a computer in the art room. the origins of british computer arts 1950-80 [2008] 101-102)

For the catalogue of the show Reichardt edited a special issue of Studio International magazine, consisting of 100 pages with 300 images, publication of which coincided with the exhibition in 1968. The color frontispiece reproduced a color computer graphic by the American John C. Mott-Smith "made by time-lapse photography successively exposed through coloured filters, of an oscilloscope connected to a computer." The cover of the special issue was designed by the Polish-British painter, illustrator, film-maker, and stage designer Franciszka Themerson, incorporating computer graphics from the exhibition. Laid into copies of the special issue were 4 leaves entitled "Cybernetic Serendipity Music," each page providing a program for one of eight tapes of music played during the show. This information presumably was not available in time to be printed in the issue of Studio International.

Reichardt's Introduction  (p. 5) included the following:

"The exhibition is divided into three sections, and these sections are represented in the catalogue in a different order:

"1. Computer-generated graphics, computer-animated films, computer-composed and -played music, and computer poems and texts.

"2. Cybernetic devices as works of art, cybernetic enironments, remoted-control robots and painting machines.

"3. Machines demonstrating the uses of computers and an environment dealing with the history of cybernetics.

"Cybernetic Sernedipity deals with possibilites rather than achievements, and in this sense it is prematurely optimistic. There are no heroic claims to be made because computers have so far neither revolutionized music, nor art, nor poetry, the same way that they have revolutionized science.

"There are two main points which make this exhibition and this catalogue unusual in the contexts in which art exhibitions and catalogues are normally seen. The first is that no visitor to the exhibition, unless he reads all the notes relating to all the works, will know whether he is looking at something made by an artist, engineer, mathematician, or architect. Nor is it particularly important to know the background of all the makers of the various robots, machines and graphics- it will not alter their impact, although it might make us see them differently.

"The other point is more significant.

"New media, such as plastics, or new systems such as visual music notation and the parameters of concrete poetry, inevitably alter the shape of art, the characteristics of music, and content of poetry. New possibilities extend the range of expression of those creative poeple whom we identify as painters, film makers, composers and poets. It is very rare, however, that new media and new systems should bring in their wake new people to become involved in creative activity, be it composiing music drawing, constructing or writing.

"This has happened with the advent of computers. The engineers for whom the graphic plotter driven by a computer represented nothing more than a means of solving certain problems visually, have occasionally become so interested in the possibilities of this visual output, that they have started to make drawings which bear no practical application, and for which the only real motives are the desire to explore, and the sheer pelasure of seeing a drawing materialize. Thus people who would never have put pencil to paper, or brush to canvas, have started making images, both still and animated, which approximate and often look identical to what we call 'art' and put in public galleries.

"This is the most important single revelation of this exhibition." 

Some copies of the special issue were purchased by Motif Editions of London. Those copies do not include the ICA logo on the upper cover and do not print the price of 25s. They also substitute two blanks for the two leaves of ads printed in the back of the regular issue, and they do not include the separate 4 leaves of programs of computer music. These special copies were sold by Motif Editions with a large (75 x 52 cm) portfolio containing seven 30 x 20 inch color lithographs with a descriptive table of contents. The artists included Masao Komura/Makoto Ohtake/Koji Fujino (Computer Technique Group); Masao Komura/Kunio Yamanaka (Computer Technique Group); Maughan S. Mason; Boeing Computer Graphics; Kerry Strand; Charles "Chuck" Csuri/James Shaffer; and Donald K. Robbins. The art works were titled respectively 'Running Cola is Africa', 'Return to Square', 'Maughanogram', 'Human Figure', 'The Snail', 'Random War' and '3D Checkerboard Pattern'. Copies of the regular edition contained a full-page ad for the Motif Editions portfolio, for sale at £5 plus postage, or £1 plus postage for individual prints.

In 1969 Frederick A. Praeger Publishers of New York and Washington, DC issued a cloth-bound second edition of the Cybernetic Serendipity catalogue with a dust jacket design adapted from the original Studio International cover. It was priced $8.95. The American edition probably coincided with the exhibition of the material at the Corcoran Gallery in Washington. The Praeger edition included an index on p. 101, and no ads. Comparison of the text of the 1968 and 1969 editions shows that the 1969 edition contains numerous revisions and changes.

In 2005 Jasia Reichardt looked back on the exhibition with these comments:

"One of the journals dealing with the Computer and the Arts in the mid-sixties, was Computers and the Humanities. In September 1967, Leslie Mezei of the University of Toronto, opened his article on 'Computers and the Visual Arts' in the September issue, as follows: 'Although there is much interest in applying the computer to various areas of the visual arts, few real accomplishments have been recorded so far. Two of the causes for this lack of progress are technical difficulty of processing two-dimensional images and the complexity and expense of the equipment and the software. Still the current explosive growth in computer graphics and automatic picture processing technology are likely to have dramatic effects in this area in the next few years.' The development of picture processing technology took longer than Mezei had anticipated, partly because both the hardware and the software continued to be expensive. He also pointed out that most of the pictures in existence in 1967 were produced mainly as a hobby and he discussed the work of Michael Noll, Charles Csuri, Jack Citron, Frieder Nake, Georg Nees, and H.P. Paterson. All these names are familiar to us today as the pioneers of computer art history. Mezei himself too was a computer artist and produced series of images using maple leaf design and other national Canadian themes. Most of the computer art in 1967 was made with mechanical computer plotters, on CRT displays with a light pen or from scanned photographs. Mathematical equations that produced curves, lines or dots, and techniques to introduce randomness, all played their part in those early pictures. Art made with these techniques was instantaneously recognisable as having been produced either by mechanical means or with a program. It didn't actually look as if it had been done by hand. Then, and even now, most art made with the computer carries an indelible computer signature. The possibility of computer poetry and art was first mentioned in 1949. By the beginning of the 1950s it was a topic of conversation at universities and scientific establishments, and by the time computer graphics arrived on the scene, the artists were scientists, engineers, architects. Computer graphics were exhibited for the first time in 1965 in Germany and in America. 1965 was also the year when plans were laid for a show that later came to be called 'Cybernetic Serendipity' and presented at the ICA in London in 1968. It was the first exhibition to attempt to demonstrate all aspects of computer-aided creative activity: art, music, poetry, dance, sculpture, animation. The principal idea was to examine the role of cybernetics in contemporary arts. The exhibition included robots, poetry, music and painting machines, as well as all sorts of works where chance was an important ingredient. It was an intellectual exercise that became a spectacular exhibition in the summer of 1968" (http://www.medienkunstnetz.de/exhibitions/serendipity/images/1/, accessed 06-16-2012). This website reproduces photographs of the actual exhibition and a poster printed for the show.


Willard Boyle & George Smith Develop the CCD, a Sensor for Recording Images 1969

Working at Bell Labs, in 1969 Willard Boyle and George E. Smith invented the charge-coupled device (CCD), a sensor for recording images.

Forty years later, in 2009, Boyle and Smith shared half of the Nobel Prize in Physics "for the invention of an imaging semiconductor circuit – the CCD sensor." The Nobel Prize Committee prepared a report putting the discovery of the CCD in perspective. It may be accessed at http://nobelprize.org/nobel_prizes/physics/laureates/2009/phyadv09.pdf

"The lab [Bell Labs] was working on the picture phone and on the development of semiconductor bubble memory. Merging these two initiatives, Boyle and Smith conceived of the design of what they termed 'Charge "Bubble" Devices'. The essence of the design was the ability to transfer charge along the surface of a semiconductor. As the CCD started its life as a memory device, one could only "inject" charge into the device at an input register. However, it was immediately clear that the CCD could receive charge via the photoelectric effect and electronic images could be created. By 1969, Bell researchers were able to capture images with simple linear devices; thus the CCD was born. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild was the first with commercial devices and by 1974 had a linear 500 element device and a 2-D 100 x 100 pixel device. Under the leadership of Kazuo Iwama, Sony also started a big development effort on CCDs involving a significant investment. Eventually, Sony managed to mass produce CCDs for their camcorders. Before this happened, Iwama died in August 1982. Subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution" (Wikipedia article on Charge-coupled device, accessed 10-06-2009).


1970 – 1980

Godfrey Hounsfield Invents Computed Tomography (CT) 1971

In 1971 English electrical engineer Godfrey Hounsfield at EMI's Central Research Laboratories in Hayes, Middlesex, invented computed tomography (CT), the first application of computers to medical imaging.


The First Patent for MRI March 17, 1972

On March 17, 1972 Armenian-American medical practitioner and inventor Raymond V. Damadian filed a patent for "An Apparatus and Method for Detecting Cancer in Tissue."

Damadian's patent 3,789,832 was granted on February 5, 1974. This was the first patent on the use of Nuclear Magnetic Resonance for scanning the human body, but it did not describe a method for generating pictures from such a scan, or precisely how such a scan might be achieved.


The Beginnings of the Landsat Program July 23, 1972

On July 23, 1972 NASA launched the Earth Resources Technology Satellite, later renamed Landsat 1, the first satellite of the Landsat program for the acquisition of satellite imagery of Earth.

"The most recent [satellite in the series], Landsat 8, was launched on February 11, 2013. The instruments on the Landsat satellites have acquired millions of images. The images, archived in the United States and at Landsat receiving stations around the world, are a unique resource for global change research and applications in agriculturecartographygeologyforestryregional planningsurveillance and education" (Wikipedia article on Landsat Program, accessed 10-20-2013).


One of the Most Widely Distributed Photographic Images: The Blue Marble Photograph of the Earth December 7, 1972

On December 7, 1972 Commander Eugene Cernan, Command Module Pilot Ronald Evans, and Lunar Module Pilot Harrison Schmitt on the Apollo 17 spacecraft took the Blue Marble photograph of the earth from a distance of about 45,000 kilometers (28,000 miles). The image is one of the first to show a fully illuminated Earth, as the astronauts had the Sun behind them when they took it. To the astronauts Earth had the appearance of a glass marble. The photograph became one of the most widely distributed of all photographic images.

Apollo 17 was the eleventh and final manned mission in the United States Apollo space program. In 2012 it remained the most recent manned Moon landing and the most recent manned flight beyond low Earth orbit.

In January 2012 NASA released its 2012 version of the Blue Marble image. Using a planet-pointing satellite, Suomi NPP, the space agency created an extremely high-resolution photograph of our watery world. The Suomi satellite compiled the image from small sections that it photographed over the course of January 4, 2012. The pictures were later stitched together.

In July 2012 many technical details regarding the origins of the 1972 Blue Marble photo were available from Eric Hartwell's InfoDabble website.


The Beginnings of Magnetic Resonance Imaging 1973

In 1973 American chemist Paul Lauterbur, working at the State University of New York at Stony Brook, developed a way to generate the first Magnetic Resonance Images (MRI), in 2D and 3D, using gradients. Lauterbur described an imaging technique that removed the usual resolution limits due to the wavelength of the imaging field. He used

"two fields: one interacting with the object under investigation, the other restricting this interaction to a small region. Rotation of the fields relative to the object produces a series of one-dimensional projections of the interacting regions, from which two- or three-dimensional images of their spatial distribution can be reconstructed" (http://www.nature.com/physics/looking-back/lauterbur/index.html, accessed 11-23-2008).

This was the beginning of magnetic resonance imaging.
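The key to Lauterbur's method is that, in a field varying linearly with position, the resonance frequency itself becomes a position label: with gradient G the Larmor frequency is f(x) = γ(B0 + Gx), so a spectrum acquired with the gradient switched on is a one-dimensional projection of the spin density along the gradient direction. A minimal numerical sketch; the field strength, gradient and object are invented values:

    import numpy as np

    GAMMA = 42.58e6            # gyromagnetic ratio of hydrogen, Hz per tesla
    B0 = 1.5                   # main field in tesla (illustrative value)
    GRADIENT = 10e-3           # field gradient in tesla per metre (illustrative value)

    x = np.linspace(-0.1, 0.1, 201)              # positions across a 20 cm field of view
    spin_density = np.exp(-((x - 0.03) / 0.01) ** 2) + np.exp(-((x + 0.02) / 0.02) ** 2)

    frequency = GAMMA * (B0 + GRADIENT * x)      # Larmor frequency as a function of position
    # With the gradient applied, the signal at each frequency is proportional to the
    # number of spins at the corresponding position: the spectrum is a 1-D projection.
    offset_khz = (frequency - GAMMA * B0) / 1e3
    print(f"frequency spread across the object: {offset_khz.min():.1f} to {offset_khz.max():.1f} kHz")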

"When Lauterbur first submitted his paper with his discoveries to Nature, the paper was rejected by the editors of the journal. Lauterbur persisted and requested them to review it again, upon which time it was published and is now acknowledged as a classic Nature paper.  The Nature editors pointed out that the pictures accompanying the paper were too fuzzy, although they were the first images to show the difference between heavy water and ordinary water. Lauterbur said of the initial rejection: 'You could write the entire history of science in the last 50 years in terms of papers rejected by Science or Nature' (Wikipedia article on Paul Lauterbur, accessed 03-08-2012).

Lauterbur, Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance, Nature 242 (1973), 190–191.

♦ Lauterbur's Nobel Lecture is available from the Nobel website. You can also watch a 65 minute video of Lauterbur delivering the lecture from this link.


Robert Ledley Develops the First Whole-Body CT Scanner 1973

In 1973 American dentist and biophysicist Robert S. Ledley of Georgetown University and colleagues developed the ACTA 0100 CT scanner (Automatic Computerized Transverse Axial), the first whole-body computed tomography scanner.

"This machine had 30 photomultiplier tubes as detectors and completed a scan in only 9 translate/rotate cycles, much faster than the EMI-scanner. It used a DEC PDP11/34 minicomputer both to operate the servo-mechanisms and to acquire and process the images. The Pfizer drug company acquired the prototype from the university, along with rights to manufacture it. Pfizer then began making copies of the prototype, calling it the "200FS" (FS meaning Fast Scan), which were selling as fast as they could make them. This unit produced images in a 256x256 matrix, with much better definition than the EMI-Scanner's 80" (Wikipedia article on Computed Tomography, accessed 04-15-2009).

Ledley R. S., Di Chiro G, Luessenhop A. J., Twigg H. L. "Computerized transaxial x-ray tomography of the human body," Science 186, No. 4160 (1974) 207-212.


Raymond Kurzweil Introduces the First Omni-Font Optical Character Recognition System 1974

In 1974 Raymond Kurzweil founded Kurzweil Computer Products, Inc. and developed the first omni-font optical character recognition system— a computer program capable of recognizing text printed in any normal font.

"Before that time, scanners had only been able to read text written in a few fonts. He decided that the best application of this technology would be to create a reading machine, which would allow blind people to understand written text by having a computer read it to them aloud. However, this device required the invention of two enabling technologies—the CCD [charge-coupled device] flatbed scanner and the text-to-speech synthesizer. Development of these technologies was completed at other institutions such as Bell Labs, and on January 13, 1976, the finished product was unveiled during a news conference headed by him and the leaders of the National Federation of the Blind. Called the Kurzweil Reading Machine, the device covered an entire tabletop" (Wikipedia article on Ray Kurzweil, accessed 03-08-2012).


Invention of the Digital Camera December 1975

In December 1975 American electrical engineer Stephen J. Sasson of the Eastman Kodak Company invented the digital camera using a charge-coupled device.

"He [Sasson] set about constructing the digital circuitry from scratch, using oscilloscope measurements as a guide. There were no images to look at until the entire prototype — an 8-pound (3.6-kilogram), toaster-size contraption — was assembled. In December 1975, Sasson and his chief technician persuaded a lab assistant to pose for them. The black-and-white image, captured at a resolution of .01 megapixels (10,000 pixels), took 23 seconds to record onto a digital cassette tape and another 23 seconds to read off a playback unit onto a television. Then it popped up on the screen.

" 'You could see the silhouette of her hair,' Sasson said. But her face was a blur of static. She was less than happy with the photograph and left, saying 'You need work,' he said. But Sasson already knew the solution: reversing a set of wires, the assistant's face was restored" (Wikipedia article on Stephen J. Sasson, accessed 04-22-2009).

In 1978, Sasson and his supervisor Gareth A. Lloyd were issued United States Patent 4,131,919 for their digital camera.

There is an image of Sasson's digital camera at this link.


The First Commercially Available Laser Printer 1976

In 1976 IBM introduced the IBM 3800, the first commercially available laser printer, for use with its mainframes. This "room-sized" machine was the first printer to combine laser technology and electrophotography, and it sped the printing of bank statements, premium notices, and other high-volume documents. It was supplied only as a peripheral for IBM systems and was not available separately.


First Print-to-Speech Reading Machine 1976

In 1976 Raymond Kurzweil introduced the Kurzweil Reading Machine, the first practical application of OCR technology. The Kurzweil Reading Machine combined omni-font OCR, a flat-bed scanner, and text-to-speech synthesis to create the first print-to-speech reading machine for the blind. It was the first computer to transform random text into computer-spoken words, enabling blind and visually impaired people to read any printed materials. 


Making MRI Feasible 1977

In 1977 British physicist Peter Mansfield developed a mathematical technique that would allow NMR scans to take seconds rather than hours and produce clearer images than the technique Paul Lauterbur developed in 1973.

Mansfield showed how gradients in the magnetic field could be mathematically analysed, which made it possible to develop a useful nuclear magnetic resonance imaging technique. Mansfield also showed how extremely fast imaging could be achievable. This became technically possible a decade later.

P. Mansfield and A. A. Maudsley, "Medical imaging by NMR," Brit. J. Radiol. 50 (1977) 188.
P. Mansfield, "Multi-planar image formation using NMR spin echoes," J. Phys. C: Solid State Phys. 10 (1977) L55–L58.

The references are from Mansfield's Nobel Lecture. In December 2013 a 64 minute video of Mansfield delivering his lecture was available at this link.


Early Interactive Computing and Virtual Reality 1978 – 1979

The term hypermedia is used as a logical extension of the term hypertext, in which graphics, audio, video, plain text and hyperlinks intertwine to create a medium of information that is generally non-linear. Funded by ARPA, the Aspen Movie Map was an early hypermedia project produced in 1978-79 by the Architecture Machine Group (ARC MAC) at MIT under the direction of Andrew Lippman. It allowed the user to take a virtual tour through the city of Aspen, Colorado.

"ARPA funding during the late 1970s was subject to the military application requirements of the notorious Mansfield Amendment introduced by Mike Mansfield (which had severely limited funding for hypertext researchers like Douglas Engelbart).

"The Aspen Movie Map's military application was to solve the problem of quickly familiarizing soldiers with new territory. The Department of Defense had been deeply impressed by the success of Operation Entebbe in 1976, where the Israeli commandos had quickly built a crude replica of the airport and practiced in it before attacking the real thing. DOD hoped that the Movie Map would show the way to a future where computers could instantly create a three-dimensional simulation of a hostile environment at much lower cost and in less time (see virtual reality).

"While the Movie Map has been referred to as an early example of interactive video, it is perhaps more accurate to describe it as a pioneering example of interactive computing. Video, audio, still images, and metadata were retrieved from a database and assembled on the fly by the computer (an Interdata minicomputer running the MagicSix operating system) redirecting its actions based upon user input; video was the principle, but not sole affordance of the interaction" (Wikipedia article on Aspen Movie Map, accessed 04-16-2009).


1980 – 1990

Flexible Image Transport System (FITS) 1981

D. C. Wells, E. W. Greisen, and R. H. Harten developed the open FITS (Flexible Image Transport System) format, which was first standardized in 1981. It is

"a digital file format used to store, transmit, and manipulate scientific and other images. FITS is the most commonly used digital file format in astronomy. Unlike many image formats, FITS is designed specifically for scientific data and hence includes many provisions for describing photometric and spatial calibration information, together with image origin metadata.

"A major feature of the FITS format is that image metadata is stored in a human readable ASCII header, so that an interested user can examine the headers to investigate a file of unknown provenance. Each FITS file consists of one or more headers containing ASCII card images (80 character fixed-length strings) that carry keyword/value pairs, interleaved between data blocks. The keyword/value pairs provide information such as size, origin, coordinates, binary data format, free-form comments, history of the data, and anything else the creator desires: while many keywords are reserved for FITS use, the standard allows arbitrary use of the rest of the name-space" (Wikipedia article on FITS, accessed 03-24-2010).

Because of these features FITS became a very useful format for the long-term preservation of digital images. It was adopted as a standard by NASA, and was also adopted by the Vatican Library.
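The header structure described above, 80-character keyword/value "card images" followed by data blocks, can be inspected with the astropy library, a modern FITS reader and writer that long postdates the 1981 standard and is used here purely for illustration:

    # Write and re-read a tiny FITS file to show the keyword/value header cards.
    # Requires the astropy package.
    import numpy as np
    from astropy.io import fits

    data = np.arange(100.0).reshape(10, 10)          # a trivial 10 x 10 "image"
    hdu = fits.PrimaryHDU(data)
    hdu.header["ORIGIN"] = "illustrative example"    # free-form provenance keyword
    hdu.header["OBSERVER"] = "unknown"
    hdu.writeto("example.fits", overwrite=True)

    with fits.open("example.fits") as hdul:
        print(repr(hdul[0].header))                  # human-readable ASCII cards
        print(hdul[0].data.shape)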


The First Commercial Electronic Camera--Not Digital August 1981 – 1997

In August 1981 Sony announced the first commercial electronic camera, the Sony Mavica (Magnetic Video Camera). Not a digital camera, it was actually a video camera that took video freeze-frames.

Sony's first commercially marketed digital camera was the Sony Digital Mavica MVC-FD5 (1997).


The First Scanner? November 1982

In November 1982 IBM introduced the Scanmaster 1, a mainframe computer terminal designed to scan, transmit and store images of documents electronically.


Among the Earliest Practical Digital Libraries 1985

In 1985 an IBM team began scanning the papers related to Columbus' discovery of the New World at El Archivo General de Indias de Sevilla (AGI), Seville, Spain.

"To coincide with the 500th anniversary of Columbus' landfall in the West Indies, the AGI project was to capture 10% of the collection estimated to consist of 86,000,000 pages. By 1992, it had indeed collected about 9,000,000 digital image pages onto optical disks, together with a set of finding aids." This was among the earliest practical digital libraries.


The First Map of the Functioning Structure of an Entire Brain November 12, 1986

On November 12, 1986 J. G. White, E. Southgate, J. N. Thomson and S[idney] Brenner published "The Structure of the Nervous System of the Nematode Caenorhabditis elegans," Philosophical Transactions of the Royal Society B: Biological Sciences 314, no. 1165 (1986) 1-340. The first map of the functioning structure of an entire brain at the cellular level, this paper has been called the beginning of connectomics.

"The structure and connectivity of the nervous system of the nematode Caenorhabditis elegans has been deduced from reconstructions of electron micrographs of serial sections. The hermaphrodite nervous system has a total complement of 302 neurons, which are arranged in an essentially invariant structure. Neurons with similar morphologies and connectivities have been grouped together into classes; there are 118 such classes. Neurons have simple morphologies with few, if any, branches. Processes from neurons run in defined positions within bundles of parallel processes, synaptic connections being made en passant. Process bundles are arranged longitudinally and circumferentially and are often adjacent to ridges of hypodermis. Neurons are generally highly locally connected, making synaptic connections with many of their neighbours. Muscle cells have arms that run out to process bundles containing motoneuron axons. Here they receive their synaptic input in defined regions along the surface of the bundles, where motoneuron axons reside. Most of the morphologically identifiable synaptic connections in a typical animal are described. These consist of about 5000 chemical synapses, 2000 neuromuscular junctions and 600 gap junctions" (Abstract).


The First Digital Image Database of Cultural Materials 1987

To photograph, store, and organize the art work of the painter Andrew Wyeth in Chadds Ford, Pennsylvania, in 1987 Fred Mintzer, Henry Gladney and colleagues at IBM developed a high-resolution digital camera for photographing art works and a PC-based database system to store and index the images. The system was used by Wyeth's staff to photograph, store, and organize about 10,000 images. "Pictures were scanned at a spatial resolution of 2500 by 3000 pixels and a color depth of 24 bits-per-pixel, and were color calibrated." This was the first digital image database of cultural materials.
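The storage such scans implied is worth a back-of-the-envelope calculation: at 2500 by 3000 pixels and 24 bits per pixel, each uncompressed image occupies roughly 22 megabytes, and ten thousand of them over 200 gigabytes, a formidable volume for late-1980s hardware:

    width, height, bits_per_pixel = 2500, 3000, 24
    images = 10_000

    bytes_per_image = width * height * bits_per_pixel // 8
    print(f"{bytes_per_image / 1e6:.1f} MB per uncompressed image")          # 22.5 MB
    print(f"{images * bytes_per_image / 1e9:.0f} GB for the whole archive")  # 225 GB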

View Map + Bookmark Entry

The Origins of Adobe Photoshop 1987 – February 1990

In 1987, American software engineer Thomas Knoll, a PhD student at the University of Michigan, began writing a program on his Macintosh Plus to display grayscale images on a monochrome display. This program, which he called Display, caught the attention of his brother John Knoll, an employee at Industrial Light & Magic, who urged Thomas to turn Display into a fully-fledged image editing program. Thomas took a six-month break from his studies in 1988 to collaborate with John on the program, after which Thomas renamed the program ImagePro. But since the name ImagePro was already taken, Thomas renamed the program Photoshop, and worked out a short-term deal with scanner manufacturer Barneyscan to distribute copies of the program with a slide scanner.  Roughly 200 copies were shipped under that arrangement.  

During this time, John Knoll gave a demonstration of the program to engineers at Apple in Cupertino, and to Russell Brown, art director at Adobe Systems in San Jose. In September 1988 Adobe decided to purchase a license to distribute the program. In February 1990 Adobe released Photoshop 1.0 for the Macintosh, and Photoshop became the de facto industry standard in raster graphics editing.

View Map + Bookmark Entry

The National Center for Biotechnology Information is Founded November 4, 1988

Recognizing the importance of computerized information processing methods for the conduct of biomedical research, on November 4, 1988 Representative Claude Pepper, a former Senator, sponsored legislation that established the National Center for Biotechnology Information (NCBI) as a division of the National Library of Medicine (NLM), Bethesda, Maryland. NLM was chosen for its experience in creating and maintaining biomedical databases, and because, as part of NIH, it could establish an intramural research program in computational molecular biology.

View Map + Bookmark Entry

The First Holographic Video Display 1989

MIT's Media Lab developed the first holographic video display in 1989. The volume of the hologram was just 25 cubic millimeters, smaller than a thimble.

View Map + Bookmark Entry

1990 – 2000

The First Magnetic Resonance Image of Human Brain Function August – November 1, 1991

In August 1991 John (Jack) Belliveau, a scientist at the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital in Boston, presented the first unambiguous images of human brain activity changes observed with magnetic resonance (MR) at the 10th annual meeting of the Society for Magnetic Resonance in Medicine in San Francisco. "Using dynamic susceptibility contrast (DSC) MRI with a gadolinium-based Gd-DPTA contrast agent, Belliveau mapped the changes in cerebral blood volume (CBV) following neural activation in a subject responding to a simple visual stimulus."

On November 1, 1991, the paper "Functional mapping of the human visual cortex by magnetic resonance imaging," by Dr. Belliveau and colleagues appeared in Science, 254, No. 5032, 716-9. On the cover of the issue was an artist's rendering of an image showing a human head, seen from behind, with a disc of the skull removed, the exposed visual cortex registering a squiggle of activity.

View Map + Bookmark Entry

The First Image Posted to the Web 1992

The first image posted to the web, in 1992, was a photograph of a CERN singing group called Les Horribles Cernettes.

View Map + Bookmark Entry

First Library of Digital Images on the Internet 1993

In 1993 Fred Mintzer and colleagues at IBM photographed and developed a database of about 20,000 digital images for the Vatican Library. This was the first library of digital images on the Internet.

View Map + Bookmark Entry

The Electronic Beowulf 1993

In 1993 the British Library and Kevin S. Kiernan at the University of Kentucky embarked on the Electronic Beowulf project, an effort to photograph and publish high resolution electronic copies of the manuscript. The Electronic Beowulf was a pioneering effort in the digital preservation, restoration, and dissemination of manuscript material.

"The equipment we are using to capture the images is the Roche/Kontron ProgRes 3012 digital camera, which can scan any text, from a letter or a word to an entire page, at 2000 x 3000 pixels in 24-bit color. The resulting images at this maximum resolution are enormous, about 21-25 MB, and tax the capabilities of the biggest machines. Three or four images - three or four letters or words if that is what we are scanning - will fill up an 88 MB hard disk, and we have found that no single image of this size can be processed in real time without at least 64 MB of RAM. In our first experiments in June with the camera and its dedicated hardware, we transmitted a half-dozen images by phone line from the Conservation Studio of the British Library to the Wenner Gren Imaging Laboratory at the University of Kentucky, where identical hardware was set up to receive the data. Most of these images are now available on the Internet through anonymous ftp or Mosaic."

View Map + Bookmark Entry

The Mosaic Web Browser March 4, 1993

On March 4, 1993 Marc Andreessen of the Software Development Group, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign announced on Usenet the creation of the NCSA Mosaic browser 0.10, and the introduction of the image tag.

View Map + Bookmark Entry

The First Digital Offset Press July 1993

In July 1993 Benny Landa of Indigo in Rehovot, Israel introduced the Indigo E-Print 1000 digital offset press, incorporating ElectroInk technology, also called ink-based electrophotography. The E-Print 1000 was the first digital offset press.

View Map + Bookmark Entry

FishCam: The Oldest Nearly Continuously Operational Webcam 1994

While working on the Netscape web browser in 1994, Louis J. "Lou" Montulli II built the FishCam, one of the earliest live-image websites. Netscape hosted the FishCam long after the company itself had ceased to exist as Netscape. After a short hiatus, the camera found a new host in 2009. When this note was written in May 2009 the FishCam was operational and remained one of the longest nearly continuously running live websites.

View Map + Bookmark Entry

First Consumer-Priced Digital Camera February 17, 1994

On February 17, 1994 Apple introduced the first consumer-priced digital camera that worked with a personal computer—the QuickTake 100.

View Map + Bookmark Entry

The First Full-Time Online Webcam Girl April 1996 – 2003

In April 1996, during her junior year at Dickinson College in Carlisle, Pennsylvania, Internet personality and lifecaster Jennifer Ringley launched the popular website JenniCam. She was the first full-time online webcam girl.

"Previously, live webcams transmitted static shots from cameras aimed through windows or at coffee pots. Ringley's innovation was simply to allow others to view her daily activities.

"In June 2008, CNET hailed JenniCam as one of the greatest defunct websites in history.

"Regarded by some as a conceptual artist, Ringley viewed her site as a straight-forward document of her life. She did not wish to filter the events that were shown on her camera, so sometimes she was shown nude or engaging in sexual behavior, including sexual intercourse and masturbation. This was a new use of Internet technology in 1996 and viewers were stimulated both for its sociological implications and for sexual arousal. Surveillance became conceptual art, as noted by Mark Tribe in 'New Media Art':

In Web sites like JenniCAM, in which a young woman installed Web cameras in her home to expose her everyday actions to online viewers. . . surveillance became a source of voyeuristic and exhibitionistic excitement. . . Institutional surveillance and the invasion of privacy have been widely explored by New Media artists.'

"Ringley's genuine desires to maintain the purity of the cam-eye view of her life eventually created the need to establish that she was within her rights as an adult to broadcast such information, in the legal sense, and that it was not harmful to other adults. Unlike later for-profit webcam services, Ringley did not spend her day displaying her private parts, and she spent much more time discussing her romantic life than she did her sex life. Ringley maintained her webcam site for seven years" (Wikipedia article on Jennifer Ringley, accessed 05-08-2009).

View Map + Bookmark Entry

The Digital Michelangelo Project 1998

Marc Levoy and team began The Digital Michelangelo Project at Stanford University in 1998 using laser scanners to digitize the statues of Michelangelo, as well as 1,163 fragments of the Forma Urbis Romae, a giant marble map of ancient Rome.

The quality of the scans was so high that the Italian government would not permit the release of the full data set on the Internet; however, the Stanford researchers built a system called ScanView that allowed viewing of details of specific parts of the statue, including parts that would be inaccessible to a normal museum visitor. In December 2013 ScanView could be downloaded at this link.

The laser scan data for Michelangelo's David was utilized in its cleaning and restoration that began in September 2002. This eventually resulted in a 2004 book entitled Exploring David: Diagnostic Tests and State of Conservation.

"In preparation for this restoration, the Galleria dell'Accademia undertook an ambitious 10-year program of scientific study of the statue and its condition. Led by Professor Mauro Matteini of CNR-ICVBC, a team of Italian scientists studied every inch of the statue using color photography, radiography (i.e. X-rays), ultraviolet fluorescence and thermographic imaging, and several other modalities. In addition, by scraping off microsamples and performing in-situ analyses, the mineralogy and chemistry of the statue and its contaminants were characterized. Finally, finite element structural analyses were performed to determine the origin of hairline cracks that are visible on his ankles and the tree stump, to decide if intervention was necessary. (They decided it wasn't; these cracks arose in 1871, when the statue briefly tilted forward 3 degrees due to settling of the ground in the Piazza Signoria. This tilt was one of the reasons they moved the statue to the Galleria dell'Accademia.)  

"The results of this diagnostic campaign are summarized in the book Exploring David . . . . The book, written in English, also contains a history of the statue and its past restorations, a visual analysis of the chisel marks of Michelangelo as evident from the statue surface, and an essay by museum director Franca Falletti on the difficulties of restoring famous artworks. . . .  

"Aside from its sweeping scientific vision, what is remarkable about this book is that many of the studies employed a three-dimensional computer model of the statue - the model created by us during the Digital Michelangelo Project. Although we worked hard to create this model, and we envisioned 3D models eventually being used to support art conservation, we did not expect such uses to become practical so soon. After all, our model of the David is huge; outside our laboratory and a few others in the computer graphics field, little software exists that can manipulate such large models. However, with help from Roberto Scopigno and his team at CNR-Pisa, museum director Franca Falletti prodded, encouraged, and cajoled the scientists working under her direction to use our model wherever possible. We contributed a chapter to this book, on the scanning of the statue, but we take no credit for its use in the rest of the book. In fact, to us at Stanford University, the timing of our scanning project relative to the statue's restoration and the creation of this book seems merely fortuitious. However, Falletti insists that she had this use of our model in mind all along! In any case, this is a landmark book - the most extensive use that has ever been made of a 3D computer model in an art conservation project" (http://graphics.stanford.edu/projects/mich/book/book.html, accessed 12-23-2009).

On July 21, 2009 the team announced that they had a "full-resolution (1/4mm) 3D model of Michelangelo's 5-meter statue of David", containing "about 1 billion polygons."

View Map + Bookmark Entry

2000 – 2005

Origins of Google Earth 2001

The KH-4B Corona reconnaissance satellite

The prehistory of Google Earth began in 2001 when a software development firm called Keyhole, Inc., was founded in Mountain View, California, which happened also to be Google's base of operations. Keyhole specialized in geospatial data visualization applications. The name "Keyhole" paid homage to the original KH reconnaissance satellites, also known as Corona satellites, which were operated by the U.S. between 1959 and 1972. Google acquired Keyhole in 2004, and Keyhole's Earth Viewer reached a wide public as Google Earth in 2005. Other aspects of Keyhole technology were incorporated into Google Maps. 

View Map + Bookmark Entry

The World's Largest Book --Spectacularly Beautiful December 2003

Choki Lhamo (age 14) from Trongsa with BHUTAN, the world's largest published book. Photograph: Jay Talbott / National Geographic Society.

In December 2003 Michael Hawley, a scientist at MIT, issued the world's largest book—Bhutan: a Visual Odyssey Across the Kingdom. The work, which was also one of the most beautiful books ever published, was undertaken as a philanthropic endeavor. It had 112 pages, weighed 133 pounds, and rested on an included custom-built aluminum stand; its page openings were 7 x 5 feet. The work was initially offered in exchange for a $10,000 contribution. However, in November 2008 Amazon.com was offering copies for sale for $30,000 each.

A more practical and affordable way to appreciate this spectacular volume would be the trade edition published in 2004, a copy of which I acquired. In February 2009 this was offered for sale by Amazon.com for $100.00. In my opinion this is one of the finest and most spectacular trade books designed, printed and bound in America, though my aging eyes are not entirely comfortable reading white text against a black background. The clothbound volume, with an unusual dust jacket printed on both sides, measures 15¼ x 12¼ inches (39 x 31 cm).

View Map + Bookmark Entry

Flickr, the Photo & Video Sharing Social Networking Site, is Launched February 2004

In February 2004 Flickr, the photo and video sharing and social networking site, was launched by Ludicorp, a Vancouver, Canada-based company founded by Stewart Butterfield and Caterina Fake. It emerged from tools originally created for Ludicorp's Game Neverending, a web-based massively multiplayer online game. Its organizational tools allowed photos to be tagged and browsed by folksonomic means.

Ludicorp and Flickr were purchased by Yahoo in March 2005.

"Yahoo reported in June 2011 that Flickr had a total of 51 million registered members and 80 million unique visitors. In August 2011 the site reported that it was hosting more than 6 billion images and this number continues to grow steadily according to reporting sources." (Wikipedia article on Flickr, accessed 03-23-2012).

View Map + Bookmark Entry

Image Manipulation in Scientific Publications July 6, 2004

On July 6, 2004 The Journal of Cell Biology began screening digital images submitted with electronic manuscripts to determine whether these images were manipulated in ways that misrepresented experimental results. The image-screening system that checked for image manipulation took 30 minutes per paper.

View Map + Bookmark Entry

2005 – 2010

Google Earth is Launched 2005

In 2005 Google launched Google Earth, a virtual globe, map and geographical information program, which mapped the Earth by the superimposition of images obtained by satellite. The program, which Google acquired when it purchased Keyhole, Inc., was originally called EarthViewer 3D. 

View Map + Bookmark Entry

The "Selfie" Social Media Phenomenon Circa 2005

"In the early 2000s, before Facebook became the dominant online social network, self-taken photographs were particularly common on MySpace. However, writer Kate Losse recounts that between 2006 and 2009 (when Facebook became more popular than MySpace), the "MySpace pic" (typically "an amateurish, flash-blinded self-portrait, often taken in front of a bathroom mirror") became an indication of bad taste for users of the newer Facebook social network. Early Facebook portraits, in contrast, were usually well-focused and more formal, taken by others from distance. In 2009 in the image hosting and video hosting website Flickr, Flickr users used 'selfies' to describe seemingly endless self-portraits posted by teenage girls. According to Losse, improvements in design—especially the front-facing camera copied by the iPhone 4 (2010) from Korean and Japanese mobile phones, mobile photo apps such as Instagram, and selfie sites such as ItisMee—led to the resurgence of selfies in the early 2010s.

"Initially popular with young people, selfies gained wider popularity over time. By the end of 2012, Time magazine considered selfie one of the "top 10 buzzwords" of that year; although selfies had existed long before, it was in 2012 that the term "really hit the big time". According to a 2013 survey, two-thirds of Australian women age 18–35 take selfies—the most common purpose for which is posting on Facebook. A poll commissioned by smartphone and camera maker Samsung found that selfies make up 30% of the photos taken by people aged 18–24.

"By 2013, the word "selfie" had become commonplace enough to be monitored for inclusion in the online version of the Oxford English Dictionary. In November 2013, the word "selfie" was announced as being the "word of the year" by the Oxford English Dictionary, which gave the word itself an Australian origin.

"Selfies have also taken beyond the earth. A space selfie is a selfie that is taken in space. This include selfies taken by astronauts, machines and by an indirect method to have self-portrait photograph on earth retaken in space" (Wikipedia article on Selfie, accessed 02-27-2014).

View Map + Bookmark Entry

Connectomes: Elements of Connections Forming the Human Brain September 30, 2005

On September 30, 2005 neuroscientists Olaf Sporns of Indiana University, Giulio Tononi of the University of Wisconsin, and Rolf Kötter of Heinrich Heine University, Düsseldorf, Germany, published "The Human Connectome: A Structural Description of the Human Brain," PLoS Computational Biology 1 (4). This paper, together with the PhD thesis of Patric Hagmann from the Université de Lausanne, From diffusion MRI to brain connectomics, coined the term connectome.

In their 2005 paper Sporns et al. wrote:

"To understand the functioning of a network, one must know its elements and their interconnections. The purpose of this article is to discuss research strategies aimed at a comprehensive structural description of the network of elements and connections forming the human brain. We propose to call this dataset the human 'connectome,' and we argue that it is fundamentally important in cognitive neuroscience and neuropsychology. The connectome will significantly increase our understanding of how functional brain states emerge from their underlying structural substrate, and will provide new mechanistic insights into how brain function is affected if this structural substrate is disrupted."

In his 2005 Ph.D. thesis, From diffusion MRI to brain connectomics, Hagmann wrote:

"It is clear that, like the genome, which is much more than just a juxtaposition of genes, the set of all neuronal connections in the brain is much more than the sum of their individual components. The genome is an entity it-self, as it is from the subtle gene interaction that [life] emerges. In a similar manner, one could consider the brain connectome, set of all neuronal connections, as one single entity, thus emphasizing the fact that the huge brain neuronal communication capacity and computational power critically relies on this subtle and incredibly complex connectivity architecture" (Wikipedia article on Connectome, accessed 12-28-2010).

View Map + Bookmark Entry

Pixar at MOMA December 14, 2005

On December 14, 2005 the Museum of Modern Art (MoMA), New York, opened PIXAR: 20 Years of Animation:

"The Most Extensive Gallery Exhibition that MoMA has ever devoted to Animation along with a Retrospective of Pixar Features and Shorts."

Notably, MoMA found it unnecessary to characterize the exhibition as "computer animation," since by this time virtually all animation was done by computer. MoMA also published a 175-page printed catalogue of the exhibition.

View Map + Bookmark Entry

Disney Acquires Pixar January 24, 2006

On January 24, 2006 The Walt Disney Company, born in the days of manual animation, acquired Pixar, the computer animation company, making Steve Jobs the largest Disney stockholder.

View Map + Bookmark Entry

92% of Cameras Sold are Digital February 2006

By some estimates 92 percent of all cameras sold in 2006 were digital.

View Map + Bookmark Entry

Yahoo and Reuters Found "YouWitnessNews" December 5, 2006

On December 5, 2006 Yahoo and Reuters introduced programs to place photographs and videos of news events submitted by the public, including cell phone photos and videos, throughout Reuters.com and Yahoo's new service entitled YouWitnessNews. Reuters said that in 2007 it would also start to distribute some of the submissions to the thousands of print, online and broadcast media outlets that subscribed to its news service. Reuters also said that it hoped to develop a service devoted entirely to user-submitted photographs and video.

View Map + Bookmark Entry

Photosynth Demonstrated March 2007

In March 2007 physicist and software engineer Blaise Agüera y Arcas, architect of Seadragon and co-creator of Photosynth, demonstrated Photosynth in a video downloadable at the TED website at this link.

Using techniques of computational bibliography, in collaboration with Paul Needham at Princeton's Scheide Library, Agüera y Arcas also did significant original research in the technology of the earliest printing from movable type.

View Map + Bookmark Entry

Google Introduces Street View in Google Maps May 25, 2007 – May 12, 2008

On May 25, 2007 Google introduced the Street View feature of Google Maps in the United States.  It provided panoramic views from positions along many streets, eventually including even views of the very small road on which I live in Novato, California, suggesting that coverage of many parts of the United States became extremely comprehensive.  

On April 16, 2008, Google fully integrated Street View into Google Earth 4.3.

In response to complaints about privacy, on May 12, 2008 Google announced in its "latlong" blog that it had introduced face-blurring technology for its images of Manhattan. It eventually applied the technology to all locations.

View Map + Bookmark Entry

Brainbow: A Colorful Technique to Visualize Brain Circuitry November 2007

Three brainbows of mouse neurons from Lichtman and Sanes, 2008: (A) a motor nerve innervating ear muscle; (B) an axon tract in the brain stem; (C) the hippocampal dentate gyrus.

In November 2007 Jeff W. Lichtman and Joshua R. Sanes, both professors of Molecular & Cellular Biology in the Department of Neurobiology at Harvard Medical School, and colleagues, published "Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system," Nature 450 (7166): 56–62. doi:10.1038/nature06293. This described the visualization process they called "Brainbow."

"Detailed analysis of neuronal network architecture requires the development of new methods. Here we present strategies to visualize synaptic circuits by genetically labelling neurons with multiple, distinct colours. In Brainbow transgenes, Cre/lox recombination is used to create a stochastic choice of expression between three or more fluorescent proteins (XFPs). Integration of tandem Brainbow copies in transgenic mice yielded combinatorial XFP expression, and thus many colours, thereby providing a way to distinguish adjacent neurons and visualize other cellular interactions. As a demonstration, we reconstructed hundreds of neighbouring axons and multiple synaptic contacts in one small volume of a cerebellar lobe exhibiting approximately 90 colours. The expression in some lines also allowed us to map glial territories and follow glial cells and neurons over time in vivo. The ability of the Brainbow system to label uniquely many individual cells within a population may facilitate the analysis of neuronal circuitry on a large scale." (From the Nature abstract).

View Map + Bookmark Entry

ImageNet, an Image Database and Ontology 2008

In 2008 Principal Investigators Li Fei-Fei of Stanford Vision Lab and Kai Li of the Department of Computer Science at Princeton, and associates, advisors and friends, began building ImageNet, an image database and ontology, through a crowdsourcing process. In October 2013 the database contained 14,197,122 images, with 21,841 synsets indexed.

The ImageNet database is organized according to the WordNet hierarchy.

"Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a 'synonym set' or 'synset'. There are more than 100,000 synsets in WordNet, majority of them are nouns (80,000+). In ImageNet, we aim to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. In its completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy."

Among its many applications, ImageNet provides a standard by which the accuracy of image recognition software can be measured.
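
In practice, ImageNet-style evaluation labels each image with a WordNet synset ID and scores a classifier by whether the true synset appears among its top-k predictions. The sketch below is a minimal illustration; the synset IDs, image IDs, and predictions are made-up examples, not drawn from the actual database.

    # Minimal sketch of ImageNet-style evaluation: each image carries a WordNet
    # synset ID, and a classifier is scored by whether the true synset appears
    # among its top-k predictions. All IDs below are made-up examples.
    def top_k_accuracy(ground_truth, predictions, k=5):
        """ground_truth: {image_id: synset_id};
        predictions: {image_id: ranked list of synset_ids}."""
        hits = sum(1 for img, truth in ground_truth.items()
                   if truth in predictions.get(img, [])[:k])
        return hits / len(ground_truth)

    ground_truth = {"img_001": "n02084071", "img_002": "n02121620"}
    predictions = {
        "img_001": ["n02084071", "n02121620", "n01503061"],   # correct at rank 1
        "img_002": ["n01503061", "n02084071", "n02121620"],   # correct at rank 3
    }
    print(top_k_accuracy(ground_truth, predictions, k=5))   # 1.0
    print(top_k_accuracy(ground_truth, predictions, k=1))   # 0.5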

View Map + Bookmark Entry

Viewing the Illustrations of a Journal Article in Three Dimensions September 30, 2008

On September 30, 2008 the Optical Society and the National Library of Medicine announced Interactive Science Publishing.

" 'ISP' represents a new direction for OSA publications. The ISP articles, which appear in OSA journals, link out to large 2D and 3D datasets—such as a CT scan of the human chest—that can be viewed interactively with special software developed by OSA in cooperation with Kitware, Inc., and the National Library of Medicine."

View Map + Bookmark Entry

First Images of Extra-Solar Planets Taken from the Visible Spectrum: Planets Located 130 Light-Years from Earth November 13, 2008

On November 13, 2008 NASA and the Lawrence Livermore National Laboratory announced the first-ever pictures taken from the visible spectrum of extrasolar planets. The images were glimpsed by the Gemini North and Keck telescopes on the Mauna Kea mountaintop in Hawaii. 

"British and American researchers snapped the first ever visible-light pictures of three extrasolar planets orbiting the star HR8799.  HR8799 is about 1.5 times the size of the sun, located 130 light-years away in the Pegasus constellation.  Observers can probably see this star through binoculars, scientists said.

"To identify the planets, researchers compared images of the system, known to contain planets HF8799b, HF8799c, and HF8799d.  In each image faint objects were detected, and by comparing images from over the years, it was confirmed that these were the planets in their expected positions and that they orbit their star in a counterclockwise direction.

"NASA's Hubble Space Telescope at about the same time picked up images of a fourth planet, somewhat unexpectedly.  The new planet, Fomalhaut b orbits the bright southern star Fomalhaut, part of the constellation Piscis Australis (Southern Fish) and is relatively massive -- about three times the size of Jupiter.  The planet orbits 10.7 billion miles from its home star and is approximately 25 light-years from Earth."  (quoations from Daily Tech November 16, 2008).

View Map + Bookmark Entry

Google Earth Incorporates Historical Imagery February 2, 2009

On February 2, 2009 Google launched Google Earth 5.0. Among the most significant features were Historical Imagery, Touring, and 3D Mars.

" ♦ Historical Imagery: Until today, Google Earth displayed only one image of a given place at a given time. With this new feature, you can now move back and forth in time to reveal imagery from years and even decades past, revealing changes over time. Try flying south of San Francisco in Google Earth and turning on the new time slider (click the "clock" icon in the toolbar) to witness the transformation of Silicon Valley from a farming community to the tech capital of the world over the past 50 years or so.  

" ♦ Touring: One of the key challenges we have faced in developing Google Earth has been making it easier for people to tell stories. People have created wonderful layers to share with the world, but they have often asked for a way to guide others through them. The Touring feature makes it simple to create an easily sharable, narrated, fly-through tour just by clicking the record button and navigating through your tour destinations.

" ♦ 3D Mars: This is the latest stop in our virtual tour of the galaxies, made possible by a collaboration with NASA. By selecting "Mars" from the toolbar in Google Earth, you can access a 3D map of the Red Planet featuring the latest high-resolution imagery, 3D terrain, and annotations showing landing sites and lots of other interesting features" (Official Google Blog, http://googleblog.blogspot.com/2009/02/dive-into-new-google-earth.html, accessed 11-29-2010).

View Map + Bookmark Entry

The Human Connectome Project July 2009

In July 2009 the Human Connectome Project, a five-year project sponsored by sixteen components of the National Institutes of Health (NIH) and split between two consortia of research institutions, was launched as the first of three Grand Challenges of the National Institutes of Health's Blueprint for Neuroscience Research.

The project was described as "an ambitious effort to map the neural pathways that underlie human brain function. The overarching purpose of the Project is to acquire and share data about the structural and functional connectivity of the human brain. It will greatly advance the capabilities for imaging and analyzing brain connections, resulting in improved sensitivity, resolution, and utility, thereby accelerating progress in the emerging field of human connectomics. Altogether, the Human Connectome Project will lead to major advances in our understanding of what makes us uniquely human and will set the stage for future studies of abnormal brain circuits in many neurological and psychiatric disorders" (http://www.humanconnectome.org/consortia/, accessed 12-28-2010).

View Map + Bookmark Entry

Imaging a Molecule One Million Times Smaller Than a Grain of Sand August 28, 2009

On August 28, 2009 IBM Research – Zurich scientists Leo Gross, Fabian Mohn, Nikolaj Moll and Gerhard Meyer, in collaboration with Peter Liljeroth of Utrecht University, published "The Chemical Structure of a Molecule Resolved by Atomic Force Microscopy," Science 325, no. 5944 (2009): 1110. doi:10.1126/science.1176210.

Using an atomic force microscope operated in an ultrahigh vacuum and at very low temperatures (–268°C or –451°F), the scientists imaged the chemical structure of individual pentacene molecules. For the first time ever, they were able to look through the electron cloud and see the atomic backbone of an individual molecule.

The abstract of the article is:

"Resolving individual atoms has always been the ultimate goal of surface microscopy. The scanning tunneling microscope images atomic-scale features on surfaces, but resolving single atoms within an adsorbed molecule remains a great challenge because the tunneling current is primarily sensitive to the local electron density of states close to the Fermi level. We demonstrate imaging of molecules with unprecedented atomic resolution by probing the short-range chemical forces with use of noncontact atomic force microscopy. The key step is functionalizing the microscope’s tip apex with suitable, atomically well-defined terminations, such as CO molecules. Our experimental findings are corroborated by ab initio density functional theory calculations. Comparison with theory shows that Pauli repulsion is the source of the atomic resolution, whereas van der Waals and electrostatic forces only add a diffuse attractive background."

♦ In December 2013 a video of the scientists discussing and explaining this discovery at IBM's Press Room was available at this link.

View Map + Bookmark Entry

David Hockney's iPhone Art October 22, 2009

On October 22, 2009 Lawrence Weschler, director of the New York Institute for the Humanities at New York University, published "David Hockney's iPhone Passion," New York Review of Books LXVI, no. 16, 35.

Hockney had a history of exploiting new technologies in his art:

"Hockney continued to explore other media besides painting, most notably photography. From 1982-86, he created some of his best-known and most iconographic work — his “joiners,” large composite landscapes and portraits made up of hundreds or thousands of individual photographs. Hockney initially used a Polaroid camera for the photos, switching to a 35 mm camera as the works grew larger and more complex. In interviews, Hockney related the “joiners” to cubism, pointing out that they incorporate elements that a traditional photograph does not possess — namely time, space, and narrative.

"Always willing to adopt new techniques, in 1986 Hockney began producing art with color photocopiers. He has also incorporated fax machines (faxing art to an exhibition in Brazil, for example) and computer-generated images (most notably Quantel Paintbox, a computer system often used to make graphics for television shows) into his work" (http://www.pbs.org/wnet/americanmasters/episodes/david-hockney/the-colors-of-music/103/, accessed 01-09-2010).

View Map + Bookmark Entry

Google Introduces Google Goggles December 8, 2009

On December 8, 2009 Google introduced Google Goggles, an image recognition and search technology for the Android mobile device operating system. If you photographed certain types of individual objects with your mobile phone, the program would recognize them and automatically display links to relevant information on the Internet. If you pointed your phone at a building, the program would locate it by GPS and identify it; clicking on the name of the building would then bring up relevant Internet links.

♦ On May 7, 2010 you could watch a video describing the features of Google Goggles at this link:

http://www.google.com/mobile/goggles/#text

View Map + Bookmark Entry

2010 – 2012

The Vatican Library Plans the Scanning of all its Manuscripts into the FITS Document Format March 24, 2010

"An initiative of the Vatican Library Digital manuscripts

"by Cesare Pasini  

"The digitization of 80,000 manuscripts of the Vatican Library, it should be realized, is not a light-hearted project. Even with only a rough calculation one can foresee the need to reproduce 40 million pages with a mountain of computer data, to the order of 45 petabytes (that is, 45 million billion bytes). This obviously means pages variously written and illustrated or annotated, to be photographed with the highest definition, to include the greatest amount of data and avoid having to repeat the immense undertaking in the future.  

"And these are delicate manuscripts, to be treated with care, without causing them damage of any kind. A great undertaking for the benefit of culture and in particular for the preservation and conservation of the patrimony entrusted to the Apostolic Library, in the tradition of a cultural service that the Holy See continues to express and develop through the centuries, adapting its commitment and energy to the possibilities offered by new technologies.  

"The technological project of digitization with its various aspects is now ready. In the past two years, a technical feasibility study has been prepared with the contribution of the best experts, internal, external and also international. This resulted in a project of a great and innovative value from various points of view: the realization of the photography, the electronic formats for conservation, the guaranteed stability of photographs over time, the maintenance and management of the archives, and so forth.  

"This project may be achieved over a span of 10 years divided into three phases, with possible intervals between them. In a preliminary phase the involvement of 60 people is planned, including photographers and conservator-verifiers, in the second and third phases at least 120. Before being able to initiate an undertaking of this kind, which is causing some anxiety to those in charge of the library (and not only to them!), naturally it will be necessary to find the funds. Moves have already been made in this direction with some positive results.  

"The second announcement is that some weeks ago the “test bed” was set up; in other words the “bench test” that will make it possible to try out and examine the whole structure of the important project that has been studied and formulated so as to guarantee that it will function properly when undertaken in its full breadth.  

"The work of reproduction uses two different machines, depending on the different types of material to be reproduced: one is a Metis Systems scanner, kindly lent to us free of charge by the manufacturers, and a 50 megapixel Hasselblad digital camera. Digitized images will be converted to the Flexible Image Transport System (FITS), a non-proprietary format, is extremely simple, was developed a few decades ago by NASA. It has been used for more than 40 years for the conservation of data concerning spatial missions and, in the past decade, in astrophysics and nuclear medicine. It permits the conservation of images with neither technical nor financial problems in the future, since it is systematically updated by the international scientific community.  

"In addition to the servers that collect the images in FITS format accumulated by the two machines mentioned, another two servers have been installed to process the data to make it possible to search for images both by the shelf mark and the manuscript's descriptive elements, and also and above all by a graphic pattern, that is, by looking for similar images (graphic or figurative) in the entire digital memory.  

"The latter instrument, truly innovative and certainly interesting for all who intend to undertake research on the Vatican's manuscripts – only think of when it will be possible to do such research on the entire patrimony of manuscripts in the Library! – was developed from the technology of the Autonomy Systems company, a leading English firm in the field of computer science, to which, moreover, we owe the entire funding of the “test bed”.  

"For this “bench test”, set up in these weeks, 23 manuscripts are being used for a total of 7,500 digitized and indexed pages, with a mountain of computer data of about 5 terabytes (about 5,000 billion bytes).

"The image of the mustard seed springs to mind: the “text bed” is not much more in comparison with the immensity of the overall project. But we know well that this seed contains an immense energy that will enable it to grow, to become far larger than the other plants and to give hospitality to the birds of the air. In accepting the promise guaranteed in the parable, let us also give hope of it to those who await the results of this project's realization" (http://www.vaticanlibrary.va/home.php?, pag=newsletter_art_00087&BC=11, accessed 03-24-2010).

View Map + Bookmark Entry

Google Acknowledges that it Collected Wi-Fi Information Along with Cartographic and Imaging Information April 27 – June 10, 2010

"Over the weekend, there was a lot of talk about exactly what information Google Street View cars collect as they drive our streets. While we have talked about the collection of WiFi data a number of times before--and there have been stories published in the press--we thought a refresher FAQ pulling everything together in one place would be useful. This blog also addresses concerns raised by data protection authorities in Germany.

"What information are your cars collecting? 

"We collect the following information--photos, local WiFi network data and 3-D building imagery. This information enables us to build new services, and improve existing ones. Many other companies have been collecting data just like this for as long as, if not longer, than Google.

"♦Photos: so that we can build Street View, our 360 degree street level maps. Photos like these are also being taken by TeleAtlas and NavTeq for Bing maps. In addition, we use this imagery to improve the quality of our maps, for example by using shop, street and traffic signs to refine our local business listings and travel directions;

"♦WiFi network information: which we use to improve location-based services like search and maps. Organizations like the German Fraunhofer Institute and Skyhook already collect this information globally;

"♦and 3-D building imagery: we collect 3D geometry data with low power lasers (similar to those used in retail scanners) which help us improve our maps. NavTeq also collects this information in partnership with Bing. As does TeleAtlas.

"What do you mean when you talk about WiFi network information?

"WiFi networks broadcast information that identifies the network and how that network operates. That includes SSID data (i.e. the network name) and MAC address (a unique number given to a device like a WiFi router).

"Networks also send information to other computers that are using the network, called payload data, but Google does not collect or store payload data.*  

"But doesn’t this information identify people? 

"MAC addresses are a simple hardware ID assigned by the manufacturer. And SSIDs are often just the name of the router manufacturer or ISP with numbers and letters added, though some people do also personalize them. However, we do not collect any information about householders, we cannot identify an individual from the location data Google collects via its Street View cars.  

"Is it, as the German DPA states, illegal to collect WiFi network information? 

"We do not believe it is illegal--this is all publicly broadcast information which is accessible to anyone with a WiFi-enabled device. Companies like Skyhook have been collecting this data cross Europe for longer than Google, as well as organizations like the German Fraunhofer Institute.  

"Why did you not tell the DPAs that you were collecting WiFi network information?

"Given it was unrelated to Street View, that it is accessible to any WiFi-enabled device and that other companies already collect it, we did not think it was necessary. However, it’s clear with hindsight that greater transparency would have been better.  

"Why is Google collecting this data?

"The data which we collect is used to improve Google’s location based services, as well as services provided by the Google Geo Location API. For example, users of Google Maps for Mobile can turn on “My Location” to identify their approximate location based on cell towers and WiFi access points which are visible to their device. Similarly, users of sites like Twitter can use location based services to add a geo location to give greater context to their messages.  

"Can this data be used by third parties? 

"Yes--but the only data which Google discloses to third parties through our Geo Location API is a triangulated geo code, which is an approximate location of the user’s device derived from all location data known about that point. At no point does Google publicly disclose MAC addresses from its database (in contrast with some other providers in Germany and elsewhere).

"Do you publish this information?

"No" (http://googlepolicyeurope.blogspot.com/2010/04/data-collected-by-google-cars.html, accessed 05-23-2012).

On June 9, 2010 Google announced in its Official Blog that it had "mistakenly included code" in its software that collected "samples of payload data" from unencrypted WiFi networks, but not from encrypted WiFi networks. It also announced that in response to requests from the Irish Data Protection Authority it was deleting payload data collected from Irish WiFi networks.

View Map + Bookmark Entry

Google Introduces a Translation Feature for Google Goggles May 6, 2010

On May 6, 2010 Google announced a translation feature for Google Goggles, its image recognition and search feature available on Android-based mobile devices.

"Here’s how it works:

"Point your phone at a word or phrase. Use the region of interest button to draw a box around specific words Press the shutter button

"If Goggles recognizes the text, it will give you the option to translate

"Press the translate button to select the source and destination languages."

"Today Goggles can read English, French, Italian, German and Spanish and can translate to many more languages. We are hard at work extending our recognition capabilities to other Latin-based languages. Our goal is to eventually read non-Latin languages (such as Chinese, Hindi and Arabic) as well."

View Map + Bookmark Entry

"The First Image of the Entire Universe" July 5, 2010

From roughly 1,000,000 miles into space, on July 5, 2010 the European Space Agency's Planck space observatory took the first photograph of the entire universe.

View Map + Bookmark Entry

NCBI Introduces Images, a Database of More than 2.5 Million Images in Biomedical Literature October 2010

In October 2010 the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) at the National Institutes of Health (NIH), introduced Images, an online database of more than 2.5 million images and figures from medical and life sciences journals. 

View Map + Bookmark Entry

Instagram is Founded October 2010 – December 17, 2012

In October 2010 Kevin Systrom and Mike Krieger launched Instagram, an online photo-sharing and social networking service that enabled users to take a picture, apply a digital filter to it, and share it on a variety of networking services, including its own. Instagram was purchased in April 2012 by Facebook for approximately $1 billion in cash and stock. After regulatory approval the deal closed in September 2012, by which time Instagram had over 100 million users.

"On December 17, 2012, Instagram updated its Terms of Service to allow Instagram the right to sell users' photos to third parties without notification or compensation after January 16, 2013. The criticism from privacy advocates, consumers and even National Geographic which suspended its Instagram account, prompted Instagram to issue a statement retracting the controversial terms. Instagram is currently working on developing new language to replace the disputed terms of use" (Wikipedia article on Instagram, accessed 12-22-2012).

View Map + Bookmark Entry

The First MRI Video of Childbirth November 2010 – June 2012

In November 2010 the first video of a woman giving birth in an open MRI machine was taken at the Charité Hospital in Berlin, Germany. The team led by Christian Bamberg, M.D. first published the results as "Human birth observed in real-time open magnetic resonance imaging" in the American Journal of Obstetrics & Gynecology in January 2012. Supplementary material, including the video of the final 45 minutes of labor, was published in Vol. 206, pp. 505.e1–505.e6, June 2012.

View Map + Bookmark Entry

Google Earth 6: Enhanced 3D, 3D Trees, Enhanced Historical Imagery November 30, 2010

Google Earth 6, introduced on November 30, 2010, enabled the user to "fly from outer space down to the streets with the new Street View and easily navigate. . . . Switch to ground-level view to see the same location in 3D."  

The program also introduced 3D trees in locations all over the world, and a more user-friendly interface for the historical imagery enabling comparison of recent and historical satellite imagery when available.

View Map + Bookmark Entry

The Google Earth Engine December 2, 2010

On December 2, 2010 Google introduced the Google Earth Engine, a cloud computing platform for processing satellite imagery and other Earth observation data. The engine provides access to a large warehouse of satellite imagery and the computational power needed to analyze those images. Initial applications of the platform included mapping the forests of Mexico, identifying water in the Congo basin, and detecting deforestation in the Amazon.

(http://blog.google.org/2010/12/introducing-google-earth-engine.html)

"Google Earth Engine brings together the world's satellite imagery—trillions of scientific measurements dating back more than 25 years—and makes it available online with tools for scientists, independent researchers, and nations to mine this massive warehouse of data to detect changes, map trends and quantify differences to the earth's surface" (http://earthengine.googlelabs.com/#intro).

"On February 11, [2013] NASA launched Landsat 8, the latest in a series of Earth observation satellites which started collecting information about the Earth in 1972. We're excited to announce that on May 30th, the USGS began releasing operational data from the Landsat 8 satellite, which are now available on Earth Engine. Explore the gallery below to see how we've used Landsat data to visualize thirty years of change across the entire planet. Congratulations to NASA and USGS for a successful launch!" (http://earthengine.google.org/#intro, accessed 10-20-2013). 

View Map + Bookmark Entry

Scanning Books in Libraries Instead of Making Photocopies 2011

In 2011 Ristech, whose motto was "Automation of Digitization," introduced the Book2net Spirit, which the company described as:

"the very first entry level high resolution book scanner. The Spirit is designed to replace photocopies in Public, Government and Corporate Libraries. By eliminating the need for paper, toner and maintenance – Libraries can reduce cost. The Spirit can easily be attached to a cost recovery system or coin-op to generate revenue.

"Key Features:

• Public Use Walk-up BookScanner

• High Resolution Images

• 1 second image capture

• Scan to USB or Email

• Embedded Touch Screen PC included"

View Map + Bookmark Entry

Probably the Largest Digital Image January 13, 2011

On January 13, 2011 the Sloan Digital Sky Survey-III (SDSS-III), a major multi-filter imaging and spectroscopic redshift survey using a dedicated 2.5-m wide-angle optical telescope at Apache Point Observatory, Sunspot, New Mexico, released the largest digital color image of the sky, assembled from millions of 2.8 megapixel images and consisting of more than a trillion pixels. This may be the largest digital image produced to date.

View Map + Bookmark Entry

The Google Art Project February 1, 2011

Bringing technology developed for Street View indoors, on February 1, 2011 Google introduced the Art Project. Simultaneously it introduced an Art Project channel on YouTube.

These projects allowed you to take virtual tours of major museums, view relevant background material about art, store high resolution images, share images and commentaries with friends.

Each of the 17 museums involved also chose one artwork to be photographed using gigapixel photo capturing technology, resulting in an image on the computer containing seven billion pixels and providing detail not visible to the naked eye.

View Map + Bookmark Entry

The Largest Interior Image: The Strahov Monastery Library March 29, 2011

On March 29, 2011 360cities.net posted a 40 gigapixel panorama of the baroque Philosophical Hall, containing 42,000 volumes, in the Strahov Monastery Library in Prague.

The spectacular image is particularly useful since tourists visiting the monastery may only glimpse this library room from one roped-off entrance. When the image was posted on YouTube and on 360cities.net it was the largest interior panoramic image taken to date, showing all aspects of the room in the smallest detail.

♦ An article published in Wired magazine on March 29, 2011 provided production details, multiple images, and a video showing how the panorama was created.

View Map + Bookmark Entry

Snapchat: Communication and Automatic Destruction of Information September 2011

In September 2011 Stanford University students Evan Spiegel and Robert Murphy produced the initial release of the photo messaging application Snapchat, famously launching the program "from Spiegel's father's living room." Users of the app take photos, record videos, add text and drawings, and send them to a controlled list of recipients. Photographs and videos sent through the app are known as "Snaps". Users set a time limit for how long recipients can view their Snaps, after which the photos or videos are hidden from the recipient's device and deleted from Snapchat's servers. In December 2013 the range was from 1 to 10 seconds. 

In November 2013 it was reported that Snapchat was sharing 400 million photos per day—more than Facebook.

"Founder Evan Spiegel explained that Snapchat is intended to counteract the trend of users being compelled to manage an idealized online identity of themselves, which he says has "taken all of the fun out of communicating". Snapchat can locate a user's friends through the user's smartphone contact list. Research conducted in the UK has shown that, as of June 2013, half of all 18 to 30-year-old respondents (47 percent) have received nude pictures, while 67 percent had received images of "inappropriate poses or gestures".

"Snapchat launched the "Snapchat Stories" feature in early October 2013 and released corresponding video advertisements with the tagline "It's about time." The feature allows users to create links of shared content that can be viewed an unlimited number of times over a 24-hour period. The "stories" are simultaneously shared with the user's friends and content remains for 24 hours before disappearing.

"Another controversy surrounding the rising popularity of Snapchat in the United States relates to a phenomenon known as sexting. This involves the sending and receiving of explicit images that often involve some degree of nudity. Because the application is commonly used by younger generations, often below the age of eighteen, the question has been raised whether or not certain users are technically distributing child pornography. For this reason, many adults disapprove of their children's use of the application. Snapchat's developers continue to insist that the application is not sexting-friendly and that they do not condone any kind of pornographic use.

"On November 14, 2013, police in LavalQuebec, Canada arrested 10 boys aged 13 to 15 on child pornography charges after the boys allegedly captured and shared explicit photos of teenage girls sent through Snapchat as screenshots.

"In February 2013, a study by market research firm Survata found that mobile phone users are more likely to "sext over SMS than over Snapchat" (Wikipedia article on Snapchat, accessed 12-12-2013).

View Map + Bookmark Entry

2012 – 2016

NYPL Labs Introduces the Stereogranimator January 2012

In January 2012 NYPL Labs, the digital library development division of the New York Public Library, introduced the Stereogranimator, a website and collaborative program to turn digital copies of analog stereographic photograph pairs into shareable 3D web formats.

"Stereographs, produced by the millions between the 1850s and the 1930s, were a wildly popular form of entertainment, giving viewers a taste of the kind of richly rounded images now readily available on screens of all sizes. No motion was involved, however. Instead, viewers looked through a stereoscope at two slightly different photographs of the same scene, which the brain was tricked into perceiving as a single three-dimensional image.

"The Stereogranimator . . . uses GIF animation to create the illusion of three-dimensionality by flickering back and forth between the two images. Users can adjust the speed, as well as the spatial jump between the images. The tool also generates an old-fashioned anaglyph, one of those blurry, two-toned images that snap into rounded focus when viewed through a stereoscope or vintage blue-red 3-D glasses. . . ." (http://artsbeat.blogs.nytimes.com/2012/01/26/3-d-it-yourself-thanks-to-new-library-site/, accessed 11-02-2013).

The Stereogranimator grew out of a project originated by writer / photographer Joshua Heineman, who in 2008 observed that

"The parallax effect of minor changes between the two perspectives created a sustained sense of dimension that approximated the effect of stereo viewing. When I realized how the effect was working, I set about discovering if I could capture the same illusion by layering both sides of an old stereograph in Photoshop & displaying the result as an animated gif. The effect was more jarring than through a stereoscope but no less magic" (http://stereo.nypl.org/about, accessed 11-02-2013).


Google Introduces the Knowledge Graph May 16, 2012

"The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, buildings, geographical features, movies, celestial objects, works of art and more—and instantly get information that’s relevant to your query. This is a critical first step towards building the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.

"Google’s Knowledge Graph isn’t just rooted in public sources such as Freebase, Wikipedia and the CIA World Factbook. It’s also augmented at a much larger scale—because we’re focused on comprehensive breadth and depth. It currently contains more than 500 million objects, as well as more than 3.5 billion facts about and relationships between these different objects. And it’s tuned based on what people search for, and what we find out on the web.

"The Knowledge Graph enhances Google Search in three main ways to start:  

"1. Find the right thing Language can be ambiguous—do you mean Taj Mahal the monument, or Taj Mahal the musician? Now Google understands the difference, and can narrow your search results just to the one you mean—just click on one of the links to see that particular slice of results:

"2. Get the best summary With the Knowledge Graph, Google can better understand your query, so we can summarize relevant content around that topic, including key facts you’re likely to need for that particular thing. For example, if you’re looking for Marie Curie, you’ll see when she was born and died, but you’ll also get details on her education and scientific discoveries:

"3. Go deeper and broader Finally, the part that’s the most fun of all—the Knowledge Graph can help you make some unexpected discoveries. You might learn a new fact or new connection that prompts a whole new line of inquiry. Do you know where Matt Groening, the creator of the Simpsons (one of my all-time favorite shows), got the idea for Homer, Marge and Lisa’s names? It’s a bit of a surprise:

"We’ve always believed that the perfect search engine should understand exactly what you mean and give you back exactly what you want. And we can now sometimes help answer your next question before you’ve asked it, because the facts we show are informed by what other people have searched for. For example, the information we show for Tom Cruise answers 37 percent of next queries that people ask about him. In fact, some of the most serendipitous discoveries I’ve made using the Knowledge Graph are through the magical “People also search for” feature. One of my favorite books is The White Tiger, the debut novel by Aravind Adiga, which won the prestigious Man Booker Prize. Using the Knowledge Graph, I discovered three other books that had won the same prize and one that won the Pulitzer. I can tell you, this suggestion was spot on!"


A 3D Virtual Reality Reader for eBooks October 2012

In October 2012 the Münchener DigitalisierungsZentrum of the Bayerische Staatsbibliothek, München (Munich Digitization Center of the Bavarian State Library in Munich) introduced the 3D-BSB Explorer, a gesture-controlled 3D Interactive Book Reader developed jointly by the center and the Fraunhofer Heinrich Hertz Institute.

"For the first time ever, magnificent over one thousand year old books are also on view in a digital 3D format at the "Magnificent Manuscripts – Treasures of Book Illumination" exhibition at the Kunsthalle of the Hypo Cultural Foundation in Munich. The Interactive 3D BookReader forms part of the exhibition which opens on Friday, 19 October 2012 at the Kunsthalle of the Hypo Cultural Foundation in Munich.  

"Allowing visitors to leaf through volumes illuminated in gold and encrusted with precious stones is something that most museums simply cannot permit. Secure in their glass cases, these exhibits seem remote and untouchable. Yet with the Interactive 3D BookReader, developed by the Fraunhofer Heinrich Hertz Institute in partnership with the Bavarian State Library, visitors can now not only view digitalized books in 3D without any need for special glasses, but browse through them, enlarge them and rotate them as well. The Interactive 3D BookReader opens up virtual access to these magnificent treasures of the art of illumination. Visitors don’t even need to touch the screen as an infrared camera captures the movements of one or more of their fingers while image processing software identifies their position in space in real-time. This is how they can move, browse, rotate and scale the exhibits shown on the screen. Even the slightest of finger movements can be translated into movements of the cursor. The monitor screen of the Interactive 3D BookReader shows the user's right and left eye two slightly offset images which combine to give an in-depth impression. The two stereo views are adapted to correspond to the viewer's actual position. This means that visitors don't need special 3D glasses to view the books in three dimensions" (http://www.hhi.fraunhofer.de/media/press/experience-magnificent-books-in-digital-3d.html, accessed 02-23-2013).

In February 2013 a video demonstration of the 3D-BSB Explorer was available on YouTube at this link: http://www.youtube.com/watch?v=LpSP2ojWtIs&feature=youtu.be


The First 3D Photo Booth Prints Personal Miniature Figures November 12, 2012 – August 9, 2013

On November 12, 2012 designboom.com reported on a limited-edition pop-up installation developed by the Japanese firm omote3D.com that produced detailed personal miniature action figures.

"ranging from 10 to 20 centimetres in height, the system utilizes a three-dimensional camera and printer to process and scan users, creating custom scale reproductions. The three-step procedure requires the user to keep still for 15 minutes while the scanners capture the data" (http://www.designboom.com/art/personal-action-figures-printed-at-a-japanese-photo-booth/, accessed 08-11-2013).

On August 9, 2013 designboom.com reported on an expansion of the concept developed and commercialized by Twinkind.com in Hamburg, Germany.

"ever imagined a true-to-life miniature version of yourself? well - now it's possible. these 3D printed portrait figurines by twinkind are made using state-of-the art 3D scanning and color printing technology. the miniatures are available to anyone who can make it to twinkind's studio in hamburg, with a 15cm tall figure costing €225 and a 35cm model coming in at €1290. several other size options are also available" (http://www.designboom.com/technology/3d-printed-portrait-figurines-by-twinkind/?utm_campaign=daily&utm_medium=e-mail&utm_source=subscribers, accessed 08-11-2013).


After Cell Phones With Cameras, Android Cameras Without Cellphones Are Introduced December 19, 2012

Once cell phone cameras, with their very limited lenses and image processors, became the most popular means of taking photographs, mainly because their images could immediately be emailed or posted to websites and social media, it was probably inevitable that camera companies would introduce more full-featured cameras incorporating computers that could connect to the Internet through Wi-Fi "hot spots" or cellular connections. The first models, offered at the end of 2012, were full-featured but overpriced, though the concept appeared to have great potential: 

"New models from Nikon and Samsung are obvious graduates of the 'if you can’t beat ’em, join ’em' school. The Nikon Coolpix S800C ($300) and Samsung’s Galaxy Camera ($500 from AT&T, $550 from Verizon) are fascinating hybrids. They merge elements of the cellphone and the camera into something entirely new and — if these flawed 1.0 versions are any indication — very promising.  

"From the back, you could mistake both of these cameras for Android phones. The big black multitouch screen is filled with app icons. Yes, app icons. These cameras can run Angry Birds, Flipboard, Instapaper, Pandora, Firefox, GPS navigation programs and so on. You download and run them exactly the same way. (That’s right, a GPS function. “What’s the address, honey? I’ll plug it into my camera.”) But the real reason you’d want an Android camera is wirelessness. Now you can take a real photo with a real camera — and post it or send it online instantly. You eliminate the whole 'get home and transfer it to the computer' step.  

"And as long as your camera can get online, why stop there? These cameras also do a fine job of handling Web surfing, e-mail, YouTube videos, Facebook feeds and other online tasks. Well, as fine a job as a phone could do, anyway.  

"You can even make Skype video calls, although you won’t be able to see your conversation partner; the lens has to be pointing toward you. Both cameras get online using Wi-Fi hot spots. The Samsung model can also get online over the cellular networks, just like a phone, so you can upload almost anywhere" (Pogue's Posts, NYTimes.com, 12-19-2012, accessed 12-21-2012).  


Making the iPhone 5 Look and Feel Like a Traditional Camera: the gizmon iCa case February 2013

After cell phone cameras became the most popular way of taking pictures, it was probably inevitable that a way would be found to make them look and act like traditional cameras:

"now available for the iPhone 5, the 'gizmon iCa' polycarbonate case transforms your smartphone into a working rangefinder camera. a working shutter button is built into the top of the case - making it easy to capture images without having to pre-load the camera interface app. incorporated with a viewfinder on top of the enclosure - the design helps eliminate glare in direct sunlight, as with an additional lens opening from the flash unit. the case also ships with a second interchangeable section that allows for the fitting of any of the accessory lenses" (http://www.designboom.com/technology/the-gizmon-ica-5-case-for-the-iphone-5/, accessed 02-07-2013).

Gizmon, a division of ADPLUS Co. Ltd, Kumamoto-city, Kumamoto, Japan, also produced a series of add-on lenses and filters for the iPhone that could be used without the iCA polycarbonate case.


Software Turns a Smartphone into a 3D Scanner December 5, 2013

On December 5, 2013 scientists led by Marc Pollefeys, head of the Computer Vision and Geometry Group in the Institute of Visual Computing at ETH Zurich, announced that they had developed an app that turned an ordinary Android smartphone into a 3D scanner. Pollefeys commented that two years earlier software of this type would have been expected to run only on large computers: "That this works on a smartphone would have been unthinkable."

Rather than taking a regular photograph, a user moves the phone and its camera around the object being scanned, and after a few motions, a three dimensional model appears on the screen. As the user keeps moving the phone and its camera, additional images are recorded automatically, extending the wireframe of the virtual object. Because all calculations are programmed into the software, the user gets immediate feedback and can select additional viewpoints to cover missing parts of the rendering. The system utilizes the inertial sensors of the phone, extracting the camera views in real-time based on kinetic motion capture. The resulting 360 degree model can be used for visualization or augmented reality applications, or rapid prototyping with CNC (Computer Numerical Control) machines and 3D printers.
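The core step of such a scanner, recovering camera motion and sparse 3D structure from two nearby views, can be sketched with OpenCV in Python. The sketch below is not ETH Zurich's app and omits the inertial-sensor assistance described above; the frame filenames, camera intrinsics, and feature-matching choices are assumptions for illustration.

```python
# A hedged two-view reconstruction sketch with OpenCV, illustrating the general idea
# behind incremental phone-based 3D scanning. All parameters are illustrative assumptions.
import cv2
import numpy as np

def reconstruct_two_views(path1: str, path2: str, K: np.ndarray) -> np.ndarray:
    """Estimate relative camera motion between two frames and triangulate a sparse cloud."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)

    # Detect and match ORB features between the two views.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover the relative pose from the essential matrix (RANSAC discards outliers).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier correspondences into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T                      # (N, 3) sparse point cloud

# Example with hypothetical frames and assumed pinhole intrinsics for a 1920x1080 camera:
# K = np.array([[1500.0, 0, 960.0], [0, 1500.0, 540.0], [0, 0, 1.0]])
# cloud = reconstruct_two_views("frame_000.jpg", "frame_001.jpg", K)
```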

Because the app worked even in low light conditions, such as in museums and churches, it was suggested that a visitor in a museum could scan a sculpture and consider it later at home or at work.

In December 2013 a YouTube video showing how the 3D scanning app worked, as well as examples of 3D-printed objects made from cell phone scans, was available at this link.


A Neural Network that Reads Millions of Street Numbers January 1, 2014

To read millions of street numbers on buildings photographed for Google Street View, Google built a neural network whose reading accuracy approached that of humans assigned to the task. The company uses the images to read house numbers, match them to their geolocation, and store the geolocation of each building in its database. Matching street numbers to physical locations on a map is always useful, but it is particularly so in places where street numbers are otherwise unavailable, or in places such as Japan and South Korea, where buildings are rarely numbered sequentially along a street but in other ways, such as the order in which they were constructed, a system that makes many buildings extremely hard to find, even for locals.

"Recognizing arbitrary multi-character text in unconstrained natural photographs is a hard problem. In this paper, we address an equally hard sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from Street View imagery. Traditional approaches to solve this problem typically separate out the localization, segmentation, and recognition steps. In this paper we propose a unified approach that integrates these three steps via the use of a deep convolutional neural network that operates directly on the image pixels. We employ the DistBelief implementation of deep neural networks in order to train large, distributed neural networks on high quality images. We find that the performance of this approach increases with the depth of the convolutional network, with the best performance occurring in the deepest architecture we trained, with eleven hidden layers. We evaluate this approach on the publicly available SVHN dataset and achieve over 96% accuracy in recognizing complete street numbers. We show that on a per-digit recognition task, we improve upon the state-of-the-art and achieve 97.84% accuracy. We also evaluate this approach on an even more challenging dataset generated from Street View imagery containing several tens of millions of street number annotations and achieve over 90% accuracy. Our evaluations further indicate that at specific operating thresholds, the performance of the proposed system is comparable to that of human operators. To date, our system has helped us extract close to 100 million physical street numbers from Street View imagery worldwide."

Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, Vinay Shet, "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks," arXiv:1312.6082v2.
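As a point of reference for the kind of model the abstract describes, here is a minimal convolutional network in PyTorch for classifying single 32x32 SVHN digit crops. It is a deliberate simplification: far shallower than the eleven-hidden-layer network reported in the paper, single-digit rather than multi-digit, and the architecture and sizes are assumptions chosen only to make the idea of operating directly on image pixels concrete.

```python
# A minimal convolutional-network sketch for single-digit SVHN classification.
# Not Google's DistBelief model; architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn

class SmallDigitNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two conv blocks followed by a small classifier head.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# SVHN crops are 32x32 RGB images; a dummy batch checks that the shapes line up.
model = SmallDigitNet()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```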


The First Project to Investigate the Use of Instagram During a Social Upheaval February 17 – February 22, 2014

On October 14, 2014 computer scientist and new media theorist Lev Manovich of The Graduate Center, City University of New York, informed the Humanist Discussion Group of the project by his Software Studies Initiative entitled The Exceptional & The Everyday: 144 Hours in Kiev. This was the first project analyzing the use of Instagram images during a social upheaval using computational and data visualization techniques. The project explored 13,203 Instagram images shared by 6,165 people in the central area of Kiev, Ukraine during the 2014 Ukrainian revolution from February 17 to February 22, 2014. Collaborators on the project included Mehrdad Yazdani of the University of California, San Diego, Alise Tifentale, a PhD student in art history at The Graduate Center, City University of New York, and Jay Chow, a web developer in San Diego. The project seems to have been first publicized on the web by Fast Company and The Guardian on October 8, 2014.

"CONTENTS:

Visualizations and Analysis: Visualizing the images and data and interpreting the patterns. 

Context and Methods: Brief summary of the events in Kiev during February 17-22, 2014; our research methods. 

Iconography of the Revolution: What are the popular visual themes in Instagram images of a revolution? (essay by Alise Tifentale).

The Infra-ordinary City: Representing the ordinary from literature to social media (essay by Lev Manovich). 

The Essay: "Hashtag #Euromaidan: What Counts as Political Speech on Instagram?" (guest essay by Elizabeth Losh).

Constructing the dataset: Constructing the dataset for the project; data privacy issues.

References: Bibliography of relevant articles and projects.

PUBLICATION:

Lev Manovich, Alise Tifentale, Mehrdad Yazdani, and Jay Chow. "The Exceptional and the Everyday: 144 Hours in Kiev." The 2nd Workshop on Big Humanities Data held in conjunction with IEEE Big Data 2014 Conference, forthcoming 2014.

ABOUT THE PROJECT

The Exceptional and the Everyday: 144 hours in Kiev continues previous work of our lab (Software Studies Initiative, softwarestudies.com) with visual social media: phototrails.net (analysis and visualization of 2.3 million Instagram photos in 14 global cities, 2013) and selfiecity.net (comparison between 3200 selfie photos shared in six cities, 2014; collaboration with Moritz Stefaner). In the new project we specifically focus on the content of images, as opposed to only their visual characteristics. We use computational analysis to locate typical Instagram compositions and manual analysis to identify the iconography of a revolution. We also explore non-visual data that accompanies the images: most frequent tags, the use of English, Ukrainian and Russian languages, dates and times when images were shared, and their geo-coordinates." 


Selfiecity.net. Analysis and Visualization of Thousands of Selfie Photos. . . . February 25, 2014

On February 25, 2014 I received this email from "new media" theorist Lev Manovich via the Humanist Discussion Group, announcing the launch of a cutting edge website analyzing the "Selfie" phenomenon: 

 "Date: Sat, 22 Feb 2014 21:00:30 +0000
        From: Lev Manovich <manovich@softwarestudies.com>
        Subject: Inntroducing selfiecity.net  - analysis and visualization of thousands of selfies photos from five global cities

"Welcome to Selfiecity!
http://selfiecity.net/

I'm excited to announce the launch of our new research project selfiecity.net. The website presents analysis and interactive visualizations of 3,200 Instagram selfie photos, taken between December 4 and 12, 2013, in Bangkok, Berlin, Moscow, New York, and São Paulo.

The project explores how people represent themselves using mobile photography in social media by analyzing the subjects’ demographics, poses, and expressions.

Selfiecity investigates selfies using a mix of theoretic, artistic and quantitative methods:

* Rich media visualizations in the Imageplots section assemble thousands of photos to reveal interesting patterns.
* An interactive component of the website, a custom-made app Selfiexploratory invites visitors to filter and explore the photos themselves.
* Theory and Reflection section of the website contribute to the discussion of the findings of the research. The authors of the essays are art historians Alise Tifentale (The City University of New York, The Graduate Center) and Nadav Hochman (University of Pittsburgh) as well as media theorist Elizabeth Losh (University of California, San Diego).

The project is led by Dr. Lev Manovich, leading expert on digital art and culture; Professor of Computer Science, The Graduate Center, CUNY; Director, Software Studies Initiative."

Considering the phenomenon that selfies had become, I was not surprised when two days later reference was made, also via the Humanist Discussion Group, to  "a very active Facebook group https://www.facebook.com/groups/664091916962292/ 'The Selfies Research Network'." When I looked at this page in February 2014 the group had 298 members, mostly from academia, but also including professionals in fields like social media, from many different countries.


DeepFace, Facial Verification Software Developed at Facebook, Approaches Human Ability March 17, 2014

On March 17, 2014 MIT Technology Review published an article by Tom Simonite on Facebook's facial recognition software, DeepFace, which I quote:

"Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.

"That’s a significant advance over previous face-matching software, and it demonstrates the power of a new approach to artificial intelligence known as deep learning, which Facebook and its competitors have bet heavily on in the past year (see 'Deep Learning'). This area of AI involves software that uses networks of simulated neurons to learn to recognize patterns in large amounts of data.

"'You normally don’t see that sort of improvement,' says Yaniv Taigman, a member of Facebook’s AI team, a research group created last year to explore how deep learning might help the company (see 'Facebook Launches Advanced AI Effort'). 'We closely approach human performance,' says Taigman of the new software. He notes that the error rate has been reduced by more than a quarter relative to earlier software that can take on the same task.

"Facebook’s new software, known as DeepFace, performs what researchers call facial verification (it recognizes that two images show the same face), not facial recognition (putting a name to a face). But some of the underlying techniques could be applied to that problem, says Taigman, and might therefore improve Facebook’s accuracy at suggesting whom users should tag in a newly uploaded photo.

"However, DeepFace remains purely a research project for now. Facebook released a research paper on the project last week, and the researchers will present the work at the IEEE Conference on Computer Vision and Pattern Recognition in June. 'We are publishing our results to get feedback from the research community,' says Taigman, who developed DeepFace along with Facebook colleagues Ming Yang and Marc’Aurelio Ranzato and Tel Aviv University professor Lior Wolf.

"DeepFace processes images of faces in two steps. First it corrects the angle of a face so that the person in the picture faces forward, using a 3-D model of an 'average' forward-looking face. Then the deep learning comes in as a simulated neural network works out a numerical description of the reoriented face. If DeepFace comes up with similar enough descriptions from two different images, it decides they must show the same face.

"The performance of the final software was tested against a standard data set that researchers use to benchmark face-processing software, which has also been used to measure how humans fare at matching faces.

"Neeraj Kumar, a researcher at the University of Washington who has worked on face verification and recognition, says that Facebook’s results show how finding enough data to feed into a large neural network can allow for significant improvements in machine-learning software. 'I’d bet that a lot of the gain here comes from what deep learning generally provides: being able to leverage huge amounts of outside data in a much higher-capacity learning model,' he says.

"The deep-learning part of DeepFace consists of nine layers of simple simulated neurons, with more than 120 million connections between them. To train that network, Facebook’s researchers tapped a tiny slice of data from their company’s hoard of user images—four million photos of faces belonging to almost 4,000 people. 'Since they have access to lots of data of this form, they can successfully train a high-capacity model,' says Kumar.


Indexing and Sharing 2.6 Million Images from eBooks in the Internet Archive August 29, 2014

On August 29, 2014 the Internet Archive announced that data mining and visualization expert Kalev Leetaru, Yahoo Fellow at Georgetown University, extracted over 14 million images from two million Internet Archive public domain eBooks spanning over 500 years of content. Of the 14 million images, 2.6 million were uploaded to Flickr, the image-sharing site owned by Yahoo, with a plan to upload more in the near future. 

Also on August 29, 2014 BBC.com carried a story entitled "Millions of historic images posted to Flickr," by Leo Kelion, Technology desk editor, from which I quote:

"Mr Leetaru said digitisation projects had so far focused on words and ignored pictures.

" 'For all these years all the libraries have been digitising their books, but they have been putting them up as PDFs or text searchable works,' he told the BBC.

"They have been focusing on the books as a collection of words. This inverts that. . . .

"To achieve his goal, Mr Leetaru wrote his own software to work around the way the books had originally been digitised.

"The Internet Archive had used an optical character recognition (OCR) program to analyse each of its 600 million scanned pages in order to convert the image of each word into searchable text.

"As part of the process, the software recognised which parts of a page were pictures in order to discard them.

"Mr Leetaru's code used this information to go back to the original scans, extract the regions the OCR program had ignored, and then save each one as a separate file in the Jpeg picture format.

"The software also copied the caption for each image and the text from the paragraphs immediately preceding and following it in the book.

"Each Jpeg and its associated text was then posted to a new Flickr page, allowing the public to hunt through the vast catalogue using the site's search tool. . . ."


Google Develops A Neural Image Caption Generator to Translate Images into Words November 17, 2014

Google had previously transformed the machine translation process by developing algorithms based on vector space mathematics. In November 2014 Oriol Vinyals and colleagues at Google in Mountain View developed a neural image caption generator to translate images into words. Google's machine translation approach is:

"essentially to count how often words appear next to, or close to, other words and then define them in an abstract vector space in relation to each other. This allows every word to be represented by a vector in this space and sentences to be represented by combinations of vectors.

"Google goes on to make an important assumption. This is that specific words have the same relationship to each other regardless of the language. For example, the vector “king - man + woman = queen” should hold true in all languages. . . .

"Now Oriol Vinyals and pals at Google are using a similar approach to translate images into words. Their technique is to use a neural network to study a dataset of 100,000 images and their captions and so learn how to classify the content of images.

"But instead of producing a set of words that describe the image, their algorithm produces a vector that represents the relationship between the words. This vector can then be plugged into Google’s existing translation algorithm to produce a caption in English, or indeed in any other language. In effect, Google’s machine learning approach has learnt to “translate” images into words.

"To test the efficacy of this approach, they used human evaluators recruited from Amazon’s Mechanical Turk to rate captions generated automatically in this way along with those generated by other automated approaches and by humans.

"The results show that the new system, which Google calls Neural Image Caption, fares well. Using a well known dataset of images called PASCAL, Neural image Capture clearly outperformed other automated approaches. “NIC yielded a BLEU score of 59, to be compared to the current state-of-the-art of 25, while human performance reaches 69,” says Vinyals and co" (http://www.technologyreview.com/view/532886/how-google-translates-pictures-into-words-using-vector-space-mathematics/, accessed 01-14-2015).

Vinyals et al., "Show and Tell: A Neural Image Caption Generator" (2014), http://arxiv.org/pdf/1411.4555v1.pdf

"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In thispaper we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used
to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify
both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU score improvements on Flickr30k, from 55 to 66, and on SBU, from 19 to 27" (Abstract).
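The encoder-decoder idea in the abstract, a vision network producing an image vector that conditions a recurrent language model, can be sketched compactly in PyTorch. The toy model below is not Google's NIC; the layer sizes, vocabulary, and dummy inputs are assumptions chosen only to make the structure concrete.

```python
# A compact encoder-decoder sketch: a CNN image encoder feeding a recurrent caption decoder.
# Not Google's NIC model; sizes and details are assumptions for illustration.
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size: int = 1000, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        # Encoder: a small CNN that maps an image to a single feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Decoder: an LSTM that emits a word distribution at each step,
        # conditioned on the image vector used as its first input.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        img_vec = self.encoder(images).unsqueeze(1)       # (B, 1, embed_dim)
        words = self.word_embed(captions)                 # (B, T, embed_dim)
        inputs = torch.cat([img_vec, words], dim=1)       # image vector starts the sequence
        hidden, _ = self.lstm(inputs)
        return self.to_vocab(hidden)                      # logits over the vocabulary

# Shape check with dummy data: a batch of 2 images and 5-token caption prefixes.
model = TinyCaptioner()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 6, 1000])
```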


A Machine Vision Algorithm Learns to Attribute Paintings to Specific Artists May 2015

In May 2015 Babak Saleh and Ahmed Elgammal of the Department of Computer Science, Rutgers University, described an algorithm that could recognize the style, genre, and artist of a painting.

"Saleh and Elgammal begin with a database of images of more than 80,000 paintings by more than a 1,000 artists spanning 15 centuries. These paintings cover 27 different styles, each with more than 1,500 examples. The researchers also classify the works by genre, such as interior, cityscape, landscape, and so on.

"They then take a subset of the images and use them to train various kinds of state-of-the-art machine-learning algorithms to pick out certain features. These include general, low-level features such as the overall color, as well as more advanced features that describe the objects in the image, such as a horse and a cross. The end result is a vector-like description of each painting that contains 400 different dimensions.

"The researchers then test the algorithm on a set of paintings it has not yet seen. And the results are impressive. Their new approach can accurately identify the artist in over 60 percent of the paintings it sees and identify the style in 45 percent of them.

"But crucially, the machine-learning approach provides an insight into the nature of fine art that is otherwise hard even for humans to develop. This comes from analyzing the paintings that the algorithm finds difficult to classify.

"For example, Saleh and Elgammal say their new approach finds it hard to distinguish between works painted by Camille Pissarro and Claude Monet. But a little research on these artists quickly reveals both were active in France in the late 19th and early 20th centuries and that both attended the Académie Suisse in Paris. An expert might also know that Pissarro and Monet were good friends and shared many experiences that informed their art. So the fact that their work is similar is no surprise.

"As another example, the new approach confuses works by Claude Monet and the American impressionist Childe Hassam, who, it turns out, was strongly influenced by the French impressionists and Monet in particular.  These are links that might take a human some time to discover" (MIT Technology Review May 11, 2015).

Saleh, Babak and Elgammal, Ahmed, "Large-scale Classification of Fine-Art Paintings: Learning the Right Metric on the Right Feature" (http://arxiv.org/pdf/1505.00855v1.pdf, 5 May 2015).
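The overall pipeline described above, a fixed-length feature vector per painting feeding a trained classifier, can be sketched with scikit-learn. The color histogram below is a simple stand-in for the paper's much richer 400-dimensional descriptors, and the file paths, labels, and parameters are hypothetical.

```python
# A hedged sketch: represent each painting as a feature vector and train a classifier to
# predict its artist. Illustration only; not the authors' actual features or code.
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def color_histogram(path: str, bins: int = 8) -> np.ndarray:
    """A low-level feature: a joint RGB histogram, normalized to sum to 1."""
    rgb = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(rgb, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return (hist / hist.sum()).ravel()

def train_artist_classifier(dataset: list) -> LinearSVC:
    """dataset: list of (image_path, artist_name) pairs covering many paintings and artists."""
    X = np.stack([color_histogram(path) for path, _ in dataset])
    y = [artist for _, artist in dataset]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LinearSVC().fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf

# Example with hypothetical labeled scans (a real run needs many images per artist):
# clf = train_artist_classifier([("monet_01.jpg", "Monet"), ("pissarro_01.jpg", "Pissarro"), ...])
```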
