
Computing in Medicine & Biology Timeline


2,800,000 BCE – 8,000 BCE

The Oldest Almost Complete Mitochondrial Genome Sequence of a Hominin Circa 400,000 BCE

The "Homo Heidelbergensis Cranium 5" from Sima de los Huesos in Spain.

The exterior of the Denisova Cave.

Molar found in Denisova Cave in the Altai Mountains of southern Siberia.

On December 4, 2013 Matthias Meyer, Eudald Carbonell and Svante Pääbo and colleagues reported that the almost complete mitochondrial genome sequence of a hominin from Sima de los Huesos in Spain, dating back roughly 400,000 years, shows that it is closely related to the lineage leading to the mitochondrial genomes of Denisovans, an eastern Eurasian sister group to Neanderthals.

"The fossil, a thigh bone found in Spain, had previously seemed to many experts to belong to a forerunner of Neanderthals. But its DNA tells a very different story. It most closely resembles DNA from an enigmatic lineage of humans known as Denisovans. Until now, Denisovans were known only from DNA retrieved from 80,000-year-old remains in Siberia, 4,000 miles east of where the new DNA was found.

"The mismatch between the anatomical and genetic evidence surprised the scientists, who are now rethinking human evolution over the past few hundred thousand years. It is possible, for example, that there are many extinct human populations that scientists have yet to discover. They might have interbred, swapping DNA. Scientists hope that further studies of extremely ancient human DNA will clarify the mystery" (http://www.nytimes.com/2013/12/05/science/at-400000-years-oldest-human-dna-yet-found-raises-new-mysteries.html?hp&_r=0, accessed 12-04-2013).

Meyer et al., "A mitochondrial genome sequence of a hominin from Sima de los Huesos," Nature (2013) doi:10.1038/nature12788.


The First Complete Neanderthal Genome Sequence Circa 128,000 BCE

Svante Pääbo.

A map of the Altai Mountain range.

On December 18, 2013 Svante Pääbo and colleagues from the Department of Evolutionary Genetics, Max Planck Institute for Evolutionary Anthropology in Leipzig, together with scientists from research centers in America, China, Russia and other countries, announced that they had sequenced the complete genome of a 130,000-year-old Neanderthal woman from a single toe found in a Siberian cave in the Altai Mountains. There the DNA evidence was unusually well preserved because of the very low average temperature. Comparison of this complete Neanderthal genome with those of 25 modern humans enabled the authors to compile a list of mutations that evolved in modern humans after their ancestors branched off from Neanderthals some 600,000 years ago. "The list of modern human things is quite short," said John Hawks, a paleoanthropologist at the University of Wisconsin who was not involved in the study. The paper, published in the journal Nature, was entitled "The complete genome sequence of a Neanderthal from the Altai Mountains," doi:10.1038/nature12886.

The abstract read as follows:

"We present a high-quality genome sequence of a Neanderthal woman from Siberia. We show that her parents were related at the level of half-siblings and that mating among close relatives was common among her recent ancestors. We also sequenced the genome of a Neanderthal from the Caucasus to low coverage. An analysis of the relationships and population history of available archaic genomes and 25 present-day human genomes shows that several gene flow events occurred among Neanderthals, Denisovans and early modern humans, possibly including gene flow into Denisovans from an unknown archaic group. Thus, interbreeding, albeit of low magnitude, occurred among many hominin groups in the Late Pleistocene. In addition, the high-quality Neanderthal genome allows us to establish a definitive list of substitutions that became fixed in modern humans after their separation from the ancestors of Neanderthals and Denisovans."

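The "definitive list of substitutions that became fixed in modern humans" is, at bottom, a comparison of aligned sequences: flag every site where the modern sample is invariant for one base and the archaic sequence carries another. A minimal Python sketch of that logic (the sequences below are invented for illustration, not real data):

    # Toy alignment: one archaic sequence against several modern ones.
    archaic = "ACGTACGTAC"
    moderns = ["ACGTTCGTAC",
               "ACGTTCGTAC",
               "ACCTTCGTAC"]

    fixed_differences = []
    for i, archaic_base in enumerate(archaic):
        modern_bases = {seq[i] for seq in moderns}
        # Count a site only if it is invariant in the modern sample
        # and that invariant base differs from the archaic one.
        if len(modern_bases) == 1 and archaic_base not in modern_bases:
            fixed_differences.append((i, archaic_base, modern_bases.pop()))

    print(fixed_differences)   # [(4, 'A', 'T')]

The published analysis works genome-wide and must also cope with sequencing error, alignment gaps, and within-human variation, which is why it required 25 present-day genomes and a high-quality archaic genome.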

Scientists Sequence Woolly Mammoth Genome--the First of an Extinct Animal Circa 100,000 BCE

The largest European specimen of a Woolly Mammoth.

A Steppe Mammoth skull in Siberia.

A male Asian Elephant in India.

A chart from the Mammoth Genome Project depicting gene-encoding bases on chromosomes of both a human and a mammoth. 

On November 19, 2008 scientists from the Mammoth Genome Project at Pennsylvania State University, University Park, reported the genome-wide sequence of the woolly mammoth, an extinct species of elephant that was adapted to living in the cold environment of the northern hemisphere. The woolly mammoth, Mammuthus primigenius, was a species of mammoth, the common name for the extinct elephant genus Mammuthus. One of the last in a line of mammoth species, it diverged from the steppe mammoth, M. trogontherii, about 200,000 years ago in eastern Asia. Its closest extant relative is the Asian elephant.

The genome sequence of the woolly mammoth was the first sequence of the genome of an extinct animal, and it opened up the possibility of reconstructing species from the last Ice Age.

"They sequenced four billion DNA bases using next-generation DNA-sequencing instruments and a novel approach that reads ancient DNA highly efficiently."

"Previous studies on extinct organisms have generated only small amounts of data," said Stephan C. Schuster, Penn State professor of biochemistry and molecular biology and the project's other leader. "Our dataset is 100 times more extensive than any other published dataset for an extinct species, demonstrating that ancient DNA studies can be brought up to the same level as modern genome projects" (quoted from Genetic Engineering and Biotechnology News, accessed 11-21-2008).

" 'By deciphering this genome we could, in theory, generate data that one day may help other researchers to bring the woolly mammoth back to life by inserting the uniquely mammoth DNA sequences into the genome of the modern-day elephant,' Stephan Schuster of Pennsylvania State University, who helped lead the research, said in a statement." (quoted from Reuters 11-19-2008, accessed 11-21-2008).

"The appearance and behaviour of this species are among the best studied of any prehistoric animal due to the discovery of frozen carcasses in Siberia and Alaska, as well as skeletons, teeth, stomach contents, dung, and depiction from life in prehistoric cave paintings. Mammoth remains had long been known in Asia before they became known to Europeans in the 17th century. The origin of these remains was long a matter of debate, and often explained as being remains of legendary creatures. The animal was only identified as an extinct species of elephant by Georges Cuvier in 1796." (Wikipedia article on Woolly Mammoth, accessed 10-31-2013).


Computational Micro-Biomechanical Analysis of Neanderthal's Fossilized Hyoid Bone Suggests that Neanderthals Could Speak Circa 60,000 BCE

A computational micro-biomechanical analysis of a Neanderthal hyoid bone found in Kebara Cave, Israel, suggests that Neanderthals could speak. This had been suspected since the discovery in 1989 of a Neanderthal hyoid that looked like that of modern humans. A study published in December 2013 suggested that the bone not only resembled the human hyoid but was also used in a similar way.

"Stephen Wroe, from the University of New England, Armidale, NSW, Australia, said: 'We would argue that this is a very significant step forward. It shows that the Kebara 2 hyoid doesn't just look like those of modern humans - it was used in a very similar way.'

"He told BBC News that it not only changed our understanding of Neanderthals, but also of ourselves.

"'Many would argue that our capacity for speech and language is among the most fundamental of characteristics that make us human. If Neanderthals also had language then they were truly human, too.'"

Ruggero D'Anastasio, Stephen Wroe et al., "Micro-Biomechanics of the Kebara 2 Hyoid and Its Implications for Speech in Neanderthals," PLOS ONE, December 18, 2013, DOI: 10.1371/journal.pone.0082261. The Abstract of the article:

"The description of a Neanderthal hyoid from Kebara Cave (Israel) in 1989 fuelled scientific debate on the evolution of speech and complex language. Gross anatomy of the Kebara 2 hyoid differs little from that of modern humans. However, whether Homo neanderthalensis could use speech or complex language remains controversial. Similarity in overall shape does not necessarily demonstrate that the Kebara 2 hyoid was used in the same way as that of Homo sapiens. The mechanical performance of whole bones is partly controlled by internal trabecular geometries, regulated by bone-remodelling in response to the forces applied. Here we show that the Neanderthal and modern human hyoids also present very similar internal architectures and micro-biomechanical behaviours. Our study incorporates detailed analysis of histology, meticulous reconstruction of musculature, and computational biomechanical analysis with models incorporating internal micro-geometry. Because internal architecture reflects the loadings to which a bone is routinely subjected, our findings are consistent with a capacity for speech in the Neanderthals." 


The Denisova Hominin, a Third Kind of Human Circa 39,000 BCE

Molar found in Denisova Cave in the Altai Mountains of southern Siberia.

The Family Tree - Neanderthals and Denisovans were closely related. DNA comparisons suggest that our ancestors diverged from theirs some 500,000 years ago.


A Tale of Three Humans

A third kind of human, called Denisovans, seems to have coexisted in Asia with Neanderthals and early modern humans. The latter two are known from abundant fossils and artifacts. Denisovans are defined so far only by the DNA from one bone chip and two teeth—but it reveals a new twist to the human story.


On March 24, 2010 scientists announced the discovery of a finger bone fragment of an eight-year-old girl who lived about 41,000 years ago, found in the remote Denisova Cave in the Altai Mountains in Siberia, a cave which was also inhabited by Neanderthals and modern humans. The discovery of two teeth and a toe bone belonging to different members of the same population was later reported. These three objects are the only specimens from which the Denisova hominins are known. The average annual temperature of Denisova Cave remains at 0°C (32°F), a factor which contributed to the preservation of archaic DNA among the diverse prehistoric remains discovered, in addition to the Denisova hominin remains.

Using a new technique for sequencing ancient DNA from bone, in August 2012 scientists from the Max Planck Institute reconstructed the genome of the Denisova hominins and announced that they were a new species, that they interbred with our species, and that the DNA results suggest that they had dark hair, eyes, and skin.

"Analysis of the mtDNA of the finger bone showed it to be genetically distinct from the mtDNAs of Neanderthals and modern humans [Katsnelson 2010]. However, subsequent study of the genome from this specimen suggests this group shares a common origin with Neanderthals. They ranged from Siberia to Southeast Asia, and they lived among and interbred with the ancestors of some present-day modern humans, with up to 6% of the DNA of Melanesians and Australian Aborigines deriving from Denisovans.

"It was in 2008 when Russian archaeologists discovered the finger bone fragment, and nick-named it 'X Woman'. Artifacts, including a bracelet, excavated in the cave at the same level were carbon dated to approximately 40,000 BP.

"A team of scientists led by Johannes Krause and Svante Paabo from the Max Planck Institute in Germany sequenced mtDNA from the fragment. The analysis indicated that modern humans, Neanderthals and the Denisova hominin last shared a common ancestor around 1 million years ago [Katsnelson 2004].

"The mtDNA analysis further suggested this new hominin species was the result of an early migration out of Africa, distinct from the later out-of-Africa migrations associated with Neanderthals and modern humans. Some argue it may be a relic of the earlier African exodus of Homo erectus, because of the tooth size, although this has not been proved. The conclusions of both the excavations and the sequencing are still debatable because the evidence shows that the Denisova Cave has been occupied by all three human forms" (http://www.bradshawfoundation.com/origins/denisova_hominin.php, accessed 07-07-2013).

For images and a very readable account of these discoveries see "The Case of the Missing Ancestor," nationalgeographic.com, July, 2013.



Neanderthal Genome Reveals Interbreeding with Humans Circa 36,000 BCE

Svante Pääbo

In May 2010 paleogeneticist Svante Pääbo and colleagues at the Max Planck Institute for Evolutionary Anthropology in Leipzig published a draft genome sequence of DNA obtained from Neanderthal bones recovered from Vindija Cave that were around 38,000 years old. Neanderthal fossils found in this cave near the city of Varaždin, Croatia, are among the best preserved in the world.

In their preliminary draft of the Neanderthal genome announced in February 2009 the scientists indicated that

"Previous mitochondrial analysis of Neanderthal DNA has uncovered no sign that Neanderthals and humans interbred sufficiently to leave a trace. A preliminary analysis across the new genome seems to confirm this conclusion, but more sequence data could overturn this conclusion" (http://www.newscientist.com/article/dn16587-first-draft-of-neanderthal-genome-is-unveiled.html#.UnKcfFCsim4, accessed 10-31-2013).

However, comparison in 2010 of the full Neanderthal sequence with that of modern humans suggested that there was some interbreeding between Homo neanderthalensis and Homo sapiens.

"Bone contains DNA that survives long after an animal dies. Over time, though, strands of DNA break up, and microbes with their own DNA invade the bone. Pääbo's team found ways around both problems with 38,000 and 44,000-year-old bones recovered in Croatia: they used a DNA sequencing machine that rapidly decodes short strands and came up with ways to get rid of the microbial contamination.

"They ended up with short stretches of DNA code that computers stitched into a more complete sequence. This process isn't perfect: Pääbo's team decoded about 5.3 billion letters of Neanderthal DNA, but much of this is duplicates, because – assuming it's the same size as the human genome – the actual Neanderthal genome is only about 3 billion letters long. More than a third of the genome remains unsequenced. . . .

"Any human whose ancestral group developed outside Africa has a little Neanderthal in them – between 1 and 4 per cent of their genome, Pääbo's team estimates. In other words, humans and Neanderthals had sex and had hybrid offspring. A small amount of that genetic mingling survives in "non-Africans" today: Neanderthals didn't live in Africa, which is why sub-Saharan African populations have no trace of Neanderthal DNA" (http://www.newscientist.com/article/dn18869-neanderthal-genome-reveals-interbreeding-with-humans.html#.UnKfSFCsim4, accessed 10-31-2013).


1800 – 1850

The First Full-Length Exposition in English of an Evolutionary Theory of Biology is Published Anonymously 1844

In 1844 the anonymous author of Vestiges of the Natural History of Creation provided the first full-length exposition in English of an evolutionary theory of biology; it was the most sensational book on its subject to appear prior to Darwin’s On the Origin of Species. By stating the case for evolution in a manner comprehensible to the general public, if not acceptable to the scientific community, the book absorbed the worst of the general public opposition to the concept, thus helping to prepare the way for the Origin. Vestiges was one of the greatest scientific best-sellers of the Victorian age, going through at least twelve large editions in England, numerous American editions, and several foreign-language translations. Remarkably, the identity of its author, the Scottish publisher, writer, and geologist Robert Chambers, was kept secret throughout his lifetime, and only divulged after Chambers's death in 1871. Secrecy of authorship undoubtedly contributed to the sensationalism surrounding the work.

Vestiges also played a significant role in transmitting some of Charles Babbage’s pioneering ideas on programming and coding mathematical operations. Babbage, in his Ninth Bridgewater Treatise (1837), had likened the Creator to a kind of master computer programmer (although this term did not exist in Babbage’s time), and the operations of the universe to a gigantic program whose myriad changes over time had been set up from the very beginning. Babbage’s ideas were alien to most of the Victorian public, since virtually no one in Babbage’s time was accustomed to thinking in terms of a programmed series of mathematical operations. However, Babbage’s ideas about natural laws resembling “programs” received a much wider audience through the Vestiges. The thirteenth chapter of Vestiges, entitled “Hypothesis of the development of the vegetable and animal kingdoms,” is devoted to the question of how the earth’s most complex organisms could have evolved from its simplest, given the observed fact that “like begets like.” On pages 206-211 of the 1844 edition, Chambers showed that evolutionary change occurring over long periods of time could be seen as similar to the workings of Babbage’s Difference Engine, programmed from the beginning of its operation to produce in sequence several different series of numbers according to a succession of mathematical rules. This is one of the very earliest references to computing within the context of biology.

"During the whole time which we call the historical era, the limits of species have been, to ordinary observation, rigidly adhered to. But the historical era is, as we know, only a small portion of the entire age of our globe. We do not know what may have happened during the ages which preceded its commencement, as we do not know what may happen in ages yet in the distant future. All, therefore, that we can properly infer from the apparently inevitable production of like by like is, that such is the ordinary procedure of nature in the time immediately passing before our eyes. Mr. Babbage’s illustration powerfully suggests that this ordinary procedure may be subordinate to a higher law which only permits it for a time, and in proper seasons interrupts and changes it" (Chambers 1844, 211).

Hook & Norman, Origins of Cyberspace (2002) no. 55.

J. Norman (ed) Morton's Medical Bibliography 5th ed (1991) no. 218.
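
Chambers's analogy turns on the method of finite differences: the engine generates a series purely by repeated addition of stored differences, and a change of differences arranged in advance switches the output to a new law, the mechanical image of Chambers's "higher law." A small Python sketch of the principle (a toy simulation, not a model of the actual hardware):

    # Simulate the difference-engine principle: a series is generated
    # purely by repeated addition of stored differences.
    def run_engine(value, diffs, steps):
        series = []
        for _ in range(steps):
            series.append(value)
            value += diffs[0]
            # cascade: each stored difference is bumped by the one after it
            for i in range(len(diffs) - 1):
                diffs[i] += diffs[i + 1]
        return series, value

    # First law: the squares (initial first difference 1, constant second 2).
    series, value = run_engine(0, [1, 2], 6)
    # Second law: from where the first left off, new differences take over,
    # here a constant step of 100.
    continuation, _ = run_engine(value, [100, 0], 4)

    print(series)        # [0, 1, 4, 9, 16, 25]
    print(continuation)  # [36, 136, 236, 336]

Here the switch is made by reloading the difference registers; in Babbage's demonstration the change was arranged within the mechanism from the start, which is exactly the point Chambers borrowed.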


1850 – 1875

William Farr Publishes the First Instances of a Printing Calculator Used to Do Original Work 1857 – 1864

In 1859 English statistician and epidemiologist William Farr published "On the Construction of Life-Tables, Illustrated by a New Life-Table of the Healthy Districts of England," Philosophical Transactions 149, pt. 2 (1859) 837-78. This was the first report describing the use of the Scheutz Engine no. 3 to prepare life tables, and it included a table calculated and typeset by the calculator. Farr, a pioneer in the quantitative study of morbidity and mortality, was chief statistician of the General Register Office, England's central statistical office. Influenced by Charles Babbage, he had long been interested in the use of a calculating machine such as Babbage's Difference Engine No. 1 to compute life tables. On page 854 of his paper Farr referred to his 1843 letter on this subject to the registrar-general. Farr had seen and tested the machine's predecessor, the Scheutz Engine no. 2, when it was on display in London. It was at Farr's recommendation that the British government authorized in 1857 the sum of £1200 for the Scheutz Engine no. 3 to be constructed by the firm of Bryan Donkin, a manufacturer of machinery, including machines for the color printing of bank notes and stamps. Costs overran and Donkin delivered the machine in July 1859, several weeks past the deadline, at a loss of £615 (Lindgren 1987, 224-25). Farr's preliminary report, received by the Royal Society on March 17 of 1859, was written while the Scheutz Engine no. 3 was still "in the course of construction by the Messrs. Donkin" (p. 854). The report's table B1, "Life-Table of Healthy English Districts," made from stereotype plates produced by the calculator, represents the very first application of a difference engine to medical statistics.

Prior to the production of the Scheutz Engine no. 3, in 1857 the Scheutz brothers had brought the Scheutz Engine no. 2 from Sweden to London, where it was used to produce Specimens of Tables, Calculated, Stereomoulded, and Printed by Machinery (London, 1857). These were the first mathematical tables calculated and typeset by a mechanical calculator.

The Scheutz Difference Engine no. 2 was purchased in 1857 by the Dudley Observatory in Albany, New York. The following year the observatory used the machine in the computation of tables for the planet Mars; however, these were experimental and probably never printed on paper (Lindgren 1987, 211). The Scheutzes, Farr, and the Dudley Observatory were the first to use the Scheutz calculator in a scientific context.

In 1864 Farr published English Life Table. Tables of Lifetimes, annuities, and premiums. . . . Published by authority of the Registrar-General of births, deaths and marriages in England. The colophon leaf of this book indicated that 500 copies were printed. Farr's English Life Table contained what was, for its time, a tremendous amount of data—6.5 million deaths sorted by age. Included in English Life Table no. 3 were the first lengthy working tables produced by the Scheutz printing calculator—the first instance of such a machine being used extensively to do original work. However, none of the hoped-for benefits of mechanizing the calculation of the tables were realized, since the Scheutz machine failed to include any of Babbage's security mechanisms to guard against mechanical error, and it required constant maintenance.

The machine did accomplish some of the typesetting which it stamped into stereotype plates; however, the process was so problematic that there was little cost savings from automation. Of the 600 pages of printed tables in the book, only 28 pages were composed entirely by the machine; a further 216 pages were partially composed by the machine, and the rest were typeset by hand. Nor was there the hoped-for savings from using the machine to prepare stereotype plates. Her Majesty's Stationery Office, printer of the volume, stated that having the machine set the entire book automatically would have saved only 10 percent over the cost of conventional typesetting (Swade, The Cogwheel Brain [2000] 203-8).

Pages cxxxix-cxliv contained Farr's appendix entitled "Scheutz's calculating machine and its use in the construction of the English life table no. 3," in which he emphasized the usefulness of the new machine, but also the delicacy and skill necessary for its operation:

The Machine required incessant attention. The differences had to be inserted at the proper terms of the various series, checking was required, and when the mechanism got out of order it had to be set right. Of the first watch nothing is known, but the first steam-engine was indisputably imperfect; and here we had to do with the second Calculating Machine as it came from the designs of its constructors and from the workshop of the engineer. The idea had been as beautifully embodied in metal by Mr. Bryan Donkin as it had been conceived by the genius of its inventors; but it was untried. So its work had to be watched with anxiety, and its arithmetical music had to be elicited by frequent tuning and skilful handling, in the quiet most congenial to such productions.

This volume is the result; and thus—if I may use the expression—the soul of the Machine is exhibited in a series of Tables which are submitted to the criticism of the consummate judges of this kind of work in England and in the world (p. cxl)

Farr also noted Babbage's contribution to the venture—it was Babbage who "explained the principles [of the Scheutz calculator] and first demonstrated the practicability of performing certain calculations, and printing the results by machinery" (p. xiii).

Having invested so much time and money in the project while realizing only token gains, the British government showed little patience with the Scheutz calculating machine. The General Register Office soon reverted to manual calculations by human computers employing logarithms, which they used until the GRO's conversion to mechanical calculation methods in 1911.  

Hook & Norman, Origins of Cyberspace (2002) Nos. 77 & 85.

(This entry was last revised on 01-14-2015.)


Having Refused to Support Babbage, the British Government Pays for a Difference Engine Produced in Sweden April 7, 1859

Long after refusing to fund the completion of Babbage's Difference Engine No. 1, and long after refusing to fund construction of his Analytical Engine, the British government paid for the construction of the Scheutzes' third difference engine. In 1859 medical statistician William Farr first used the machine to calculate and set type for a table in his paper published in Philosophical Transactions, "On the Construction of Life-Tables, Illustrated by a New Life-Table of the Healthy Districts of England." Farr read this paper to the Royal Society on April 7, 1859.


1910 – 1920

The Basis for Computed Tomography 1917

In 1917 Austrian mathematician Johann Radon, professor at Technische Universität Wien, introduced the Radon transform. He also demonstrated that the image of a three-dimensional object can be reconstructed from an infinite set of its two-dimensional projections.

More than half a century later Radon's work was applied in the invention of computed tomography.
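
In modern notation (not Radon's own), the transform sends a plane density f to the set of its line integrals:

    (Rf)(\theta, s) = \int_{-\infty}^{\infty} f\bigl(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta\bigr)\, dt

Here (Rf)(\theta, s) is the total density along the line at signed distance s from the origin, perpendicular to the direction (\cos\theta, \sin\theta); in a CT scanner this is the attenuation measured along one X-ray path, and image reconstruction amounts to inverting the map from f to Rf.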


1940 – 1950

The First Application of Electric Punched Card Tabulating Equipment in Crystal Structure Analysis 1941 – 1946

At the suggestion of Wallace J. Eckert of Columbia University, physical chemist Linus Pauling and associates at Caltech used IBM electric punched card tabulating equipment to speed up the Fourier calculations in crystal structure analysis in their researches. The first paper resulting from these applications was Edward W. Hughes, "The Crystal Structure of Melamine," J. Amer. Chem. Soc. 63 (1941) 1737-52.

Prior to this Leslie J. Comrie had attempted to introduce IBM Hollerith electric punched card tabulating to speed up Fourier calculations in crystal structure analysis in England, but the method did not gain acceptance.

Applications of IBM equipment in crystallographic research continued at Caltech but the method was not published until 1946: Shaffer, Philip A., Jr.; Schomaker, Verner; and Pauling, Linus, "The use of punched cards in molecular structure determinations. I. Crystal structure calculations [II. Electron diffraction calculations]," Journal of Chemical Physics 14 (1946) 648-658, 659-664. The offprint version of the first paper contained a 10-page supplement with 5 full-page diagrams.

"Shaffer, Schomaker, and Pauling developed methods of carrying out Fourier calculations on IBM punched-card machines, using a Type 11 electric keypunch, a Type 80 electric sorting machine, and a Type 405 alphabetic direct-subtraction tabulating machine. This paper cites work as early as 1941 performed on the structure of various less-complex organic crystals using electric tabulation methods.

"The supplement to Part I of this paper, which was included only in the offprint version, provided additional information on card design, plugboard wiring and operating procedures. 'The time factor is in all cases greatly in favor of the punched-card method relative to summation procedures used in the past. Fourier projections which by the Beevers-Lipson method required several days of calculation can now be made in 5 to 7 hours. At the same time the density of calculated points is much greater and the accuracy of the computation is assured. The machine steps in the least-squares calculations require only a few hours, as compared to one or two days with use of an adding machine, and again the accuracy of the work is assured. With the use of parameter cards and the structure-factor files the calculation of structure factors can be accomplished in about one-eighth of the time previously required.' (p. 658). Most of the detail in the technique of data processing, including information on card design, plugboard wiring, and operating procedures appears in the supplement" (Hook & Norman, Origins of Cyberspace [2002] no. 879).

Cranswick, "Busting out of crystallography’s Sisyphean prison: from pencil and paper to structure solving at the press of a button: past, present and future of crystallographic software development, maintenance and distribution," Acta Crystallographica Section A Foundations of Crystallography A64 (2008) 65-87. (Accessed 04-20-2010).
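
The "Fourier projections" of the supplement are in essence long trigonometric sums with measured amplitudes and phases, which is what the tabulating machines were organized to accumulate. A deliberately simplified one-dimensional sketch in Python (the amplitudes and phases are invented, not data from the paper):

    import math

    # One-dimensional analogue of a crystallographic Fourier summation:
    # rho(x) is proportional to the sum over h of F_h * cos(2*pi*h*x - phi_h).
    structure_factors = {0: (10.0, 0.0),   # h: (amplitude, phase in radians)
                         1: (4.0, 0.5),
                         2: (2.5, 2.1),
                         3: (1.2, 3.0)}

    def density(x):
        return sum(F * math.cos(2 * math.pi * h * x - phi)
                   for h, (F, phi) in structure_factors.items())

    # Tabulate the density at regular grid points, as the punched-card
    # runs tabulated a Fourier projection line by line.
    for i in range(11):
        x = i / 10
        print(f"x = {x:0.1f}   rho = {density(x):8.3f}")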


McCulloch & Pitts Publish the First Mathematical Model of a Neural Network 1943

In 1943 American neurophysiologist and cybernetician Warren McCulloch of the University of Illinois at Chicago and self-taught logician and cognitive psychologist Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity," describing the McCulloch-Pitts neuron, the first mathematical model of a neural network.

Building on ideas in Alan Turing’s “On Computable Numbers”, McCulloch and Pitts's paper provided a way to describe brain functions in abstract terms, and showed that simple elements connected in a neural network can have immense computational power. The paper received little attention until its ideas were applied by John von Neumann, Norbert Wiener, and others.
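
A McCulloch-Pitts unit is a threshold gate: it fires exactly when the weighted sum of its binary inputs reaches a threshold. A minimal sketch in Python (a threshold-logic reading of the 1943 unit; the paper's absolute inhibition is simplified here to a negative weight):

    # A McCulloch-Pitts unit: binary inputs, fixed weights, hard threshold.
    def mp_neuron(inputs, weights, threshold):
        return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

    # Simple logical functions, in the spirit of the paper's logical calculus:
    AND = lambda x, y: mp_neuron([x, y], [1, 1], 2)
    OR  = lambda x, y: mp_neuron([x, y], [1, 1], 1)
    NOT = lambda x:    mp_neuron([x], [-1], 0)

    # XOR exceeds a single unit, but a small two-layer network computes it:
    XOR = lambda x, y: OR(AND(x, NOT(y)), AND(NOT(x), y))

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, "->", AND(x, y), OR(x, y), XOR(x, y))

A single unit realizes AND, OR, and NOT, and networks of such units realize any finite logical expression, which is the sense in which simple connected elements can have immense computational power.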


Norbert Wiener Issues "Cybernetics", the First Widely Distributed Book on Electronic Computing 1948

"Use the word 'cybernetics', Norbert, because nobody knows what it means. This will always put you at an advantage in arguments."

— Widely quoted: attributed to Claude Shannon in a letter to Norbert Wiener in the 1940s.

In 1948 mathematician Norbert Wiener at MIT published Cybernetics or Control and Communication in the Animal and the Machine, a widely circulated and influential book that applied theories of information and communication to both biological systems and machines. Computer-related words with the "cyber" prefix, including "cyberspace," originate from Wiener's book. Cybernetics was also the first conventionally published book to discuss electronic digital computing. Writing as a mathematician rather than an engineer, Wiener kept the discussion theoretical rather than specific. Strangely, the first edition of the book was published in English in Paris at the press of Hermann et Cie. The first American edition was printed offset from the French sheets and issued by John Wiley in New York, also in 1948. I have never seen an edition printed or published in England.

Independently of Claude Shannon, Wiener conceived of communications engineering as a branch of statistical physics and applied this viewpoint to the concept of information. Wiener's chapter on "Time series, information, and communication" contained the first publication of Wiener's formula describing the probability density of continuous information. This was remarkably close to Shannon's formula dealing with discrete time published in A Mathematical Theory of Communication (1948). Cybernetics also contained a chapter on "Computing machines and the nervous system." This was a theoretical discussion, influenced by McCulloch and Pitts, of differences and similarities between information processing in the electronic computer and the human brain. It contained a discussion of the difference between human memory and the different computer memories then available. Tacked on at the end of Cybernetics were speculations by Wiener about building a chess-playing computer, predating Shannon's first paper on the topic.
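
Stated in present-day notation (not Wiener's typography), the continuous formula, now called differential entropy, and Shannon's discrete counterpart are:

    H = -\int_{-\infty}^{\infty} p(x)\, \log p(x)\, dx        (continuous density p)

    H = -\sum_{i} p_i \log p_i                                (discrete probabilities p_i)

The resemblance noted above is visible at a glance: an integral against a probability density in one case, a sum over discrete probabilities in the other.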

Cybernetics is a peculiar, rambling blend of popular and highly technical writing, ranging from history to philosophy, to mathematics, to information and communication theory, to computer science, and to biology. Reflecting the amazingly wide range of the author's interests, it represented an interdisciplinary approach to information systems both in biology and machines. It influenced a generation of scientists working in a wide range of disciplines. In it were the roots of various elements of computer science, which by the mid-1950s had broken off from cybernetics to form their own specialties. Among these separate disciplines were information theory, computer learning, and artificial intelligence.

It is probable that Wiley had Hermann et Cie supervise the typesetting because they specialized in books on mathematics.  Hermann printed the first edition by letterpress; the American edition was printed offset from the French sheets. Perhaps because the typesetting was done in France Wiener did not have the opportunity to read proofs carefully, as the first edition contained many typographical errors which were repeated in the American edition, and which remained uncorrected through the various printings of the American edition until a second edition was finally published by John Wiley and MIT Press in 1961. 

Though the book contained a lot of technical mathematics, and was not written for a popular audience, the first American edition went through at least 5 printings during 1948,  and several later printings, most of which were probably not read in their entirety by purchasers. Sales of Wiener's book were helped by reviews in wide circulation journals such as the review in TIME Magazine on December 27, 1948, entitled "In Man's Image." The reviewer used the word calculator to describe the machines; at this time the word computer was reserved for humans.

"Some modern calculators 'remember' by means of electrical impulses circulating for long periods around closed circuits. One kind of human memory is believed to depend on a similar system: groups of neurons connected in rings. The memory impulses go round & round and are called upon when needed. Some calculators use 'scanning' as in television. So does the brain. In place of the beam of electrons which scans a television tube, many physiologists believe, the brain has 'alpha waves': electrical surges, ten per second, which question the circulating memories.

"By copying the human brain, says Professor Wiener, man is learning how to build better calculating machines. And the more he learns about calculators, the better he understands the brain. The cyberneticists are like explorers pushing into a new country and finding that nature, by constructing the human brain, pioneered there before them.

"Psychotic Calculators. If calculators are like human brains, do they ever go insane? Indeed they do, says Professor Wiener. Certain forms of insanity in the brain are believed to be caused by circulating memories which have got out of hand. Memory impulses (of worry or fear) go round & round, refusing to be suppressed. They invade other neuron circuits and eventually occupy so much nerve tissue that the brain, absorbed in its worry, can think of nothing else.

"The more complicated calculating machines, says Professor Wiener, do this too. An electrical impulse, instead of going to its proper destination and quieting down dutifully, starts circulating lawlessly. It invades distant parts of the mechanism and sets the whole mass of electronic neurons moving in wild oscillations" (http://www.time.com/time/magazine/article/0,9171,886484-2,00.html, accessed 03-05-2009).

Presumably the commercial success of Cybernetics encouraged Wiley to publish Berkeley's Giant Brains, or Machines that Think in 1949.

♦ In October 2012 I offered for sale the copy of the first American printing of Cybernetics that Wiener inscribed to Jerry Wiesner, the head of the laboratory at MIT where Wiener conducted his research. This was the first inscribed copy of the first edition (either the French or American first) that I had ever seen on the market, though the occasional signed copy of the American edition did turn up. Having read our catalogue description of that item, my colleague Arthur Freeman emailed me this story pertinent to Wiener's habit of not inscribing books:

"Norbert, whom I grew up nearby (he visited our converted barn in Belmont, Mass., constantly to play frantic theoretical blackboard math with my father, an economist/statistician at MIT, which my mother, herself a bit better at pure math, would have to explain to him later), was a notorious cheapskate. His wife once persuaded him to invite some colleagues out for a beer at the Oxford Grill in Harvard Square, which he did, and after a fifteen-minute sipping session, he got up to go, and solemnly collected one dime each from each of his guests. So when *Cybernetics* appeared on the shelves of the Harvard Coop Bookstore, my father was surprised and flattered that Norbert wanted him to have an inscribed copy, and together they went to Coop, where Norbert duly picked one out, wrote in it, and carried it to the check-out counter--where he ceremoniously handed it over to my father to pay for. This was a great topic of family folklore. I wonder if Jerry Wiesner paid for his copy too?"


Comparing the Functions of Genes to Self-Reproducing Automata September 20, 1948

At the Hixon Symposium in Pasadena, California on September 20, 1948 John von Neumann spoke on The General and Logical Theory of Automata. Within this speech von Neumann compared the functions of genes to self-reproducing automata. This was the first of a series of five works (some posthumous) in which von Neumann attempted to develop a precise mathematical theory allowing comparison of computers and the human brain.

“For instance, it is quite clear that the instruction I is roughly effecting the functions of a gene. It is also clear that the copying mechanism B performs the fundamental act of reproduction, the duplication of the genetic material, which is clearly the fundamental operation in the multiplication of living cells. It is also easy to see how arbitrary alterations of the system E, and in particular of I, can exhibit certain typical traits which appear in connection with mutation, which is lethality as a rule, but with a possibility of continuing reproduction with a modification of traits.” (pp. 30-31).

Molecular biologist Sydney Brenner read this brief discussion of the gene within the context of information in the proceedings of the Hixon Symposium, published in 1951. Later he wrote about it in his autobiography:

“The brilliant part of this paper in the Hixon Symposium is his description of what it takes to make a self-reproducing machine. Von Neumann shows that you have to have a mechanism not only of copying the machine, but of copying the information that specifies the machine. So he divided the machine--the automaton as he called it--into three components; the functional part of the automaton, a decoding section which actually takes a tape, reads the instructions and builds the automaton; and a device that takes a copy of this tape and inserts it into the new automaton. . . . I think that because of the cultural differences between most biologists on the one hand, and physicists and mathematicians on the other, it had absolutely no impact at all. Of course I wasn’t smart enough to really see then that this is what DNA and the genetic code was all about. And it is one of the ironies of this entire field that were you to write a history of ideas in the whole of DNA, simply from the documented information as it exists in the literature--that is, a kind of Hegelian history of ideas--you would certainly say that Watson and Crick depended upon von Neumann, because von Neumann essentially tells you how it’s done. But of course no one knew anything about the other. It’s a great paradox to me that in fact this connection was not seen” (Brenner, My Life, 33-36).
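
The decomposition Brenner describes can be caricatured in a few lines: a constructor builds whatever the tape describes, a copier duplicates the tape itself, and their combination reproduces. A toy Python rendering of the logic (stand-in names and structures, not von Neumann's formal construction):

    # B: build a machine from its tape (the instruction I).
    def construct(tape):
        return {"parts": list(tape)}

    # C: copy the description itself, the step Brenner singles out.
    def copy_tape(tape):
        return list(tape)

    def reproduce(automaton):
        tape = automaton["tape"]
        child = construct(tape)            # build offspring from instructions
        child["tape"] = copy_tape(tape)    # hand it a copy of the instructions
        return child

    description = ["A", "B", "C"]          # stands in for the blueprint I
    parent = construct(description)
    parent["tape"] = copy_tape(description)

    child = reproduce(parent)
    print(child == reproduce(child))       # True: each generation is identical

    # Altering the tape alters every descendant: the analogue of mutation
    # in von Neumann's account, lethal as a rule but occasionally heritable.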


1950 – 1960

The First Application of an Electronic Computer to Molecular or Structural Biology July 9 – July 12, 1951

At the second English computer conference, held in Manchester from July 9-12, 1951, computer scientist John Makepeace Bennett and biochemist and crystallographer John Kendrew described their use of the Cambridge EDSAC for the computation of Fourier syntheses in the calculation of structure factors of the protein molecule myoglobin. This was the first application of an electronic computer to computational biology or structural biology. The first published account of this research appeared in the very scarce Manchester University Computer Conference Proceedings (1951).

Kendrew and Bennett formally published an extended version of their paper as "The Computation of Fourier Syntheses with a Digital Electronic Calculating Machine," Acta Crystallographica 5 (1952) 109-116.

In 1962 Kendrew received the Nobel Prize in chemistry for his discovery of the 3-dimensional molecular structure of myoglobin, the first protein molecule to be "solved."

Hook & Norman, Origins of Cyberspace (2002) nos. 744 & 745.


Intelligence Amplification by Machines 1956

In 1956 English psychiatrist and cybernetician W[illiam] Ross Ashby wrote of intelligence amplification by machines in his book, An Introduction to Cybernetics.


Changes in Tissue Density Can be Computed 1956 – 1964

In work initiated at the University of Cape Town and Groote Schuur Hospital in early 1956, and continued briefly in mid-1957, South African-born American physicist Allen M. Cormack showed that changes in tissue density could be computed from x-ray data. His results were subsequently published in two papers:

"Representation of a Function by its Line Integrals, with Some Radiological Applications," Journal of Applied Physics 34 (1963) 2722-27; "Representation of a Function by its Line Integrals, with Some Radiological Applications. II," Journal of Applied Physics 35 (1964) 2908-13.  

Because of limitations in computing power no machine was constructed during the 1960s. Cormack's papers generated little interest until Godfrey Hounsfield and colleagues invented computed tomography, and built the first CT scanner in 1971, creating a real application of Cormack's theories.
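
Cormack's result is easiest to see in discrete form: if each X-ray measurement is the sum of the densities along its path, then enough rays from different directions determine the densities as the solution of a linear system. A toy numerical illustration in Python (an invented 3x3 "phantom" and idealized, noise-free ray sums, far simpler than Cormack's actual mathematics):

    import numpy as np

    # A 3x3 grid of unknown "tissue densities" (the phantom is invented).
    phantom = np.array([[1.0, 0.2, 0.3],
                        [0.4, 2.0, 0.6],
                        [0.7, 0.8, 1.5]])
    n = 3

    A, b = [], []   # one row of A per ray; b holds the measured ray sums
    def add_ray(mask):
        A.append(mask.ravel())
        b.append((phantom * mask).sum())   # idealized, noise-free measurement

    for i in range(n):
        m = np.zeros((n, n)); m[i, :] = 1; add_ray(m)   # horizontal rays
        m = np.zeros((n, n)); m[:, i] = 1; add_ray(m)   # vertical rays
    for k in range(-(n - 1), n):
        add_ray(np.eye(n, k=k))                # 45-degree diagonal rays
        add_ray(np.fliplr(np.eye(n, k=k)))     # 135-degree diagonal rays

    # Recover the densities from the ray sums alone.
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    print(np.round(x.reshape(n, n), 3))        # matches the phantom

With rays in these four directions the system has full rank, so least squares recovers the phantom exactly; real scanners face noise, much finer grids, and fractional path lengths, which is where Radon-transform inversion and serious computing power come in.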


First International Congress on Cybernetics June 26 – June 29, 1956

From June 26-29, 1956 the First International Congress on Cybernetics was held in Namur, Belgium. Few, if any, of the computer pioneers attended. By this time the field of cybernetics had become separated from those of computing and artificial intelligence to emphasize issues of control and communication in learning, automation, and biology.


John Kendrew Reports the First Solution of the Three-Dimensional Molecular Structure of a Protein 1958 – 1960

In 1958 and 1960 molecular biologist John Kendrew published "A Three-Dimensional Model of the Myoglobin Molecule Obtained by X-ray Analysis" (with G. Bodo, H. M. Dintzis, R. G. Parrish, and H. Wyckoff), Nature 181 (1958) 662-666, and "Structure of Myoglobin: A Three-Dimensional Fourier Synthesis at 2 Å Resolution" (with R. E. Dickerson, B. E. Strandberg, R. G. Hart, D. R. Davies, D. C. Phillips, and V. C. Shore), Nature 185 (1960) 422-27. These papers reported the first solution of the three-dimensional molecular structure of a protein, for which Kendrew received the 1962 Nobel Prize in chemistry, together with his friend and colleague Max Perutz, who solved the structure of the related and more complex protein, hemoglobin, two years after Kendrew's achievement.

Kendrew began his investigation into the structure of myoglobin in 1949, choosing this particular protein because it was “of low molecular weight, easily prepared in quantity, readily crystallized, and not already being studied by X-ray methods elsewhere” (Kendrew, “Myoglobin and the structure of proteins. Nobel Prize Lecture [1962],” pp. 676-677). Protein molecules, which contain, at minimum, thousands of atoms, have enormously convoluted and irregular formations that are extremely difficult to elucidate. In the 1930s J. D. Bernal, Dorothy Hodgkin and Max Perutz performed the earliest crystallographic studies of proteins at Cambridge’s Cavendish Laboratory; however, the intricacies of three-dimensional structure of proteins were too complex for analysis by conventional X-ray crystallography, and the process of calculating the structure factors by slide-rules and electric calculators was far too slow. It was not until the late 1940s, when Kendrew joined the Cavendish Laboratory as a graduate student, that new and more sophisticated tools emerged that could be used to attack the problem. The first of these tools was the technique of isomorphous replacement, developed by Perutz during his own researches on hemoglobin, in which certain atoms in a protein molecule are replaced with heavy atoms. When these modified molecules are subjected to X-ray analysis the heavy atoms provide a frame of reference for comparing diffraction patterns. The second tool was the electronic computer, which Kendrew introduced to computational biology in 1951. The first electronic computer, the ENIAC, which became operational in Philadelphia in 1945, was 10,000 times the speed of a human performing a calculation. In 1951 Cambridge University was one of only three or four places in the world with a high-speed stored-program electronic computer, and Kendrew took full advantage of the speed of Cambridge’s EDSAC computer, and its more powerful successors, to execute the complex mathematical calculations required to solve the structure of myoglobin. Kendrew was the first to apply an electronic computer to the solution of a complex problem in biology.

Nevertheless, even with the EDSAC computer performing the calculations, the research progressed remarkably slowly. Only by the summer of 1957 did Kendrew and his team succeed in creating a three-dimensional map of myoglobin at the so-called "low resolution" of 6 angstroms; thus myoglobin became "the first protein to be solved" (Judson, p. 538).

“A cursory inspection of the map showed it to consist of a large number of rod-like segments, joined at the ends, and irregularly wandering through the structure; a single dense flattened disk in each molecule; and sundry connected regions of uniform density. These could be identified respectively with polypeptide chains, with the iron atom and its associated porphyrin ring, and with the liquid filling the interstices between neighboring molecules. From the map it was possible to ‘dissect out’ a single protein molecule . . . The most striking features of the molecule were its irregularity and its total lack of symmetry” (Kendrew, “Myoglobin,” p. 681).  

The 6-angstrom resolution was too low to show the molecule's finer features, but by 1960 Kendrew and his team were able to obtain a map of the molecule at 2-angstrom resolution. "To achieve a resolution of 2 Å it was necessary to determine the phases of nearly 10,000 reflections, and then to compute a Fourier synthesis with the same number of terms . . . the Fourier synthesis itself (excluding preparatory computations of considerable bulk and complexity) required about 12 hours of continuous computation on a very fast machine (EDSAC II)" (Kendrew, "Myoglobin," p. 682).
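
The synthesis Kendrew describes is the standard crystallographic Fourier sum, in which the electron density at each grid point of the unit cell is rebuilt from the measured reflections:

    \rho(x, y, z) = \frac{1}{V} \sum_{h} \sum_{k} \sum_{l} F(hkl)\, e^{-2\pi i (hx + ky + lz)}

with V the unit-cell volume and F(hkl) the phased structure factors. At 2 Å resolution roughly 10,000 such terms had to be summed at every grid point of the map, which is why the speed of EDSAC II mattered.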


The Inspiration for Artificial Neural Networks, Building Blocks of Deep Learning 1959

In 1959 Harvard neurophysiologists David H. Hubel and Torsten Wiesel, inserted a microelectrode into the primary visual cortex of an anesthetized cat. They then projected patterns of light and dark on a screen in front of the cat, and found that some neurons fired rapidly when presented with lines at one angle, while others responded best to another angle. They called these neurons "simple cells." Still other neurons, which they termed "complex cells," responded best to lines of a certain angle moving in one direction. These studies showed how the visual system builds an image from simple stimuli into more complex representations. Many artificial neural networks, fundamental components of deep learning, may be viewed as cascading models of cell types inspired by Hubel and Wiesel's observations.
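
The cascade maps naturally onto code: a "simple cell" is a correlation of the image with an oriented template at each position, and a "complex cell" pools the strongest simple-cell response over a neighborhood. A minimal numpy sketch of that analogy (an illustration of the idea, not a model from the neuroscience literature):

    import numpy as np

    # "Simple cells": respond to a vertical light/dark edge at each position.
    vertical_edge = np.array([[-1, 1],
                              [-1, 1]])

    def simple_cells(image, template):
        h, w = template.shape
        H, W = image.shape
        out = np.zeros((H - h + 1, W - w + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (image[i:i+h, j:j+w] * template).sum()
        return np.maximum(out, 0)   # fire only for the preferred contrast

    def complex_cells(responses, pool=2):
        # "Complex cells": position-tolerant, max over a small neighborhood.
        H, W = responses.shape
        return np.array([[responses[i:i+pool, j:j+pool].max()
                          for j in range(0, W - pool + 1, pool)]
                         for i in range(0, H - pool + 1, pool)])

    image = np.zeros((6, 6)); image[:, 3:] = 1.0   # a vertical edge
    print(complex_cells(simple_cells(image, vertical_edge)))

Filter, rectify, pool: stacking that pattern is precisely what convolutional neural networks repeat, which is the sense in which deep-learning architectures trace back to these experiments.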

For two later contributions Hubel and Wiesel shared the 1981 Nobel Prize in Physiology or Medicine with Roger W. Sperry.

". . . firstly, their work on development of the visual system, which involved a description of ocular dominance columns in the 1960s and 1970s; and secondly, their work establishing a foundation for visual neurophysiology, describing how signals from the eye are processed by the brain to generate edge detectors, motion detectors, stereoscopic depth detectors and color detectors, building blocks of the visual scene. By depriving kittens from using one eye, they showed that columns in the primary visual cortex receiving inputs from the other eye took over the areas that would normally receive input from the deprived eye. This has important implications for the understanding of deprivation amblyopia, a type of visual loss due to unilateral visual deprivation during the so-called critical period. These kittens also did not develop areas receiving input from both eyes, a feature needed for binocular vision. Hubel and Wiesel's experiments showed that the ocular dominance develops irreversibly early in childhood development. These studies opened the door for the understanding and treatment of childhood  cataracts  and strabismus. They were also important in the study of cortical plasticity.

"Furthermore, the understanding of sensory processing in animals served as inspiration for the SIFT descriptor (Lowe, 1999), which is a local feature used in computer vision for tasks such as object recognition and wide-baseline matching, etc. The SIFT descriptor is arguably the most widely used feature type for these tasks" (Wikipedia article on David H. Hubel, accessed 11-10-2014). 


The Beginning of Expert Systems for Medical Diagnosis July 3, 1959

"Reasoning Foundations of Medical Diagnosis," by Robert S. Ledley and Lee B. Lusted, published in Science 130, no. 3366 (July 3, 1959) 9-21, represented the beginning of the development of clinical decision support systems (CDSS) — interactive computer programs, or expert systems, designed to assist physicians and other health professionals with decision-making tasks.

"Areas covered included: symbolic logic, Bayes' theorem (probability), and value theory. In the article, physicians were instructed how to create diagnostic databases using edge-notched cards to prepare for a time when they would have the opportunity to enter their data into electronic computers for analysis. Ledley and Lusted expressed hope that by harnessing computers, much of physicians' work would become automated and that many human errors could therefore be avoided.

"Within medicine, Ledley and Lusted’s article has remained influential for decades, especially within the field of medical decision making. Among its most enthusiastic readers was cardiologist Homer R. Warner, who emulated Ledley and Lusted’s methods at his research clinic at LDS Hospital in Utah. Warner’s work, in turn, shaped many of the practices and priorities of the heavily computerized Intermountain Healthcare, Inc., which was in 2009 portrayed by the Obama administration as an exemplary model of a healthcare system that provided high-quality and low-cost care.

"The article also brought national media attention to Ledley and Lusted’s work. Articles about the work of the two men ran in several major US newspapers. A small demonstration device Ledley built to show how electronic diagnosis would work was described in the New York World Telegram as a “A Metal Brain for Diagnosis,” while the New York Post ran a headline: “Dr. Univac Wanted in Surgery.” On several occasions, Ledley and Lusted explained to journalists that they believed that computers would aid physicians rather than replace them, and that the process of introducing computers to medicine would be very challenging due to the non-quantitative nature of much medical information. They also envisioned, years before the development of ARPANET, a national network of medical computers that would allow healthcare providers to create a nationally-accessible medical record for each American and would allow rapid mass data analysis as information was gathered by individual clinics and sent to regional and national computer centers" (Wikipedia article on Robert Ledley, accessed 05-03-2014.)

(This entry was last revised on 05-03-2014.)
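
The Bayesian core of Ledley and Lusted's proposal fits in a few lines: combine the prevalence of each condition with the probability of the observed symptom pattern under that condition, then rank the posteriors. A schematic Python example (all probabilities invented, and symptoms treated as independent given the condition, a strong simplification):

    import math

    priors = {"disease A": 0.01, "disease B": 0.10, "healthy": 0.89}

    # P(symptom | condition) for each symptom, condition pair (made up).
    likelihoods = {
        "disease A": {"fever": 0.90, "rash": 0.80},
        "disease B": {"fever": 0.70, "rash": 0.05},
        "healthy":   {"fever": 0.02, "rash": 0.01},
    }

    observed = ["fever", "rash"]

    # Bayes' theorem: posterior proportional to prior times likelihood.
    posterior = {c: p * math.prod(likelihoods[c][s] for s in observed)
                 for c, p in priors.items()}
    total = sum(posterior.values())

    for condition, weight in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"P({condition} | {', '.join(observed)}) = {weight / total:.3f}")

Note that "disease A" tops the ranking despite its low prevalence because the symptom pattern is far more probable under it, exactly the kind of reweighting the article asked physicians to prepare their data for.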


1960 – 1970

The First Symposium on Bionics September 13 – September 15, 1960

From September 13-15, 1960 the first symposium on bionics (biological electronics) took place at Wright-Patterson Air Force Base in Ohio. (See Reading 11.7.)


Joseph Weizenbaum Writes ELIZA: A Pioneering Experiment in Artificial Intelligence Programming 1964 – 1966

Between 1964 and 1966 German and American computer scientist Joseph Weizenbaum at MIT wrote the computer program ELIZA. This program, named after the ingenue in George Bernard Shaw's play Pygmalion, was an early example of primitive natural language processing. The program operated by processing users' responses to scripts, the most famous of which was DOCTOR, which was capable of engaging humans in a conversation which bore a striking resemblance to one with an empathic psychologist. Weizenbaum modeled its conversational style after Carl Rogers, who introduced the use of open-ended questions to encourage patients to communicate more effectively with therapists. The program applied pattern matching rules to statements to figure out its replies. Using almost no information about human thought or emotion, DOCTOR sometimes provided a startlingly human-like interaction.

"When the "patient" exceeded the very small knowledge base, DOCTOR might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?" A possible response to "My mother hates me" would be "Who else in your family hates you?" ELIZA was implemented using simple pattern matching techniques, but was taken seriously by several of its users, even after Weizenbaum explained to them how it worked. It was one of the first chatterbots in existence" (Wikipedia article on ELIZA, accessed 06-15-2014).

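The pattern-matching core of such a program is small enough to sketch: try a list of templates in order and echo captured fragments back inside canned responses. A minimal Python imitation in the spirit of DOCTOR (a few invented rules, not Weizenbaum's actual script language):

    import re

    # Each rule: a regular expression and a template using the captured group.
    rules = [
        (re.compile(r"my (.+) hurts", re.I), "Why do you say your {0} hurts?"),
        (re.compile(r"my (.+) hates me", re.I), "Who else in your family hates you?"),
        (re.compile(r"i feel (.+)", re.I), "Do you often feel {0}?"),
        (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    ]
    FALLBACK = "Please tell me more."

    def respond(utterance):
        for pattern, template in rules:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(respond("My head hurts"))       # Why do you say your head hurts?
    print(respond("My mother hates me"))  # Who else in your family hates you?
    print(respond("I am sad about it"))   # How long have you been sad about it?

Both sample exchanges quoted above fall out of the first two rules; anything off-script gets the fallback, which is roughly how quickly the illusion breaks down.
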
"Weizenbaum was shocked that his program was taken seriously by many users, who would open their hearts to it. He started to think philosophically about the implications of artificial intelligence and later became one of its leading critics.

"His influential 1976 book Computer Power and Human Reason displays his ambivalence towards computer technology and lays out his case: while Artificial Intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom. Weizenbaum makes the crucial distinction between deciding and choosing. Deciding is a computational activity, something that can ultimately be programmed. Choice, however, is the product of judgment, not calculation. It is the capacity to choose that ultimately makes us human. Comprehensive human judgment is able to include non-mathematical factors, such as emotions. Judgment can compare apples and oranges, and can do so without quantifying each fruit type and then reductively quantifying each to factors necessary for comparison" (Wikipedia article on Joseph Weizenbaum, accessed 06-15-2014).


MEDLARS: The First Large Scale Computer-Based Retrospective Search Service Available to the General Public January 1964

In January 1964 the Medical Literature Analysis and Retrieval System (MEDLARS) became operational at the National Library of Medicine, Bethesda, Maryland.

MEDLARS was the first large scale, computer-based, retrospective search service available to the general public.  It was also the first major machine-readable database and batch-retrieval system.


Feigenbaum, Djerassi & Lederberg Develop DENDRAL the First Expert System 1965

In 1965 artificial intelligence researcher Edward Feigenbaum, chemist Carl Djerassi, and molecular biologist Joshua Lederberg began their collaboration at Stanford University on Dendral, a long-term pioneering project in artificial intelligence that produced what is considered the first computer software expert system.

"In the early 1960s, Joshua Lederberg started working with computers and quickly became tremendously interested in creating interactive computers to help him in his exobiology research. Specifically, he was interested in designing computing systems to help him study alien organic compounds. As he was not an expert in either chemistry or computer programming, he collaborated with Stanford chemist Carl Djerassi to help him with chemistry, and Edward Feigenbaum with programming, to automate the process of determining chemical structures from raw mass spectrometry data. Feigenbaum was an expert in programming languages and heuristics, and helped Lederberg design a system that replicated the way Carl Djerassi solved structure elucidation problems. They devised a system called Dendritic Algorithm (Dendral) that was able to generate possible chemical structures corresponding to the mass spectrometry data as an output" (Wikipedia article on Dendral, accessed 12-22-2013).

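The generate-and-test cycle at Dendral's core can be sketched in Python. The toy below enumerates small carbon-hydrogen-oxygen formulas matching an observed nominal molecular mass and prunes them with a crude valence rule; it is a stand-in for Dendral's far richer chemical heuristics, not a reconstruction of the program:

    from itertools import product

    MASS = {"C": 12, "H": 1, "O": 16}  # integer (nominal) atomic masses

    def candidate_formulas(target_mass, max_atoms=8):
        """Generate candidate C/H/O formulas matching the molecular ion
        mass, then test each against a crude plausibility constraint."""
        for c, h, o in product(range(max_atoms + 1), repeat=3):
            if c * MASS["C"] + h * MASS["H"] + o * MASS["O"] != target_mass:
                continue
            if c and h <= 2 * c + 2:  # hydrogens cannot exceed 2C + 2
                yield f"C{c}H{h}O{o}"

    # Mass 46 admits formic acid (CH2O2) and ethanol/dimethyl ether (C2H6O).
    print(list(candidate_formulas(46)))  # ['C1H2O2', 'C2H6O1']
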
Lindsay, Buchanan, Feigenbaum, Lederberg, Applications of Artificial Intelligence for Organic Chemistry. The DENDRAL Project (1980).


Aaron Klug Invents Digital Image Processing 1966

In 1966 English molecular biologist Aaron Klug at the University of Cambridge formulated a method for digital image processing of two-dimensional images.

A. Klug and D. J. de Rosier, “Optical filtering of electron micrographs: Reconstruction of one-sided images,” Nature 212 (1966): 29-32.


Cyrus Levinthal Builds the First System for Interactive Display of Molecular Structures 1966

In 1966, using Project MAC, an early time-sharing system at MIT, Cyrus Levinthal built the first system for the interactive display of molecular structures.

"This program allowed the study of short-range interaction between atoms and the "online manipulation" of molecular structures. The display terminal (nicknamed Kluge) was a monochrome oscilloscope (figures 1 and 2), showing the structures in wireframe fashion (figures 3 and 4). Three-dimensional effect was achieved by having the structure rotate constantly on the screen. To compensate for any ambiguity as to the actual sense of the rotation, the rate of rotation could be controlled by a globe-shaped device on which the user rested his/her hand (an ancestor of today's trackball). Technical details of this system were published in 1968 (Levinthal et al.). What could be the full potential of such a set-up was not completely settled at the time, but there was no doubt that it was paving the way for the future. Thus, this is the conclusion of Cyrus Levinthal's description of the system in Scientific American (p. 52):

It is too early to evaluate the usefulness of the man-computer combination in solving real problems of molecular biology. It does seem likely, however, that only with this combination can the investigator use his "chemical insight" in an effective way. We already know that we can use the computer to build and display models of large molecules and that this procedure can be very useful in helping us to understand how such molecules function. But it may still be a few years before we have learned just how useful it is for the investigator to be able to interact with the computer while the molecular model is being constructed.

"Shortly before his death in 1990, Cyrus Levinthal penned a short biographical account of his early work in molecular graphics. The text of this account can be found here."

In January 2014 two short films produced with the interactive molecular graphics and modeling system devised by Cyrus Levinthal and his collaborators in the mid-1960s were available at this link.
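
The rotation depth cue is easy to reproduce. A Python sketch (not Levinthal's code, which drove an oscilloscope from a time-shared mainframe) rotates atom coordinates about the vertical axis and projects them onto the screen plane; the globe controller amounted to letting the user vary the rotation rate:

    import math

    def rotate_y(points, theta):
        """Rotate 3-D points about the vertical axis by theta radians."""
        c, s = math.cos(theta), math.sin(theta)
        return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

    def project(points):
        """Orthographic projection onto the screen plane (drop z)."""
        return [(round(x, 2), round(y, 2)) for x, y, _ in points]

    atoms = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.0, 0.0, 0.5)]  # toy molecule

    rate = 0.05  # radians per frame; the globe device adjusted this rate
    for frame in range(3):
        print(project(rotate_y(atoms, frame * rate)))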


Aaron Klug Invents Three-Dimensional Image Processing January 1968

In January 1968 English molecular biologist Aaron Klug described techniques for the reconstruction of three-dimensional structures from electron micrographs, thus founding the processing of three-dimensional digital images.

D. J. de Rosier and A. Klug, “Reconstruction of three dimensional structures from electron micrographs,” Nature 217 (1968) 130-34.


1970 – 1980

Godfrey Hounsfield Invents Computed Tomography (CT) 1971

In 1971 English electrical engineer Godfrey Hounsfield at EMI's Central Research Laboratories in Hayes, Middlesex, invented computed tomography (CT), the first application of computers to medical imaging.


PARRY: An Artificial Intelligence Program with "Attitude" 1972

PARRY, a computer program written in LISP in 1972 by American psychiatrist Kenneth Colby, then at Stanford University, attempted to simulate a paranoid schizophrenic. The program implemented a crude model of the behavior of a paranoid schizophrenic based on concepts, conceptualizations, and beliefs (judgments about conceptualizations: accept, reject, neutral). As it embodied a conversational strategy, it was more serious and advanced than Joseph Weizenbaum's ELIZA (1964-66). PARRY was described as "ELIZA with attitude".

"PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the 'patients' were human and which were computer programs. The psychiatrists were able to make the correct identification only 48 percent of the time — a figure consistent with random guessing.

"PARRY and ELIZA (also known as "the Doctor") 'met' several times. The most famous of these exchanges occurred at the ICCC 1972, where PARRY and ELIZA were hooked up over ARPANET and 'talked' to each other" (Wikipedia article on PARRY, accessed 06-15-2014).


The First Patent for MRI March 17, 1972

On March 17, 1972 Armenian-American medical practitioner and inventor Raymond V. Damadian filed a patent for "An Apparatus and Method for Detecting Cancer in Tissue."

Damadian's patent 3,789,832 was granted on February 5, 1974. This was the first patent on the use of Nuclear Magnetic Resonance for scanning the human body, but it did not describe a method for generating pictures from such a scan, or precisely how such a scan might be achieved.


The Beginnings of Magnetic Resonance Imaging 1973

In 1973 American chemist Paul Lauterbur, working at the State University of New York at Stony Brook, developed a way to generate the first Magnetic Resonance Images (MRI), in 2D and 3D, using gradients. Lauterbur described an imaging technique that removed the usual resolution limits due to the wavelength of the imaging field. He used

"two fields: one interacting with the object under investigation, the other restricting this interaction to a small region. Rotation of the fields relative to the object produces a series of one-dimensional projections of the interacting regions, from which two- or three-dimensional images of their spatial distribution can be reconstructed" (http://www.nature.com/physics/looking-back/lauterbur/index.html, accessed 11-23-2008).

This was the beginning of magnetic resonance imaging.
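
The principle of reconstructing a two-dimensional image from one-dimensional projections can be shown on a toy grid. The sketch below uses only two viewing angles and unfiltered back-projection; practical MRI and CT reconstruction use many angles and a filtering step:

    # A 3x3 'object' with one bright voxel.
    image = [[0, 0, 0],
             [0, 9, 0],
             [0, 0, 0]]

    rows = [sum(r) for r in image]                       # projection at 0 degrees
    cols = [sum(r[j] for r in image) for j in range(3)]  # projection at 90 degrees

    # Smear each projection back across the grid and sum the two views.
    recon = [[rows[i] + cols[j] for j in range(3)] for i in range(3)]
    for r in recon:
        print(r)  # the bright voxel stands out where the projections intersect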

"When Lauterbur first submitted his paper with his discoveries to Nature, the paper was rejected by the editors of the journal. Lauterbur persisted and requested them to review it again, upon which time it was published and is now acknowledged as a classic Nature paper. The Nature editors pointed out that the pictures accompanying the paper were too fuzzy, although they were the first images to show the difference between heavy water and ordinary water. Lauterbur said of the initial rejection: 'You could write the entire history of science in the last 50 years in terms of papers rejected by Science or Nature' " (Wikipedia article on Paul Lauterbur, accessed 03-08-2012).

Lauterbur, "Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance," Nature 242 (1973) 190-191.

♦ Lauterbur's Nobel Lecture is available from the Nobel website. You can also watch a 65 minute video of Lauterbur delivering the lecture from this link.


Robert Ledley Develops the First Whole-Body CT Scanner 1973

In 1973 American dentist and biophysicist Robert S. Ledley of Georgetown University and colleagues developed the ACTA 0100 CT Scanner (Automatic Computerized Transverse Axial), the first whole-body computed tomography scanner.

"This machine had 30 photomultiplier tubes as detectors and completed a scan in only 9 translate/rotate cycles, much faster than the EMI-scanner. It used a DEC PDP11/34 minicomputer both to operate the servo-mechanisms and to acquire and process the images. The Pfizer drug company acquired the prototype from the university, along with rights to manufacture it. Pfizer then began making copies of the prototype, calling it the "200FS" (FS meaning Fast Scan), which were selling as fast as they could make them. This unit produced images in a 256x256 matrix, with much better definition than the EMI-Scanner's 80 x 80" (Wikipedia article on Computed Tomography, accessed 04-15-2009).

Ledley R. S., Di Chiro G, Luessenhop A. J., Twigg H. L. "Computerized transaxial x-ray tomography of the human body," Science 186, No. 4160 (1974) 207-212.


The Brain-Computer Interface 1973

In 1973 computer scientist Jacques J. Vidal of UCLA coined the term brain-computer interface (BCI) in his paper "Toward Direct Brain-Computer Communication," Annual Review of Biophysics and Bioengineering 2: 157–80. doi:10.1146/annurev.bb.02.060173.001105. PMID 4583653.


The Code of Fair Information Practice July 1973

In July 1973 Records, Computers, and the Rights of Citizens was published. This was the report of the Advisory Committee on Automated Personal Data Systems appointed by Elliot L. Richardson, secretary of the Department of Health, Education and Welfare. The report explored the impact of computerized record keeping on individuals, and recommended a Code of Fair Information Practice, consisting of five basic principles:

1. "There must be no data record-keeping systems whose very existence is secret."

2. "There must be a way for an individual to find out what information about him is in a record and how it is used."

3. "There must be a way for an individual to prevent information about him obtained for one purpose from being used or made available for other purposes without his consent."

4. "There must be a way for an individual to correct or amend a record of identifiable information about him."

5. "Any organization creating, maintaining, using or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take reasonable precautions to prevent misuse of the data."


The Sanger Method of Rapid DNA Sequencing 1977

In 1977 English biochemist Frederick Sanger and colleagues at the University of Cambridge developed a method for the rapid sequencing of long sections of DNA molecules. Sanger's method, and the chemical method developed independently by Allan Maxam and Walter Gilbert, made it possible to read the nucleotide sequence of entire genes that run from 1,000 to 30,000 bases long. Sanger sequencing remained the most widely used sequencing method for approximately 25 years.

In 1980 Sanger shared the Nobel Prize in Chemistry with Walter Gilbert and Paul Berg. Berg received half of the prize "for his fundamental studies of the biochemistry of nucleic acids, with particular regard to recombinant-DNA". The other half was split between Walter Gilbert and Frederick Sanger "for their contributions concerning the determination of base sequences in nucleic acids". This was Sanger's second Nobel Prize.

Sanger, F., Nicklen, S., and Coulson, A.R., "DNA Sequencing with Chain-Terminating Inhibitors," Proc. Nat. Acad. Sci. (USA) 74 (1977) 5463-67.


Making MRI Feasible 1977

In 1977 British physicist Peter Mansfield developed a mathematical technique that would allow NMR scans to take seconds rather than hours and produce clearer images than the technique Paul Lauterbur developed in 1973.

Mansfield showed how gradients in the magnetic field could be mathematically analysed, which made it possible to develop a useful nuclear magnetic resonance imaging technique. He also showed how extremely fast imaging could be achieved; this became technically possible a decade later.

P. Mansfield and A. A. Maudsley, "Medical imaging by NMR," Brit. J. Radiol. 50 (1977) 188.
P. Mansfield, "Multi-planar image formation using NMR spin echoes," J. Physics C: Solid State Phys. 10 (1977) L55-L58.

The references are from Mansfield's Nobel Lecture. In December 2013 a 64 minute video of Mansfield delivering his lecture was available at this link.


1980 – 1990

The First Whole Genome Shotgun Sequence 1982

In 1982 British biochemist Frederick Sanger and colleagues sequenced the entire genome of bacteriophage lambda using a random shotgun technique. This was the first whole genome shotgun (WGS) sequence.

Sanger et al., "Nucleotide Sequence of Bacteriophage Lambda DNA," J. Mol. Biol. 162 (1982) 729-773.
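
The idea of shotgun assembly, reassembling a sequence from random overlapping fragments, can be shown with a toy greedy merger. This illustrates the principle only; real assemblers must cope with sequencing errors, repeats and millions of reads:

    def overlap(a, b, min_len=3):
        """Length of the longest suffix of a that is a prefix of b."""
        for n in range(min(len(a), len(b)), min_len - 1, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def greedy_assemble(reads):
        """Repeatedly merge the pair of reads with the largest overlap."""
        reads = list(reads)
        while len(reads) > 1:
            n, a, b = max((overlap(x, y), x, y)
                          for x in reads for y in reads if x is not y)
            if n == 0:
                break  # no overlaps left; remaining contigs stay separate
            reads.remove(a)
            reads.remove(b)
            reads.append(a + b[n:])
        return reads

    # Random overlapping fragments of the toy 'genome' ATTAGACCTGCCGGAA.
    print(greedy_assemble(["ATTAGACCTG", "GACCTGCCGG", "CTGCCGGAA"]))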


Defining a General Framework for Studying Complex Biological Systems 1982

In 1982 Vision: A Computational Investigation into the Human Representation and Processing of Visual Information by the British neuroscientist David Marr, a professor at MIT, was published posthumously in New York. This work defined a general framework for studying complex biological systems.

"According to Marr, a complex biological system can be understood at three distinct levels. The first level ("computational level") describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain's identification of the objects present in the image we had observed. The second level ("algorithmic level") describes the procedure by which an input is converted to an output, i.e. how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level ("implementation level") describes how our own biological hardware of cells implements the procedure described by the algorithmic level" (Yarden Katz, "Noam Chomsky on Where Artificial Intelligence Went Wrong," Atlantic Monthly, 11-1-2012).


The First Book on Neuromorphic Computing 1984

In 1984 Carver Mead, professor of electrical engineering and computer science at Caltech, published Analog VLSI and Neural Systems. This was the first book on neuromorphic engineering or neuromorphic computing, a concept developed by Mead that involves

"... the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times the term neuromorphic has been used to describe analog, digital, and mixed-mode analog/digital VLSI and software systems that implement models of neural systems (for perception, motor control, or multisensory integration).

"A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change" (Wikipedia article on Neuromorphic engineering, accessed 01-01-2014).


The First Study of Ancient DNA (aDNA) November 15, 1984

On November 15, 1984 Russell Higuchi, Barbara Bowman, Mary Freiberger, and Allan C. Wilson of the Department of Biochemistry at the University of California, Berkeley, and Oliver A. Ryder of the Research Department, San Diego Zoo, published "DNA sequences from the quagga, an extinct member of the horse family," Nature 312, 282-284; doi:10.1038/312282a0. This was probably the first study of DNA isolated from ancient specimens, or ancient DNA (aDNA).

"To determine whether DNA survives and can be recovered from the remains of extinct creatures, we have examined dried muscle from a museum specimen of the quagga, a zebra-like species (Equus quagga) that became extinct in 1883. We report that DNA was extracted from this tissue in amounts approaching 1% of that expected from fresh muscle, and that the DNA was of relatively low molecular weight. Among the many clones obtained from the quagga DNA, two containing pieces of mitochondrial DNA (mtDNA) were sequenced. These sequences, comprising 229 nucleotide pairs, differ by 12 base substitutions from the corresponding sequences of mtDNA from a mountain zebra, an extant member of the genus Equus. The number, nature and locations of the substitutions imply that there has been little or no postmortem modification of the quagga DNA sequences, and that the two species had a common ancestor 3–4 Myr ago, consistent with fossil evidence concerning the age of the genus Equus."

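The underlying comparison is simple to state in code: count base substitutions between aligned sequences. A Python sketch with invented stand-in fragments (the real study compared 229 aligned mitochondrial base pairs):

    quagga = "ACGTTGCAATGCCT"  # invented stand-ins, not the published data
    zebra  = "ACGTCGCAATGTCT"

    diffs = sum(1 for q, z in zip(quagga, zebra) if q != z)
    print(f"{diffs} substitutions in {len(quagga)} aligned bases "
          f"({diffs / len(quagga):.1%} divergence)")
    # 2 substitutions in 14 aligned bases (14.3% divergence)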

Origins of the Human Genome Project December 1984 – April 1987

In 1985, as Director of the U.S. Department of Energy’s (DOE) Health and Environmental Research Programs, Charles DeLisi and his advisors proposed, planned and defended before the White House Office of Management and Budget and the Congress, the Human Genome Project. The proposal created a storm of controversy, but was included in President Ronald Reagan’s Fiscal Year 1987 budget submission to the Congress, and subsequently passed both the House and the Senate.

The project may trace its beginning to a workshop known as the Alta Summit, held in Alta, Utah, in December 1984.

"Robert Sinsheimer, then Chancellor of the University of California, Santa Cruz (UCSC), thought about sequencing the human genome as the core of a fund-raising opportunity in late 1984. He and others convened a group of eminent scientists to discuss the idea in May 1985. This workshop planted the idea, although it did not succeed in attracting money for a genome research institute on the campus of UCSC. Without knowing about the Santa Cruz workshop, Renato Dulbecco of the Salk Institute conceived of sequencing the genome as a tool to understand the genetic origins of cancer. Dulbecco, a Nobel Prize winning molecular biologist, laid out his ideas on Columbus Day, 1985, and subsequently in other public lectures and in a commentary for Science. The commentary, published in March 1986, was the first widely public exposure of the idea and gave impetus to the idea's third independent origin, by then already gathering steam.

"Charles DeLisi, who did not initially know about either the Santa Cruz workshop or Dulbecco's public lectures, conceived of a concerted effort to sequence the human genome under the aegis of the Department of Energy (DOE). DeLisi had worked on mathematical biology at the National Cancer Institute, the largest component of the National Institutes of Health (NIH). How to interpret DNA sequences was one of the problems he had studied, working with the T-10 group at Los Alamos National Laboratory in New Mexico (a group of mathematicians and others interested in applying mathematics and computational techniques to biological questions). In 1985, DeLisi took the reins of DOE's Office of Health and Environmental Research, the program that supported most biology in the Department. The origins of DOE's biology program traced to the Manhattan Project, the World War II program that produced the first atomic bombs with its concern about how radiation caused genetic damage.

"In the fall of 1985, DeLisi was reading a draft government report on technologies to detect inherited mutations, a nagging problem in the study of children of those exposed to the Hiroshima and Nagasaki bombs, when he came up with the idea of a concerted program to sequence the human genome. DeLisi was positioned to translate his idea into money and staff. While his was the third public airing of the idea, it was DeLisi's conception and his station in government science administration that launched the genome project" (Robert Mullan Cook-Deegan, Origins of the Human Genome Project, accessed 05-24-2009).

In March 1986 the Department of Energy, Office of Health and Environmental Research, sponsored a workshop in Santa Fe, New Mexico. The proceedings, edited by M. Bitensky, were published as Sequencing the Human Genome: Summary Report of the Santa Fe Workshop, March 3-4, 1986.

The initial report on the Human Genome Project appeared in April 1987 as:

Report on the Human Genome Initiative for the Office of Health and Environmental Research, Prepared by the Subcommittee on Human Genome of the Health and Environmental Research Advisory Committee for the U.S. Department of Energy Office of Energy Research Office of Health and Environmental Research.


The First Semi-Automatic DNA Sequencer 1986

In 1986 Leroy Hood and Lloyd Smith from the California Institute of Technology developed the first semi-automatic DNA sequencer, working with a laser that recognized fluorescing DNA markers.

"A biologist at the California Institute of Technology and a founder of ABI [Applied Biosystems, Inc.], Hood improved the existing Sanger method of enzymatic sequencing, which was becoming the laboratory standard. In this method, DNA to be sequenced is cut apart, and a single strand serves as a template for the synthesis of complementary strands. The nucleotides used to build these strands are randomly mixed with a radioactively labeled and modified nucleotide that terminates the synthesis. Fragments of all different lengths result. The resulting array, sent through a separation gel, reveals the order of the bases. Transferred to film, an "autoradiograph" provides a readable sequence from raw data. This data could be transferred to a computer by a human reader.

"In automating the process, Hood modified both the chemistry and the data-gathering processes. In the sequencing reaction itself, he sought to replace the use of radioactive labels, which were unstable, posed a health hazard, and required separate gels for each of the four DNA bases.

" • In place of radioisotopes, Hood developed chemistry that used fluorescent dyes of different colors—one for each of the four DNA bases. This system of "color-coding" eliminated the need to run several reactions in overlapping gels.

"The fluorescent labels were also aspects of the larger system that revolutionized the end stage of the process—the way in which sequence data was gathered. Hood integrated laser and computer technology, eliminating the tedious process of information-gathering by hand.

" • As the fragments of DNA percolated through the gel, a laser beam stimulated the fluorescent labels, causing them to glow. The light they emitted was picked up by a lens and photomultiplier, and transmitted as digital information directly into a computer" (Genome News Network, Genetics and Genomics Timeline 1989, accessed 05-25-2009).

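The data-gathering end of the process reduces to reading off dye colours in order of fragment length. A toy Python version, with an illustrative dye-to-base assignment:

    # Each chain-terminated fragment carries a dye identifying its final
    # base; ordering fragments by length reads out the sequence.
    DYE_TO_BASE = {"blue": "C", "green": "A", "yellow": "G", "red": "T"}

    # (fragment length, dye colour seen as the band passes the laser)
    detected = [(3, "green"), (1, "red"), (4, "yellow"), (2, "blue")]

    sequence = "".join(DYE_TO_BASE[dye] for _, dye in sorted(detected))
    print(sequence)  # TCAG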

The First Map of the Functioning Structure of an Entire Brain November 12, 1986

On November 12, 1986 J. G. White, E. Southgate, J. N. Thomson and S[idney] Brenner published "The Structure of the Nervous System of the Nematode Caenorhabditis elegans," Philosophical Transactions of the Royal Society B: Biological Sciences 314, no. 1165 (1986) 1-340. The first map of the functioning structure of an entire brain at the cellular level, this paper has been called the beginning of connectomics.

"The structure and connectivity of the nervous system of the nematode Caenorhabditis elegans has been deduced from reconstructions of electron micrographs of serial sections. The hermaphrodite nervous system has a total complement of 302 neurons, which are arranged in an essentially invariant structure. Neurons with similar morphologies and connectivities have been grouped together into classes; there are 118 such classes. Neurons have simple morphologies with few, if any, branches. Processes from neurons run in defined positions within bundles of parallel processes, synaptic connections being made en passant. Process bundles are arranged longitudinally and circumferentially and are often adjacent to ridges of hypodermis. Neurons are generally highly locally connected, making synaptic connections with many of their neighbours. Muscle cells have arms that run out to process bundles containing motoneuron axons. Here they receive their synaptic input in defined regions along the surface of the bundles, where motoneuron axons reside. Most of the morphologically identifiable synaptic connections in a typical animal are described. These consist of about 5000 chemical synapses, 2000 neuromuscular junctions and 600 gap junctions" (Abstract).


The First DNA Sequencing Machine 1987

In 1987 Applied Biosystems, Foster City, California, marketed the first commercial DNA sequencing machine, based on Leroy Hood’s technology.


The First Analog Silicon Retina 1988

With his student Misha Mahowald, computer scientist Carver Mead at Caltech described the first analog silicon retina in "A Silicon Model of Early Visual Processing," Neural Networks 1 (1988) 91-97. The silicon retina used analog electrical circuits to mimic the biological functions of rod cells, cone cells, and other non-photoreceptive cells in the retina of the eye. It was the first example of using continuously operating floating-gate (FG) programming/erasing techniques, in this case UV light, as the backbone of an adaptive circuit technology. The invention was not only potentially useful as a device for restoring sight to the blind, but it was also one of the most eclectic feats of electrical and biological engineering of the time.

"The approach to silicon models of certain neural computations expressed in this chip, and its successors, foreshadowed a totally new class of physically based computations inspired by the neural paradigm. More recent results demonstrated that a wide range of visual and auditory computations of enormous complexity can be carried out in minimal area and with minute energy dissipation compared with digital implementations" (http://www.cns.caltech.edu/people/faculty/mead/carver-contributions.pdf, accessed 12-23-2013).

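The flavour of such "early visual processing" can be conveyed by a one-dimensional centre-surround computation, the edge-enhancing lateral inhibition that the silicon retina implemented with analog circuits (the digital sketch below is only an analogy):

    intensity = [1, 1, 1, 1, 8, 8, 8, 8]  # a step edge in brightness

    def center_surround(signal):
        """Each output is the local intensity minus the mean of its
        neighbours, so uniform regions cancel and edges stand out."""
        return [signal[i] - (signal[i - 1] + signal[i + 1]) / 2
                for i in range(1, len(signal) - 1)]

    print(center_surround(intensity))  # [0.0, 0.0, -3.5, 3.5, 0.0, 0.0]
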
In 1992 Mahowald received her Ph.D. under Mead at Caltech with her thesis, VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function. 


The National Center for Biotechnology Information is Founded November 4, 1988

Recognizing the importance of computerized information processing methods for the conduct of biomedical research, on November 4, 1988 legislation sponsored by Representative (and former Senator) Claude Pepper established the National Center for Biotechnology Information (NCBI) as a division of the National Library of Medicine (NLM), Bethesda, Maryland. NLM was chosen for its experience in creating and maintaining biomedical databases, and because as part of NIH, it could establish an intramural research program in computational molecular biology.


1990 – 2000

Expressed Sequence Tags 1991

In 1991 J. Craig Venter and colleagues at the National Institutes of Health described a fast new approach to gene discovery using Expressed Sequence Tags (ESTs). Although controversial when first introduced, ESTs were soon widely employed both in public and private sector research. They proved economical and versatile, used not only for rapid identification of new genes, but also for analyzing gene expression, gene families, and possible disease-causing mutations.


The Spread of Data-Driven Research From 1993 to 2013 1993 – 2013

On p. 16 of the printed edition of California Magazine 124, Winter 2013, there was an unsigned sidebar headlined "Data U." It contained a chart showing the spread of data-driven research during the twenty years from 1993 to 2013, from a limited number of academic disciplines in 1993 to nearly every facet of university research.

According to the sidebar, in 1993 data-driven research was part of the following fields:

Artificial Intelligence: machine learning, natural language processing, vision, mathematical models of cognition and learning

Chemistry: chemical or biomolecular engineering

Computational Science: computational fluid mechanics, computational materials sciences

Earth and Planetary Science: climate modeling, seismology, geographic information systems

Marketing: online advertising, consumer behavior

Physical Sciences: astronomy, particle physics, geophysics, space sciences

Signal Processing: compressed sensing, inverse imaging

Statistics

By the end of 2013 data-driven research was pervasive not only in the fields listed above, but also in the following fields:

Biology: genomics, proteomics, ecoinformatics, computational cell biology

Economics: macroeconomic policy, taxation, labor economics, microeconomics, finance, real estate

Engineering: sensor networks (traffic control, energy-efficient buildings, brain-machine interface)

Environmental Sciences: deforestation, climate change, impacts of pollution

Humanities: digital humanities, archaeology, land use, cultural geography, cultural heritage

Law: privacy, security, forensics, drug/human/CBRNe trafficking, criminal justice, incarceration, judicial decision making, corporate law

Linguistics: historical linguistics, corpus linguistics, psycholinguistics, language and cognition

Media: social media, mobile apps, human behavior

Medicine and Public Health: imaging, medical records, epidemiology, environmental conditions, health

Neuroscience: fMRI, multi-electrode recordings, theoretical neuroscience

Political Science & Public Policy: voter turn-out, elections, political behavior, social welfare, poverty, youth policy, educational outcomes

Psychology: social psychology

Sociology & Demography: social change, stratification, social networks, population health, aging, immigration, family

Urban Planning: transportation studies, urban environments


Venter Founds Celera Genomics May 1998

In May 1998 Craig Venter founded Celera Genomics, with Applera Corporation (Applied Biosystems) in Rockville, Maryland, to sequence and assemble the human genome.


IBM's Blue Gene Project Begins December 1999

In December 1999 IBM announced the start of a five-year effort to build a massively parallel computer, Blue Gene, for the study of bio-molecular phenomena such as protein folding. The machine was planned to be 500 times more powerful than the world's fastest computers of the day.


2000 – 2005

A Model of Cortical Processing as an Electronic Circuit of 16 "Neurons" that Could Select and Amplify Input Signals Much Like the Cortex of the Mammalian Brain 2000

In 2000 a research team from the Institute of Neuroinformatics ETHZ/UNI Zurich; Bell Laboratories, Murray Hill, NJ; and the Department of Brain and Cognitive Sciences & Department of Electrical Engineering and Computer Science at MIT created an electrical circuit of 16 "neurons" that could select and amplify input signals much like the cortex of the mammalian brain.

"Digital circuits such as the flip-flop use feedback to achieve multi-stability and nonlinearity to restore signals to logical levels, for example 0 and 1. Analogue feedback circuits are generally designed to operate linearly, so that signals are over a range, and the response is unique. By contrast, the response of cortical circuits to sensory stimulation can be both multistable and graded. We propose that the neocortex combines digital selection of an active set of neurons with analogue response by dynamically varying the positive feedback inherent in its recurrent connections. Strong positive feedback causes differential instabilities that drive the selection of a set of active neurons under the constraints embedded in the synaptic weights. Once selected, the active neurons generate weaker, stable feedback that provides analogue amplification of the input. Here we present our model of cortical processing as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex" (Abstract).

R. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas and H. S. Seung, "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit," Nature 405 (2000) 947-951.
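
The hybrid behaviour the abstract describes can be imitated with a tiny rate-model simulation: recurrent self-excitation plus shared inhibition silences weakly driven units (digital selection), while the surviving units settle to rates that still reflect their inputs (analog amplification). Parameters below are illustrative, not taken from the paper:

    def select_and_amplify(inputs, alpha=0.5, beta=0.5, steps=50):
        """Discrete-time rate network with self-excitation alpha and
        global inhibition beta; rates are rectified at zero."""
        x = inputs[:]
        for _ in range(steps):
            inhibition = beta * sum(x)
            x = [max(0.0, b + alpha * xi - inhibition)
                 for b, xi in zip(inputs, x)]
        return [round(v, 2) for v in x]

    # The two strongest inputs are selected; the rest are silenced.
    print(select_and_amplify([1.0, 0.3, 0.2, 0.9]))  # [0.73, 0.0, 0.0, 0.53]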


The Most Extensive Computation Undertaken in Biology to Date June 26, 2000

On June 26, 2000 "Celera Genomics [Rockville, Maryland] announced the first complete assembly of the human genome. Using whole genome shotgun sequencing, Celera began sequencing in September 1999 and finished in December. Assembly of the 3.12 billion base pairs of DNA, over the next six months, required some 500 million trillion sequence comparisons, and represented the most extensive computation ever undertaken in biology.

"The Human Genome Project reported it had finished a “working draft” of the genome, stating that the project had fully sequenced 85 percent of the genome. Five major institutions in the United States and Great Britain performed the bulk of sequencing, together with contributions from institutes in China, France, and Germany” (Genome News Network, Genetics and Genomics Timeline 2000, accessed 05-24-2009).


IBM Forms a Life Sciences Division August 2000

Reflective of rapid advancements in computational biology and genomics, in August 2000 IBM formed a Life Sciences Solutions division, incorporating its Computational Biology Center.


Publication of the Human Genome Sequence February 15 – February 16, 2001

"Seven months after the ceremony at the White House marking the completion of the human genome sequence, highlights from two draft sequences and analyses of the data were published in Science and Nature. Scientists at Celera Genomics and the publicly funded Human Genome Project independently found that humans have approximately 30,000 genes that carry within them the instructions for making the body's diverse collection of proteins.

"The findings cast new doubt on the old paradigm that one gene makes one protein. Rather, it appears that one gene can direct the synthesis of many proteins through mechanisms that include 'alternative splicing.' "It seems to be a matter of five or six proteins, on average, from one gene," said Victor A. McKusick of the Johns Hopkins University School of Medicine, who was a co-author of the Science paper.

"The finding that one gene makes many proteins suggests that biomedical research in the future will rely heavily on an integration of genomics and proteomics, the word coined to describe the study of proteins and their biological interactions. Proteins are markers of the early onset of disease, and are vital to prognosis and treatment; most drugs and other therapeutic agents target proteins. A detailed understanding of proteins and the genes from which they come is the next frontier.

"One of the questions raised by the sequencing of the human genome is this: Whose genome is it anyway? The answer turns out to be that it doesn't really matter. As scientists have long suspected, human beings are all very much alike when it comes to our genes. The paper in Science reported that the DNA of human beings is 99.9 percent alike—a powerful statement about the relatedness of all humankind" (Genome News Network, Genetics and Genomics Timeline 2001, accessed 05-24-2009).

References:

Venter, J.C. et al. "The sequence of the human genome," Science 291, 1304-1351 (February 16, 2001).

Lander, E.S. et al. (International Human Genome Sequencing Consortium), "Initial sequencing and analysis of the human genome," Nature 409, 860-921 (February 15, 2001).

"An initial rough draft of the human genome was available in June 2000, and by February 2001 a working draft had been completed and published, followed by the final sequencing mapping of the human genome on April 14, 2003. Although this was reported to be 99% of the human genome with 99.99% accuracy, a major quality assessment of the human genome sequence was published on May 27, 2004 indicating over 92% of sampling exceeded 99.99% accuracy, which is within the intended goal. Further analyses and papers on the HGP continue to occur" (Wikipedia article on Human Genome Project, accessed 01-09-2013).


2005 – 2010

Attempting to Use an Ink-Jet Printer to Print Living Tissue. . . . 2005

In 2005 The National Science Foundation funded research headed by Gabor Forgacs at the University of Missouri-Columbia on what was called "Organ Printing," to "further advance our understanding of self-assembly during the organization of cells and tissues into functional organ modules."

From ABC News 2-10-2006:

"In what could be the first step toward human immortality, scientists say they've found a way to do all of these things and more with the use of a technology found in many American homes: an ink-jet printer.

"Researchers around the world say that by using the technology, they can actually 'print' living human tissue and one day will be able to print entire organs.

" 'The promise of tissue engineering and the promise of 'organ printing' is very clear: We want to print living, three-dimensional human organs,' Dr. Vladimir Mironov said. 'That's our goal, and that's our mission.' "

"Though the field is young, it already has a multitude of names.

" 'Some people call this 'bio-printing.' Some people call this 'organ printing.' Some people call this 'computer-aided tissue engineering.' Some people call this 'bio-manufacturing,' said Mironov, associate professor at the Medical University of South Carolina and one of the leading researchers in the field."


The Genetic Code of the 1918 Flu Virus H1N1 is Deciphered October 5, 2005

On October 5, 2005 scientists at the Armed Forces Institute of Pathology announced that they had deciphered the genetic code of the 1918 influenza virus, an avian-derived H1N1 strain that killed as many as 50,000,000 people worldwide, using tissue from a victim exhumed in 1997 from the Alaskan permafrost. The scientists reconstructed the virus in the laboratory and published the genetic sequence.


Using Currency Movements to Predict the Spread of Infectious Disease January 26, 2006

On January 26, 2006 Dirk Brockmann, a theoretical physicist and computational epidemiologist at Northwestern University in Evanston, Illinois, L. Hufnagel, and T. Geisel published "The scaling laws of human travel," Nature 439 (2006) 462-65. 

Using statistical data from the American currency tracking website, Where's George?, the paper described statistical laws of human travel in the United States, and developed a mathematical model of the spread of infectious disease.

[By January 31, 2009, Where's George? had tracked over 149 million bills totaling more than $810 million (Wikipedia).]
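
The paper's central empirical finding was that the distance r between successive sightings of a bill follows a power law close to P(r) ∝ r^(-1.6). Jump lengths with such a heavy tail can be produced by inverse-transform sampling, the dispersal ingredient of a Lévy-flight-style epidemic model (a sketch, not the authors' code):

    import random

    def levy_jump(mu=1.6, r_min=1.0):
        """Sample a jump length from p(r) proportional to r**(-mu), r >= r_min."""
        u = random.random()
        return r_min * (1 - u) ** (-1.0 / (mu - 1))

    random.seed(1)
    print([round(levy_jump(), 1) for _ in range(6)])
    # mostly short hops, with occasional very long leaps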


Molecular Animation July 30 – August 3, 2006

At SIGGRAPH 2006, held in Boston, Massachusetts from July 30 to August 3, 2006, BioVisions, a scientific visualization program at Harvard’s Department of Molecular and Cellular Biology, and Xvivo, a Connecticut-based scientific animation company, introduced the three-minute molecular animation video, The Inner Life of the Cell.

The film depicted marauding white blood cells attacking infections in the body. 



Data-Storing Bacteria Could Last Thousands of Years February 27, 2007

A technology developed at Keio University, Tokyo, Japan, and announced on February 27, 2007, carried with it the possibility that bacterial DNA could be used as a medium for storing digital information long-term, potentially for thousands of years.

"Keio University Institute for Advanced Biosciences and Keio University Shonan Fujisawa Campus announced the development of the new technology, which creates an artificial DNA that carries more than 100 bits of data within the genome sequence, according to the JCN Newswire. The universities said they successfully encoded "e= mc2 1905!" -- Einstein's theory of relativity and the year he enunciated it -- on the common soil bacteria, Bacillus subtilis."

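The generic principle, packing two bits into each nucleotide, is easy to sketch; the actual Keio encoding and its error-tolerant redundancy scheme differ:

    # Illustrative two-bits-per-base code (not the Keio encoding).
    B2N = {"00": "A", "01": "C", "10": "G", "11": "T"}
    N2B = {v: k for k, v in B2N.items()}

    def encode(text):
        bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
        return "".join(B2N[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def decode(dna):
        bits = "".join(N2B[base] for base in dna)
        return bytes(int(bits[i:i + 8], 2)
                     for i in range(0, len(bits), 8)).decode("ascii")

    dna = encode("e= mc2 1905!")
    print(len(dna), "nt:", dna[:12] + "...")
    assert decode(dna) == "e= mc2 1905!"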

Watson's Genome is Sequenced May 31, 2007

On May 31, 2007 the genome of James D. Watson, co-discoverer of the double-helical structure of DNA, was sequenced and presented to Watson. It was the second individual human genome to be sequenced; the first was that of J. Craig Venter, whose DNA had served as the primary source for the Celera Genomics human genome sequence, the first working draft of which was completed and published in February 2001.


Discovery of a Set of Mutations that Might Have Caused a Cancer November 6, 2008

On November 6, 2008 Timothy J. Ley and numerous collaborators from different countries published in the journal Nature "DNA sequencing of a cytogenetically normal acute myeloid leukaemia genome". This was the first time that researchers decoded all the genes of a person with cancer and found a set of mutations that might have caused the disease or aided its progression. The New York Times online reported:

"Using cells donated by a woman in her 50s who died of leukemia, the scientists sequenced all the DNA from her cancer cells and compared it to the DNA from her own normal, healthy skin cells. Then they zeroed in on 10 mutations that occurred only in the cancer cells, apparently spurring abnormal growth, preventing the cells from suppressing that growth and enabling them to fight off chemotherapy.

"The findings will not help patients immediately, but researchers say they could lead to new therapies and would almost certainly help doctors make better choices among existing treatments, based on a more detailed genetic picture of each patient’s cancer. Though the research involved leukemia, the same techniques can also be used to study other cancers."

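At its core the analysis is a subtraction: variants present in the tumour genome but absent from the same patient's normal tissue are candidate somatic mutations. A toy Python version with invented variant calls (real pipelines work from aligned short reads and must model sequencing error):

    # (position, observed base) pairs called against a reference.
    normal = {(101, "A"), (205, "T"), (307, "G")}
    tumour = {(101, "A"), (205, "C"), (307, "G"), (412, "T")}

    somatic = tumour - normal  # present only in the cancer cells
    print(sorted(somatic))     # [(205, 'C'), (412, 'T')]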

Analysis of Web Search Queries Tracks the Spread of Flu Faster than Traditional Surveillance Methods November 11, 2008

On November 11, 2008 Google.org unveiled Google Flu Trends, using aggregated Google search data to estimate flu activity up to two weeks faster than traditional flu surveillance systems.

"Each week, millions of users around the world search for online health information. As you might expect, there are more flu-related searches during flu season, more allergy-related searches during allergy season, and more sunburn-related searches during the summer. You can explore all of these phenomena using Google Trends. But can search query trends provide an accurate, reliable model of real-world phenomena?

"We have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms. Of course, not every person who searches for "flu" is actually sick, but a pattern emerges when all the flu-related search queries from each state and region are added together. We compared our query counts with data from a surveillance system managed by the U.S. Centers for Disease Control and Prevention (CDC) and discovered that some search queries tend to be popular exactly when flu season is happening. By counting how often we see these search queries, we can estimate how much flu is circulating in various regions of the United States.

"During the 2007-2008 flu season, an early version of Google Flu Trends was used to share results each week with the Epidemiology and Prevention Branch of the Influenza Division at CDC. Across each of the nine surveillance regions of the United States, we were able to accurately estimate current flu levels one to two weeks faster than published CDC reports" (Google Flu Trends website).

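The modelling step can be suggested by a toy regression of reported illness rates on the share of flu-related queries. The published system fitted a linear model on log-odds across many candidate queries, so the plain least-squares fit and the numbers below are only illustrative:

    query_share = [0.2, 0.4, 0.6, 0.8]  # invented: % of searches flu-related
    cdc_ili     = [1.1, 1.9, 3.1, 3.9]  # invented: % visits for flu-like illness

    n = len(query_share)
    mx, my = sum(query_share) / n, sum(cdc_ili) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(query_share, cdc_ili))
             / sum((x - mx) ** 2 for x in query_share))
    intercept = my - slope * mx

    # Estimate current flu activity from this week's queries, ahead of the
    # CDC's published surveillance report.
    this_week = 0.7
    print(f"estimated ILI: {intercept + slope * this_week:.2f}%")  # 3.46%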

The First iPhone and iPad Apps for the Visually Impaired 2009 – 2010

Because of the convenience of carrying smart phones, it was probably inevitable that their features would be applied to support the visually impaired. iBlink Radio, introduced in July 2010 by Serotek Corporation of Minneapolis, Minnesota, calls itself the first iOS application for the visually impaired. It provides access to radio stations, podcasts and reading services of special interest to blind and visually impaired persons, as well as their friends, family, caregivers and those wanting to know what life is like without eyesight.

SayText, also introduced in 2010 by Haave, Inc. of Vantaa, Finland, reads out loud text that is photographed by a cell phone camera.

VisionHunt, by VI Scientific of Nicosia, Cyprus, introduced in 2009, is a vision aid tool for the blind and the visually impaired that uses the phone’s camera to detect colors, paper money and light sources. VisionHunt identifies about 30 colors. It also detects 1, 5, 10, 20, 50 US Dollar bills. Finally, VisionHunt detects sources of light, such as switched-on lamps or televisions. VisionHunt is fully accessible to the blind and the visually impaired through VoiceOver or Zoom.

Numerous other apps for the visually impaired were introduced after the above three.


Robot Scientist becomes the First Machine to Discover New Scientific Knowledge April 3, 2009

Ross D. King, Jem Rowland and 11 co-authors from the Department of Computer Science at Aberystwyth University, Wales, and the University of Cambridge published "The Automation of Science," Science 324, no. 5923 (3 April 2009) 85-89, doi:10.1126/science.1165620. In this paper they described a Robot Scientist which the researchers believed was the first machine to have independently discovered new scientific knowledge. The robot, called Adam, was a computer system that fully automated the scientific process.

"Prof Ross King, who led the research at Aberystwyth University, said: 'Ultimately we hope to have teams of human and robot scientists working together in laboratories'. The scientists at Aberystwyth University and the University of Cambridge designed Adam to carry out each stage of the scientific process automatically without the need for further human intervention. The robot has discovered simple but new scientific knowledge about the genomics of the baker's yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems. The researchers have used separate manual experiments to confirm that Adam's hypotheses were both novel and correct" (http://www.eurekalert.org/pub_releases/2009-04/babs-rsb032709.php).

"The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist "Adam," which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam's conclusions through manual experiments. To describe Adam's research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge" (Abstract in Science).


Using Air Traffic and Currency Tracking Data in Epidemiology May 3, 2009

In May 2009 Dirk Brockmann and the epidemic modeling team at the Northwestern Institute on Complex Systems used air traffic and commuter traffic patterns for the entire country, together with data from the American currency tracking website Where’s George?, to predict the spread of the H1N1 flu or "swine flu" across the United States.


The Cost of Decoding a Human Genome Drops to $50,000 August 10, 2009

In August 2009 it was announced that bioengineer Stephen R. Quake of Stanford University had invented a new technology for decoding DNA that could sequence a human genome at a cost of $50,000.

"Dr. Quake’s machine, the Heliscope Single Molecule Sequencer, can decode or sequence a human genome in four weeks with a staff of three people. The machine is made by a company he founded, Helicos Biosciences, and costs 'about $1 million, depending on how hard you bargain,' he said.

"Only seven human genomes have been fully sequenced. They are those of J. Craig Venter, a pioneer of DNA decoding; James D. Watson, the co-discoverer of the DNA double helix; two Koreans; a Chinese; a Yoruban; and a leukemia victim. Dr. Quake’s seems to be the eighth full genome, not counting the mosaic of individuals whose genomes were deciphered in the Human Genome Project."

"For many years DNA was sequenced by a method that was developed by Frederick Sanger in 1975 and used to sequence the first human genome in 2003, at a probable cost of at least $500 million. A handful of next-generation sequencing technologies are now being developed and constantly improved each year. Dr. Quake’s technology is a new entry in that horse race.

"Dr. Quake calculates that the most recently sequenced human genome cost $250,000 to decode, and that his machine brings the cost to less than a fifth of that.

“ 'There are four commercial technologies, nothing is static and all the platforms are improving by a factor of two each year,' he said. 'We are about to see the floodgates opened and many human genomes sequenced.'

"He said the much-discussed goal of the $1,000 genome could be attained in two or three years. That is the cost, experts have long predicted, at which genome sequencing could start to become a routine part of medical practice" (Nicholas Wade, NY Times, http://www.nytimes.com/2009/08/11/science/11gene.html?8dpc).


2010 – 2012

"The Data-Driven Life" April 20, 2010

On April 20, 2010 writer Gary Wolf published "The Data-Driven Life" in The New York Times Magazine:

". . . . Another person I’m friendly with, Mark Carranza — he also makes his living with computers — has been keeping a detailed, searchable archive of all the ideas he has had since he was 21. That was in 1984. I realize that this seems impossible. But I have seen his archive, with its million plus entries, and observed him using it. He navigates smoothly between an interaction with somebody in the present moment and his digital record, bringing in associations to conversations that took place years earlier. Most thoughts are tagged with date, time and location. What for other people is an inchoate flow of mental life is broken up into elements and cross-referenced.  

"These men all know that their behavior is abnormal. They are outliers. Geeks. But why does what they are doing seem so strange? In other contexts, it is normal to seek data. A fetish for numbers is the defining trait of the modern manager. Corporate executives facing down hostile shareholders load their pockets full of numbers. So do politicians on the hustings, doctors counseling patients and fans abusing their local sports franchise on talk radio. Charles Dickens was already making fun of this obsession in 1854, with his sketch of the fact-mad schoolmaster Gradgrind, who blasted his students with memorized trivia. But Dickens’s great caricature only proved the durability of the type. For another century and a half, it got worse.

"Or, by another standard, you could say it got better. We tolerate the pathologies of quantification — a dry, abstract, mechanical type of knowledge — because the results are so powerful. Numbering things allows tests, comparisons, experiments. Numbers make problems less resonant emotionally but more tractable intellectually. In science, in business and in the more reasonable sectors of government, numbers have won fair and square. For a long time, only one area of human activity appeared to be immune. In the cozy confines of personal life, we rarely used the power of numbers. The techniques of analysis that had proved so effective were left behind at the office at the end of the day and picked up again the next morning. The imposition, on oneself or one’s family, of a regime of objective record keeping seemed ridiculous. A journal was respectable. A spreadsheet was creepy.  

"And yet, almost imperceptibly, numbers are infiltrating the last redoubts of the personal. Sleep, exercise, sex, food, mood, location, alertness, productivity, even spiritual well-being are being tracked and measured, shared and displayed. On MedHelp, one of the largest Internet forums for health information, more than 30,000 new personal tracking projects are started by users every month. Foursquare, a geo-tracking application with about one million users, keeps a running tally of how many times players “check in” at every locale, automatically building a detailed diary of movements and habits; many users publish these data widely. Nintendo’s Wii Fit, a device that allows players to stand on a platform, play physical games, measure their body weight and compare their stats, has sold more than 28 million units.  

"Two years ago, as I noticed that the daily habits of millions of people were starting to edge uncannily close to the experiments of the most extreme experimenters, I started a Web site called the Quantified Self with my colleague Kevin Kelly. We began holding regular meetings for people running interesting personal data projects. I had recently written a long article about a trend among Silicon Valley types who time their days in increments as small as two minutes, and I suspected that the self-tracking explosion was simply the logical outcome of this obsession with efficiency. We use numbers when we want to tune up a car, analyze a chemical reaction, predict the outcome of an election. We use numbers to optimize an assembly line. Why not use numbers on ourselves?  

"But I soon realized that an emphasis on efficiency missed something important. Efficiency implies rapid progress toward a known goal. For many self-trackers, the goal is unknown. Although they may take up tracking with a specific question in mind, they continue because they believe their numbers hold secrets that they can’t afford to ignore, including answers to questions they have not yet thought to ask.

"Ubiquitous self-tracking is a dream of engineers. For all their expertise at figuring out how things work, technical people are often painfully aware how much of human behavior is a mystery. People do things for unfathomable reasons. They are opaque even to themselves. A hundred years ago, a bold researcher fascinated by the riddle of human personality might have grabbed onto new psychoanalytic concepts like repression and the unconscious. These ideas were invented by people who loved language. Even as therapeutic concepts of the self spread widely in simplified, easily accessible form, they retained something of the prolix, literary humanism of their inventors. From the languor of the analyst’s couch to the chatty inquisitiveness of a self-help questionnaire, the dominant forms of self-exploration assume that the road to knowledge lies through words. Trackers are exploring an alternate route. Instead of interrogating their inner worlds through talking and writing, they are using numbers. They are constructing a quantified self.  

"UNTIL A FEW YEARS ago it would have been pointless to seek self-knowledge through numbers. Although sociologists could survey us in aggregate, and laboratory psychologists could do clever experiments with volunteer subjects, the real way we ate, played, talked and loved left only the faintest measurable trace. Our only method of tracking ourselves was to notice what we were doing and write it down. But even this written record couldn’t be analyzed objectively without laborious processing and analysis.  "Then four things changed. First, electronic sensors got smaller and better. Second, people started carrying powerful computing devices, typically disguised as mobile phones. Third, social media made it seem normal to share everything. And fourth, we began to get an inkling of the rise of a global superintelligence known as the cloud.

"Millions of us track ourselves all the time. We step on a scale and record our weight. We balance a checkbook. We count calories. But when the familiar pen-and-paper methods of self-analysis are enhanced by sensors that monitor our behavior automatically, the process of self-tracking becomes both more alluring and more meaningful. Automated sensors do more than give us facts; they also remind us that our ordinary behavior contains obscure quantitative signals that can be used to inform our behavior, once we learn to read them."

". . . . Adler’s idea that we can — and should — defend ourselves against the imposed generalities of official knowledge is typical of pioneering self-trackers, and it shows how closely the dream of a quantified self resembles therapeutic ideas of self-actualization, even as its methods are startlingly different. Trackers focused on their health want to ensure that their medical practitioners don’t miss the particulars of their condition; trackers who record their mental states are often trying to find their own way to personal fulfillment amid the seductions of marketing and the errors of common opinion; fitness trackers are trying to tune their training regimes to their own body types and competitive goals, but they are also looking to understand their strengths and weaknesses, to uncover potential they didn’t know they had. Self-tracking, in this way, is not really a tool of optimization but of discovery, and if tracking regimes that we would once have thought bizarre are becoming normal, one of the most interesting effects may be to make us re-evaluate what “normal” means" (http://www.nytimes.com/2010/05/02/magazine/02self-measurement-t.html?pagewanted=7&ref=magazine, accessed 05-07-2010).


The First MRI Video of Childbirth November 2010 – June 2012

In November 2010 the first video of a woman giving birth in an open MRI machine was taken at the Charité Hospital in Berlin, Germany. The team, led by Christian Bamberg, M.D., first published the results as "Human birth observed in real-time open magnetic resonance imaging" in the American Journal of Obstetrics & Gynecology in January 2012. Supplementary material, including the video of the final 45 minutes of labor, was published in Vol. 206, pp. 505.e1-505.e6, June 2012.


Construction of the Francis Crick Institute Begins July 2011

In July 2011 construction began for The Francis Crick Institute (formerly the UK Centre for Medical Research and Innovation), a biomedical research center in London. The Institute is a partnership between Cancer Research UK, Imperial College London, King's College London, the Medical Research Council, University College London (UCL) and the Wellcome Trust. It will be the largest center for biomedical research and innovation in Europe.

The Francis Crick Institute, named after British molecular biologist, biophysicist, and neuroscientist Francis Crick, will be located in a new state-of-the-art 79,000-square-meter facility next to St Pancras railway station in the Camden area of Central London. It is expected that researchers will be able to start work in 2015. The complete cost of the facility is budgeted at approximately £600 million. The institute is expected to employ 1,500 people, including 1,250 scientists, with an annual budget of over £100 million.


Toward Cognitive Computing Systems August 18, 2011

On August 18, 2011 "IBM researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition. The technology could yield many orders of magnitude less power consumption and space than used in today’s computers. 

"In a sharp departure from traditional concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. Its first two prototype chips have already been fabricated and are currently undergoing testing.  

"Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brains structural and synaptic plasticity.  

"To do this, IBM is combining principles from nanoscience, neuroscience and supercomputing as part of a multi-year cognitive computing initiative. The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

"The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment – all while rivaling the brain’s compact size and low power usage. The IBM team has already successfully completed Phases 0 and 1.  

" 'This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century,' said Dharmendra Modha, project leader for IBM Research. 'Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture. These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government.' " (http://www-03.ibm.com/press/us/en/pressrelease/35251.wss, accessed 08-21-2011).


The First Commercial Application of the IBM Watson Question Answering System: Medical Diagnostics September 12, 2011

Health care insurance provider WellPoint, Inc. and IBM announced an agreement to create the first commercial applications of the IBM Watson question answering system. Under the agreement, WellPoint would develop and launch Watson-based solutions to help improve patient care through the delivery of up-to-date, evidence-based health care for millions of Americans, while IBM would develop the Watson healthcare technology on which WellPoint's solutions would run.


A Silicon Chip that Mimics How the Brain's Synapses Change in Response to New Information November 2011

In November 2011, a group of MIT researchers created the first computer chip that mimics how the brain's neurons adapt in response to new information. This adaptation, known as plasticity, depends on analog, ion-based communication across the synapse between two neurons. With about 400 transistors, the silicon chip can simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other.

"There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. A synapse is the gap between two neurons (known as the presynaptic and postsynaptic neurons). The presynaptic neuron releases neurotransmitters, such as glutamate and GABA, which bind to receptors on the postsynaptic cell membrane, activating ion channels. Opening and closing those channels changes the cell’s electrical potential. If the potential changes dramatically enough, the cell fires an electrical impulse called an action potential.

"All of this synaptic activity depends on the ion channels, which control the flow of charged atoms such as sodium, potassium and calcium. Those channels are also key to two processes known as long-term potentiation (LTP) and long-term depression (LTD), which strengthen and weaken synapses, respectively. "

"The MIT researchers designed their computer chip so that the transistors could mimic the activity of different ion channels. While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell. 

" 'We can tweak the parameters of the circuit to match specific ion channels,' Poon says. 'We now have a way to capture each and every ionic process that’s going on in a neuron.'

"Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says" (http://www.mit.edu/newsoffice/2011/brain-chip-1115.html, accessed 01-01-2014).

Rachmuth, G., Shouval, H., Bear, M., Poon, C., "A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity," Proceedings of the National Academy of Sciences 108, no. 49 (December 6, 2011): E1266-E1274, doi: 10.1073/pnas.1106161108.
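The chip itself is analog circuitry, but the LTP/LTD behavior described above is commonly summarized in software by a pair-based spike-timing-dependent plasticity (STDP) rule. The following Python sketch, with invented constants, illustrates that rule rather than the MIT circuit:

    import math

    # Pair-based STDP: a synapse is strengthened (LTP) when the presynaptic
    # spike precedes the postsynaptic one, and weakened (LTD) otherwise.
    # Constants are illustrative, not taken from the MIT chip or paper.
    A_PLUS, A_MINUS = 0.01, 0.012        # LTP/LTD amplitudes
    TAU_PLUS, TAU_MINUS = 0.020, 0.020   # time constants in seconds

    def stdp_dw(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in seconds)."""
        dt = t_post - t_pre
        if dt > 0:    # pre before post: potentiation, decaying with the gap
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        else:         # post before pre: depression
            return -A_MINUS * math.exp(dt / TAU_MINUS)

    print(stdp_dw(0.010, 0.015))   # small positive change (LTP)
    print(stdp_dw(0.015, 0.010))   # small negative change (LTD)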


The Cost of Sequencing a Human Genome Drops to $10,500 November 30, 2011

"The cost of sequencing a human genome — all three billion bases of DNA in a set of human chromosomes — plunged to $10,500 last July from $8.9 million in July 2007, according to the National Human Genome Research Institute.  

"That is a decline by a factor of more than 800 over four years. By contrast, computing costs would have dropped by perhaps a factor of four in that time span.  

"The lower cost, along with increasing speed, has led to a huge increase in how much sequencing data is being produced. World capacity is now 13 quadrillion DNA bases a year, an amount that would fill a stack of DVDs two miles high, according to Michael Schatz, assistant professor of quantitative biology at the Cold Spring Harbor Laboratory on Long Island.

"There will probably be 30,000 human genomes sequenced by the end of this year, up from a handful a few years ago, according to the journal Nature. And that number will rise to millions in a few years" (http://www.nytimes.com/2011/12/01/business/dna-sequencing-caught-in-deluge-of-data.html?_r=1&hp, accessed 12-02-2011).


IBM's Watson Question Answering System to Team with Cedars-Sinai Oschin Comprehensive Cancer Institute December 16, 2011

Health insurance provider WellPoint announced that the Cedars-Sinai Samuel Oschin Comprehensive Cancer Institute in Los Angeles would provide clinical expertise to help shape WellPoint's new health care solutions utilizing IBM's Watson question answering system.

"It is estimated that new clinical research and medical information doubles every five years, and nowhere is this knowledge advancing more quickly than in the complex area of cancer care.  

"WellPoint believes oncology is one of the medical fields that could greatly benefit from this technology, given IBM Watson's ability to respond to inquiries posed in natural language and to learn from the responses it generates. The WellPoint health care solutions will draw from vast libraries of information including medical evidence-based scientific and health care data, and clinical insights from institutions like Cedars-Sinai. The goal is to assist physicians in evaluating evidence-based treatment options that can be delivered to the physician in a matter of seconds for assessment. WellPoint and Cedars-Sinai envision that this valuable enhancement to the decision-making process could empower physician-patient discussions about the best and most effective courses of treatment and improve the overall quality of patient care.  

"Cedars-Sinai was selected as WellPoint's partner based on its reputation as one of the nation's premier cancer institutions and its proven results in the diagnosis and treatment of complex cancers. Cedars-Sinai has experience and demonstrated success in working with technology innovators and shares WellPoint's commitment to improving the quality, efficiency and effectiveness of health care through innovation and technology.  

"Cedars-Sinai's oncology experts will help develop recommendations on appropriate clinical content for the WellPoint health care solutions. They will also assist in the evaluation and testing of the specific tools that WellPoint plans to develop for the oncology field utilizing IBM's Watson technology. The Cedars-Sinai cancer experts will enter hypothetical patient scenarios, evaluate the proposed treatment options generated by IBM Watson, and provide guidance on how to improve the content and utility of the treatment options provided to the physicians.  

"Leading Cedars-Sinai's efforts is M. William Audeh, M.D., medical director of its Samuel Oschin Comprehensive Cancer Institute. Dr. Audeh will work closely with WellPoint's clinical experts to provide advice on how the solutions may be best utilized in clinical practice to support increased understanding of the evolving body of knowledge in cancer, including emerging therapies not widely known by community physicians. As the solutions are developed, Dr. Audeh will also provide guidance on how the make the WellPoint offering useful and practical for physicians and patients.

" 'As we design the WellPoint systems that leverage IBM Watson's capabilities, it is essential that we incorporate the highly-specialized knowledge and real-life practice experiences of the nation's premier clinical experts,' said Harlan Levine, MD, executive vice president of WellPoint's Comprehensive Health Solutions. 'The contributions from Dr. Audeh, coupled with the expertise throughout Cedars-Sinai's Samuel Oschin Comprehensive Cancer Institute, will be invaluable to implementing this WellPoint offering and could ultimately benefit millions of Americans across the country.'

"WellPoint anticipates deploying their first offering next year, working with select physician groups in clinical pilots" (http://ir.wellpoint.com/phoenix.zhtml?c=130104&p=irol-newsArticle&ID=1640553&highlight=, accessed 12-17-2011).


2012 – 2016

The Cost of Sequencing a Human Genome Drops to $1000 January 10, 2012

On January 10, 2012 Jonathan M. Rothberg, CEO of Guilford, Connecticut-based biotech company Ion Torrent, announced a new tabletop sequencer called the Ion Proton. The company introduced the device at the Consumer Electronics Show in Las Vegas. At $149,000, the new machine was about three times the price of the Personal Genome Machine, the sequencer that the company had debuted about a year earlier. But the DNA-reading chip inside it was 1,000 times more powerful, according to Rothberg, allowing the device to sequence an entire human genome in a day for $1,000—a price the biotech industry had been working toward for years because it would bring the cost down to the level of a medical test.

" 'The technology got better faster than we ever imagined,' Rothberg says. 'We made a lot of progress on the chemistry and software, then developed a new series of chips from a new foundry.' The result is a technology progression that has moved faster than Moore's law, which predicts that microchips will double in power roughly every two years.

"Ion Torrent's semiconductor-based approach for sequencing DNA is unique. Currently, optics-based sequencers, primarily from Illumina, a San Diego-based company, dominate the human genomics field. But, while the optics-based sequencers are generally considered more accurate, these machines cost upwards of $500,000, putting them out of reach for most clinicians. Meanwhile, at Ion Torrent's price, "you can imagine one in every doctor's office," says Richard Gibbs, director of Baylor College of Medicine's human genome sequencing center in Houston, which will be among the first research centers to receive a Proton sequencer.  

"The new Ion Torrent sequencer will also allow researchers to buy a chip that sequences only exons, the regions of the genome that encode proteins. Exons only account for about 5 percent of the human genome, according to the National Human Genome Research Institute, but they are where most disease-causing mutations occur, making so-called exome sequencing a faster and potentially cheaper option for many researchers. Although it's the same price as the genome chip, the Ion Torrent exome chip can sequence two exomes at a time, bringing the per-sequence cost down to $500.  

" 'Some researchers want to sequence single genes, others want to do exomes, and others—for example, cancer researchers—will want to sequence whole genomes, so all three are going to coexist,' says Rothberg. 'It's about finding the right tool for the problem.'  

"Whether Ion Torrent's new technology will be enough to make it the dominant supplier of these tools remains to be seen. A day after the company debuted the Proton sequencer, Illumina also announced that it, too, had reached the $1,000 genome milestone" (http://www.technologyreview.com/biomedicine/39458/?nlid=nldly&nld=2012-01-13, accessed 01-13-2013).


The First Book Stored in DNA and then Read August 16, 2012

American molecular geneticist George M. Church, director of the U.S. Department of Energy Center on Bioenergy at Harvard & MIT and director of the National Institutes of Health (NHGRI) Center of Excellence in Genomic Science at Harvard, Yuan Gao from the Department of Biomedical Engineering, Johns Hopkins University, and Sriram Kosuri from the Wyss Institute for Biologically Inspired Engineering encoded an entire book into the genetic molecules of DNA, the basic building blocks of life, and then accurately read back the text. Church's book, Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves, stored in a laboratory tube, contained 53,426 words, 11 illustrations and a JavaScript program, all of which totaled 5.27 megabits of data. Written with Ed Regis, it was scheduled to be published in printed and electronic editions in October 2012. Church's book was 600 times larger than the largest data set previously encoded in DNA.

"Digital data is traditionally stored as binary code: ones and zeros. Although DNA offers the ability to use four "numbers": A, C, G and T, to minimise errors Church's team decided to stick with binary encoding, with A and C both indicating zero, and G and T representing one.  

"The sequence of the artificial DNA was built up letter by letter using existing methods with the string of As, Cs, Ts and Gs coding for the letters of the book.  

"The team developed a system in which an inkjet printer embeds short fragments of that artificially synthesised DNA onto a glass chip. Each DNA fragment also contains a digital address code that denotes its location within the original file.  

"The fragments on the chip can later be "read" using standard techniques of the sort used to decipher the sequence of ancient DNA found in archeological material. A computer can then reassemble the original file in the right order using the address codes.  

"The book – an HTML draft of a volume co-authored by the team leader – was written to the DNA with images embedded to demonstrate the storage medium's versatility.  

"DNA is such a dense storage system because it is three-dimensional. Other advanced storage media, including experimental ones such as positioning individual atoms on a surface, are essentially confined to two dimensions" (http://www.guardian.co.uk/science/2012/aug/16/book-written-dna-code?INTCMP=SRCH, accessed 09-09-2012).

Church, Gao, Kosuri, "Next-Generation Digital Information Storage in DNA," Science, August 16, 2012, doi: 10.1126/science.1226355.
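The quoted description amounts to a simple scheme: one bit per base, with an address header on each fragment so the file can be reassembled in order. Here is a toy Python sketch of that idea; the fragment and address sizes are chosen for illustration, and the primer sequences and error handling of the actual paper are omitted:

    import random

    # One bit per base, as described above: A or C encodes 0, G or T encodes 1.
    ZERO, ONE = "AC", "GT"

    def encode_bits(bits):
        """Map a bit string to DNA, choosing A/C or G/T arbitrarily per bit."""
        return "".join(random.choice(ONE if b == "1" else ZERO) for b in bits)

    def decode_bases(seq):
        return "".join("1" if base in ONE else "0" for base in seq)

    def to_fragments(bits, payload=96, addr_bits=19):
        """Split a bit stream into addressed fragments (sizes illustrative)."""
        frags = []
        for i in range(0, len(bits), payload):
            address = format(i // payload, f"0{addr_bits}b")
            frags.append(encode_bits(address + bits[i:i + payload]))
        return frags

    data = "0110100001101001"   # the ASCII bits of "hi"
    frags = to_fragments(data)
    # Reassembly: sort fragments by their decoded 19-bit address, then strip it.
    recovered = "".join(decode_bases(f)[19:]
                        for f in sorted(frags, key=lambda f: decode_bases(f)[:19]))
    assert recovered == data

Letting each bit map to either of two bases, rather than packing two bits per base, is the error-minimizing choice the team describes: it avoids long runs of a single nucleotide, which are hard to synthesize and sequence accurately.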

♦ When the physical book edition of the Church and Regis book was published by Basic Books in October 2012 I acquired a copy. On pp. 269-272 the printed book contained an unusual "afterword," apparently written by Church, entitled "Notes: On Encoding This Book into DNA." This discussed "some of the legal, policy, biosafety, and other issues and opportunities" pertaining to the process. The ideas discussed were so distinctive and original that I would have liked to quote it in its entirety, but that would have been an infringement of copyright. The section ended with the following statement:

"For more information, and to explore the possibility of getting your own DNA copy of this book, please visit http://periodicplayground.com."  

When I visited the site on October 20, 2012 I viewed a message from networksolutions.com that the site was "under construction."


The Human Genome is Packed with At Least 4,000,000 Gene Switches September 6, 2012

On September 6, 2012 ENCODE, the Encyclopedia Of DNA Elements, a project of the National Human Genome Research Institute (NHGRI) of the National Institutes of Health involving 442 scientists from 32 laboratories around the world, published six papers in the journal Nature and 24 papers in Genome Research and Genome Biology.

Among the overall results of the project to date was the monumental conclusion that:

"The human genome is packed with at least four million gene switches that reside in bits of DNA that once were dismissed as “junk” but that turn out to play critical roles in controlling how cells, organs and other tissues behave. The discovery, considered a major medical and scientific breakthrough, has enormous implications for human health because many complex diseases appear to be caused by tiny changes in hundreds of gene switches" (http://www.nytimes.com/2012/09/06/science/far-from-junk-dna-dark-matter-proves-crucial-to-health.html?pagewanted=all, accessed 09-09-2012).


The FDA Approves the First Medical Robot for Hospital Use January 26, 2013

"A robot that allows patients to communicate with doctors via a telemedicine system that can move around on its own has just received 510(k) clearance by the FDA (Food and Drug Administration).  

"The robot, called RP-VITA, was created by InTouch Health [Santa Barbara, California] and iRobot [Bedford, Massachusetts] and allows doctors from anywhere in the world to communicate with patients at their hospital bedside via a telemedicine solution through an iPad interface.  

"According to iRobot and InTouch Health, RP-VITA combines the latest from iRobot in autonomous navigation and mobility technology with state-of-the-art telemedicine, and InTouch Health developed telemedicine and electronic health record integration.  

"RP-VITA makes it possible for doctors to have "doctor-to-patient consults, ensuring that the physician is in the right place at the right time and has access to the necessary clinical information to take immediate action."  

"The robot is used in ways that scientists have never before seen. In order to not get in the way of other people or objects, it outlines its own environment and utilizes a range of advanced sensors to autonomously move about a crowded space.

"Irrespective of a doctor's location, using an intuitive iPad® interface allows them to visit patients and communicate with their co-workers with a single click.  

"A clearance from the FDA means that RP-VITA can be used for active patient monitoring in pre-operative, peri-operative, and post-surgical settings, such as prenatal, neurological, psychological, and critical care evaluations and examinations.  

"InTouch Health is selling RP-VITA into the healthcare market as its new top-of-the-line remote presence device." (http://www.medicalnewstoday.com/articles/255457.php, accessed 01-27-2013).


"The Human Brain Project" is Launched, with the Goal of Creating a Supercomputer-Based Simulation of the Human Brain January 28, 2013

On January 28, 2013 The European Commission announced funding for The Human Brain Project.

From the press release:

"The goal of the Human Brain Project is to pull together all our existing knowledge about the human brain and to reconstruct the brain, piece by piece, in supercomputer-based models and simulations. The models offer the prospect of a new understanding of the human brain and its diseases and of completely new computing and robotic technologies. On January 28, the European Commission supported this vision, announcing that it has selected the HBP as one of two projects to be funded through the new FET Flagship Program.

"Federating more than 80 European and international research institutions, the Human Brain Project is planned to last ten years (2013-2023). The cost is estimated at 1.19 billion euros. The project will also associate some important North American and Japanese partners. It will be coordinated at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, by neuroscientist Henry Markram with co-directors Karlheinz Meier of Heidelberg University, Germany, and Richard Frackowiak of Centre Hospitalier Universitaire Vaudois (CHUV) and the University of Lausanne (UNIL).

The Swiss Contribution

"Switzerland plays a vital role in the Human Brain Project. Henry Markram and his team at EPFL will coordinate the project and will also be responsible for the development and operation of the project’s Brain Simulation Platform. Richard Frackowiak and his team will be in charge of the project’s medical informatics platform; the Swiss Supercomputing Centre in Lugano will provide essential supercomputing facilities. Many other Swiss groups are also contributing to the project. Through the ETH Board, the Swiss Federal Government has allocated 75 million CHF (approximately 60 million Euros) for the period 2013-2017, to support the efforts of both Henry Markram’s laboratory at EPFL and the Swiss Supercomputing Center in Lugano. The Canton of Vaud will give 35 million CHF (28 million Euros) to build a new facility called Neuropolis for in silico life science, and centered around the Human Brain Project. This building will also be supported by the Swiss Confederation, the Rolex Group and third-party sponsors.

"The selection of the Human Brain Project as a FET Flagship is the result of more than three years of preparation and a rigorous and severe evaluation by a large panel of independent, high profile scientists, chosen by the European Commission. In the coming months, the partners will negotiate a detailed agreement with the Community for the initial first two and a half year ramp-up phase (2013-mid 2016). The project will begin work in the closing months of 2013."


The First 3D Printed Bionic Organ: An Ear May 1, 2013

On May 1, 2013 Manu S. Mannoor and Ziwen Jiang of the Department of Mechanical and Aerospace Engineering at Princeton, Teena James from the Department of Chemical and Biomolecular Engineering at Johns Hopkins, and others published a letter entitled "3D Printed Bionic Ears" in Nano Letters, a journal of the American Chemical Society. In it they described and illustrated the first 3D printed bionic organ: an ear.

"Using 3D printers to create biological structures has become widespread. Printing electronics has made similar advances, particularly for low-cost, low-power disposable items. The first successful combination of these two technologies has recently been reported by a group of researchers at Princeton. They described their methods in a recent issue of ACS NANO Letters. They claim that their new device can receive a wide range of frequencies using a coiled antenna printed with silver nanoparticles. Interfacing their device to actual nerve is the next obvious step, begging the question — can it actually hear?  

"The Princeton researchers previously developed a tattoo composed of a sensor and an antenna that could be fixed to the surface of a tooth. It was made from a combination of silk, gold, and graphene, and had the ability to detect small amounts of bacteria. Building on their knowledge, that team joined up with researchers at Johns Hopkins to build the electronic ear. Their 3D printer combined calf cells with a hydrogel matrix material to form the ear cartilage, and silver to form the embedded antenna coil.

"In testing, they were able to pick up radiowaves in stereo using complimentary left and right side ears. Later on they hope to be able to detect acoustic energy directly using other built-in sensors. There are many ways this might be accomplished, the trick is to find a pressure-sensitive material that can be easily printed. Other researchers have used 3D printing of a material called carbomorph to create piezoresistive sensors that change resistance when bent or stressed. These researchers have also been able to print capacitive button sensors to measure changes in capacitance, and even connectors for hooking things together.  

"Printing biological structures that will be stable over time is a tricky business. The first stable 3D-printed ear was achieved not too long ago by researchers at Cornell using a very similar method. Since then, advances in bioprinting have progressed to ever smaller scales, culminating recently with a technique called 3D microdroplet printing. Using synthetic cell microdroplets, researchers could lay down geometrically precise tissues composed of human stem cells. These droplets could then undergo secondary developmental changes to their structure.  

"The bionic ear is a long way from something that might be used in humans, if that is even the intent of the authors. Successful printing of organs and tissues larger than just a cartilaginous ear will require supporting elements for bloodflow and nervous enervation. A test device for printed tissues and organs that might include these essential primitives will undoubtedly be needed soon. It may eventually come to resemble some kind of living proto-humanoid machine — and would probably be a little creepy-looking. However, asking lab animals to shoulder our test burden, may hopefully soon no longer be necessary" (http://www.extremetech.com/extreme/154893-researchers-create-worlds-first-3d-printed-bionic-organ, accessed 05-27-2013).


The First NeuroGaming Conference Takes Place May 1 – May 2, 2013

On May 1-2, 2013 the first NeuroGaming Conference and Expo took place at the YetiZen Innovation Lab, 540 Howard St., San Francisco. It was organized by Zack Lynch, founder of the Neurotechnology Industry Organization. Three hundred people attended.


The U.S. Supreme Court Rules that Genes Cannot be Patented June 13, 2013

On June 13, 2013 the US Supreme Court in Association for Molecular Pathology et al v. Myriad Genetics unanimously struck down the patents held by Myriad Genetics of Salt Lake City, Utah, on the DNA comprising BRCA1 and BRCA2. In their abnormal forms these two genes predispose women to a dramatically heightened risk of breast and/or ovarian cancer. Myriad Genetics had located the two genes, extracted them from the chromosomes housing them, and obtained the patents on the genes once they were isolated from the human body.

"The patents controlled by Myriad entitled the company to exclude all others from using the isolated DNA in breast cancer research, diagnostics, and treatment. The plaintiffs—who originally included biomedical scientists and clinicians, advocates for women’s health, and several women with or at risk for breast cancer—held that Myriad’s enforcement of its patents interfered with the progress of science and the delivery of medical services. They contended that genes, even if isolated, were legally ineligible for patents and that well-established tenets of patent law precluded the grant to any person or institution of a monopoly over a substance so essential to life, health, and science as human DNA" (Kevles, Daniel J. "The Genes you Can't Patent," New York Review of Books, September 26, 2013).


A Genetic Link to Skin Cancer is Found by Data Mining of Patient Records November 24, 2013

In a paper published in Nature Biotechnology on November 24, 2013, thirty-six researchers led by Joshua Denny, associate professor of biomedical informatics and medicine at Vanderbilt University, showed that data mining of electronic patient records is more cost-effective and faster than comparing the genomes of thousands of people with a disorder to the genomes of people who don't have the disorder.

"To identify previously unknown relationships between disease and DNA variants, Denny and colleagues grouped around 15,000 billing codes from medical records into 1,600 disease categories. Then, the researchers looked for associations between disease categories and DNA data available in each record.

"Their biggest new findings all involved skin diseases (just a coincidence, says Josh Denny, the lead author): non melanoma skin cancer and two forms of skin growths called keratosis, one of which is pre-cancerous. The team was able to validate the connection between these conditions and their associated gene variants in other patient data.

"Unlike the standard method of exploring the genetic basis of disease, electronic medical records (EMRs) allows researchers to look for genetic associations of many different diseases at once, which could lead to a better understanding of how some single genes may affect multiple characteristics or conditions. The approach may also be less biased than disease-specific studies.

"The study examined 13,000 EMRs, but in the future, similar studies could look benefit from much larger data sets. While not all patient records contain the genetic data needed to drive this kind of research, that is expected to change now that DNA analysis has become faster and more affordable in recent years and more and more companies and hospitals offer genetic analysis as part of medical care. When researchers have millions of EMRs at their finger tips, more subtle and complex effects of genes on disease and health could come to light. For example, it could allow for important studies on the genetics of drug side effects, which can be rare, affecting maybe 1 in 10,000 patients, Denny says" (http://www.technologyreview.com/view/521986/genetic-link-to-skin-cancer-found-in-medical-records/, accessed 11-25-2013).

Denny et al, "Systematic comparison of phenome-wide association study of electronic medical record data and genome-wide association study data," Nature biotechnology (2013)doi:10.1038/nbt.2749 


IBM Launches "Watson Discovery Advisor" to Hasten Breakthroughs in Scientific and Medical Research August 27, 2014

On August 27, 2014 IBM launched Watson Discovery Advisor, a computer system that could quickly identify patterns in massive amounts of data, with the expectation that the system would hasten breakthroughs in science and medical research. The system, which IBM made available through the cloud, understood chemical compound interactions and human language, and could visually map out connections in data. It used a number of computational techniques to deliver its results, including natural language processing, machine learning and hypothesis generation, in which a hypothesis is created and evaluated by a number of different analysis techniques. Baylor College of Medicine used the service to analyze 23 million abstracts of medical papers for information on the p53 tumor protein, in search of clues about how to turn it on or off. From these results, Baylor researchers identified six potential proteins to target for new research. Using traditional methods, it typically took researchers about a year to find a single potentially useful target protein, IBM said.
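IBM has not disclosed how Discovery Advisor ranks candidates. One classic ingredient of literature-based hypothesis generation, though, is co-occurrence ranking across abstracts; the Python sketch below illustrates only that ingredient, with invented abstracts and an invented entity list:

    import re
    from collections import Counter

    # Toy literature-based discovery: rank entities that co-occur with a
    # target term across abstracts. The entity list and the abstracts are
    # invented for illustration; they are not from the Baylor study.
    KINASES = {"CK2", "AURKB", "PLK1"}

    def rank_cooccurring(abstracts, target="p53"):
        counts = Counter()
        for text in abstracts:
            tokens = set(re.findall(r"[A-Za-z0-9]+", text))
            if target in tokens:                  # abstract mentions the target
                counts.update(tokens & KINASES)   # credit co-mentioned entities
        return counts.most_common()

    abstracts = [
        "Phosphorylation of p53 by CK2 modulates activity",
        "AURKB expression in tumors lacking p53",
        "PLK1 inhibitors in mitosis",
    ]
    print(rank_cooccurring(abstracts))   # CK2 and AURKB surface; PLK1 does not

A production system layers entity recognition, relationship extraction and hypothesis scoring on top of this kind of raw signal, which is what makes reading 70,000 articles in days rather than decades feasible.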

According to an article by Reuters published in The New York Times,

"Some researchers and scientists have already been using Watson Discovery Advisor to sift through the sludge of scientific papers published daily.

"Johnson & Johnson is teaching the system to read and understand trial outcomes published in journals to speed up studies of effectiveness of drugs.

"Sanofi, a French pharmaceutical company is working with Watson to identify alternate uses for existing drugs.

" 'On average, a scientist might read between one and five research papers on a good day,' said Dr. Olivier Lichtarge, investigator and professor of molecular and human genetics, biochemistry and molecular biology at Baylor College of Medicine.

"He used Watson to automatically analyze 70,000 articles on a particular protein, a process which could have taken him nearly 38 years.

“ 'Watson has demonstrated the potential to accelerate the rate and the quality of breakthrough discoveries,' he said."
