4406 entries. 94 themes. Last updated December 26, 2016.

2010 to 2012 Timeline


"Atlas of Science: Visualizing What We Know" 2010

In 2010 Katy Börner, Professor of Information Science at the School of Library and Information Science, Adjunct Professor at the School of Informatics and Computing, Adjunct Professor in the Department of Statistics in the College of Arts and Sciences, Core Faculty of Cognitive Science, Research Affiliate of the Center for Complex Networks and Systems Research and Biocomplexity Institute, Member of the Advanced Visualization Laboratory, Leader of the Information Visualization Lab, and Founding Director of the Cyberinfrastructure for Network Science Center at Indiana University in Bloomington, Indiana, and Visiting Professor at the Royal Netherlands Academy of Arts and Sciences (KNAW) in The Netherlands, published Atlas of Science: Visualizing What We Know (Cambridge, Massachusetts: MIT Press). This spectacular full-color oblong folio book with a historical orientation represented one of the most original, and certainly one of the most beautiful, scholarly works ever published in the history of science. It grew out of an exhibit, Places & Spaces: Mapping Science, a ten-year exhibition program that went public in 2005. This exhibit, which pulled together the work of numerous collaborators, was

"meant to inspire cross-disciplinary discussion on how to best track and communicate human activity and scientific progress on a global scale. It has two components: the physical part supports the close inspection of high quality reproductions of maps for display at conferences and education centers; the online counterpart provides links to a selected series of maps and their makers along with detailed explanations of how these maps work. The exhibit is a 10-year effort. Each year, 10 new maps are added resulting in 100 maps total in 2014."


"Cartographies of Time: A History of the Timeline" 2010

In 2010 American intellectual historians Daniel Rosenberg and Anthony Grafton published Cartographies of Time with Princeton Architectural Press in New York. A fully illustrated history of the graphic visualization of time and of timelines from circa 400 CE to the present, this beautifully designed and produced work is an authoritative contribution to the history of information graphics with respect to time. In writing this work the authors drew attention to an aspect of historical writing that is generally undervalued by historians:

"Another reason for the gap in our historical and theoretical understanding of timelines is the relatively low status that we generally grant to chronology as a kind of study. Though we use chronologies all the time, and could not do without them, we typically see them as only distillations of complex historical narratives and ideas. Chronologies work, and—as far as most people are concerned—that's enough. But, as we will show in this book, it wasn't always so; from the classical period to the Renaissance in Europe, chronology was among the most revered of scholarly pursuits. Indeed, in some respects, it held a status higher than the study of history itself. While history dealt in stories, chronology dealt in facts. Moreover, the facts of chronology had significant implications outside of the academic study of history. For Christians, getting chronology right was the key to many practical matters such as knowing when to celebrate Easter and weighty ones such as knowing when the Apocalypse was nigh.

"Yet, as historian Hayden White has argued, despite the clear cultural importance of chronology, it has been difficult to induce Western historians to think of it as anything more than a rudimentary form of historiography. The traditional account of the birth of modern historical thinking traces a path from the enumerated (but not yet narrated) medieval date lists called annals, through the narrated (but not yet narrative) accounts called chronicles, to fully narrative forms of historiography that emerge with modernity itself. According to this account, for something to qualify as historiography, it is not enough that it 'deal in real, rather than merely imaginary, events; and it is not enough that [it represent] events in its order of discourse according to the chronological framework in which they originally occurred. The events must be...revealed as possessing a structure, an order of meaning, that they do not possess as mere sequence.' Long thought of as 'mere sequences,' in our histories of history, chronologies have usually been left out.

"But, as White argues, there is nothing 'mere' in the problem of assembling coherent chronologies or their visual analogues. Like their modern successors, traditional chronographic forms performed both rote historical work and heavy conceptual lifting. They assembled, selected, and organized diverse bits of historical information in the form of dated lists. And the chronologies of a given period may tell us as much about its visions of past and future as do its historical narratives" (Rosenberg & Grafton, p. 11).


Biological Journals to Require Data-Archiving January 2010

"To promote the preservation and fuller use of data, The American Naturalist, Evolution, the Journal of Evolutionary Biology, Molecular Ecology, Heredity, and other key journals in evolution and ecology will soon introduce a new data‐archiving policy. The policy has been enacted by the Executive Councils of the societies owning or sponsoring the journals. For example, the policy of The American Naturalist will state:  

"This journal requires, as a condition for publication, that data supporting the results in the paper should be archived in an appropriate public archive, such as GenBank, TreeBASE, Dryad, or the Knowledge Network for Biocomplexity. Data are important products of the scientific enterprise, and they should be preserved and usable for decades in the future. Authors may elect to have the data publicly available at time of publication, or, if the technology of the archive allows, may opt to embargo access to the data for a period up to a year after publication. Exceptions may be granted at the discretion of the editor, especially for sensitive information such as human subject data or the location of endangered species.  

"This policy will be introduced approximately a year from now, after a period when authors are encouraged to voluntarily place their data in a public archive. Data that have an established standard repository, such as DNA sequences, should continue to be archived in the appropriate repository, such as GenBank. For more idiosyncratic data, the data can be placed in a more flexible digital data library such as the National Science Foundation–sponsored Dryad archive at http://datadryad.org"  (http://www.journals.uchicago.edu/doi/full/10.1086/650340, accessed 01-22-2010).


"The Never-Ending Language Learning System" January 2010

Supported by DARPA and Google, in January 2010 Tom M. Mitchell and his team at Carnegie Mellon University initiated the Never-Ending Language Learning System, or NELL, in an effort to develop a method for machines to teach themselves semantics, or the meaning of language.

"Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day" (http://www.nytimes.com/2010/10/05/science/05compute.html?_r=1&hpw). 

"NELL has been in continuous operation since January 2010. For the first 6 months it was allowed to run without human supervision, learning to extract instances of a few hundred categories and relations, resulting in a knowledge base containing approximately a third of a million extracted instances of these categories and relations. At that point, it had improved substantially its ability to read three quarters of these categories and relations (with precision in the range 90% to 99%), but it had become inaccurate in extracting instances of the remaining fourth of the ontology (many had precisions in the range 25% to 60%).  

"The estimated precision of the beliefs it had added to its knowledge base at that point was 71%. We are still trying to understand what causes it to become increasingly competent at reading some types of information, but less accurate over time for others. Beginning in June, 2010, we began periodic review sessions every few weeks in which we would spend about 5 minutes scanning each category and relation. During this 5 minutes, we determined whether NELL was learning to read it fairly correctly, and in case not, we labeled the most blatant errors in the knowledge base. NELL now uses this human feedback in its ongoing training process, along with its own self-labeled examples. In July, a spot test showed the average precision of the knowledge base was approximately 87% over all categories and relations. We continue to add new categories and relations to the ontology over time, as NELL continues learning to populate its growing knowledge base" (http://rtw.ml.cmu.edu/rtw/overview, accessed 10-06-2010).
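The periodic review sessions described above amount to estimating knowledge-base precision from a hand-labeled random sample of extracted beliefs. A minimal sketch of that calculation, assuming a hypothetical belief format and labeling function (not NELL's actual code):

```python
import random

def spot_check_precision(beliefs, is_correct, sample_size=100):
    """Estimate the precision of a knowledge base by labeling a random
    sample of its beliefs, as in NELL's spot tests. `is_correct` stands
    in for the human judgment of whether a belief is true."""
    sample = random.sample(beliefs, min(sample_size, len(beliefs)))
    labeled_correct = sum(1 for belief in sample if is_correct(belief))
    return labeled_correct / len(sample)

# Toy knowledge base: (category, instance, ground truth) triples.
kb = [("city", f"City{i}", True) for i in range(87)] + \
     [("city", f"NotACity{i}", False) for i in range(13)]
print(spot_check_precision(kb, lambda b: b[2]))  # 0.87 here, since the
# sample covers all 100 beliefs
```

In practice only a small sample of a third of a million beliefs can be labeled, so the estimate (e.g. the 71% and 87% figures above) carries sampling error that shrinks with sample size.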


"Whatever Happened to Second Life?" January 4, 2010

On January 4, 2010 Barry Collins, the news, features, and online editor of PCPro, published "Whatever Happened to Second Life?" on PCPro.co.uk:

"Three years ago, I underwent one of the most eye-opening experiences of my life – and I barely even left the office.  

"I spent a week virtually living and breathing inside Second Life: the massively multiplayer online world that contains everything from lottery games to libraries, penthouses to pubs, skyscrapers to surrogacy clinics.

"Oh, and an awful lot of virtual sex.  

"Back then, the world and his dog were falling over themselves to “be a part of it”. Rock stars were queuing up to play virtual gigs, Microsoft and IBM were setting up elaborate pixellated offices to host staff training seminars, Reuters even despatched a correspondent to report back on the latest in-world developments.

"At its peak, the Second Life economy had more money swilling about than several third-world countries. It had even produced its own millionaire, Anshe Chung, who made a very real fortune from buying and selling property that existed only on Second Life servers.  

"Three years on, and the hype has been extinguished. Second Life has seen its status as the web wonderchild supplanted by Facebook and Twitter. The newspapers have forgotten about it, the Reuters correspondent has long since cleared his virtual desk, and you can walk confidently around tech trade shows without a ponytailed “Web 2.0 Consultant” offering to put your company on the Second Life map for the price of a company car.  "

"But what has happened to Second Life? Have the hundreds of thousands of registered players logged off and found a real life? Has the Second Life economy collapsed? And what’s become of the extroverts, entrepreneurs and evangelists I encountered on my first visit? There’s only one way to find out. I’m going back in."

"Has Second Life become a digital ghost town? Not according to its makers, Linden Labs. 'In total, users around the world have spent more than one billion hours in Second Life,' the company claimed in September.

"And it isn’t just using that big figure to distract attention from a slowing interest in the online world: 'user hours grew 33% year-on-year to an all-time high of 126 million in Q2 2009,' Linden insists."

"A little research soon reveals why Second Life seems a lot quieter than the numbers suggest. In June, the company opened Zindra – Second Life’s 'adult continent', a huge plot of the virtual universe dedicated to content rated as 'mature', 'adult' or even 'PG'.  

"Given that sex and gambling accounted for the majority of the 'most popular places' when I first visited, it was suddenly apparent why I was as lonely as a cloud in the parts of the Second Life universe that wouldn’t upset the clergy.  

"So why did Linden establish its very own red-light district? It seems the company decided it was time to clean up its act. In 2008, a management shake-up saw founder and CEO Philip Rosedale move into the role of chairman; his replacement was Mark Kingdon, a man who spent 12 years as a partner at PriceWaterhouseCoopers – about as far from Linden’s 'anything goes' culture as you could possibly get."

"Kingdon apparently realised that companies such as IBM (which has more than 50 in-game properties) and Microsoft don’t want their reputations sullied by being part of a virtual world where XXX DANA’S NAUGHTY PLAYHOUSE XXX is the star attraction.

"So instead of bulldozing the sex shops and brothels, Linden decided to relocate them to their own dedicated island. Now Big Blue and the blue-movie theatres can both comfortably entertain their clients, and never the twain shall meet.

"Other vices were quashed a little less amicably. In 2007, Linden caused enormous upset after shutting down casinos and other in-world gambling dens overnight, following an FBI investigation into whether the site was breaking the US ban on online gambling. People who’d invested enormous amounts of time and hard cash into developing their own casinos found they’d literally been wiped off the map, without compensation" (http://www.pcpro.co.uk/features/354457/whatever-happened-to-second-life/1, accessed 01-27-2010).


"The World's First Full-Size Robotic Girlfriend" January 9, 2010

On January 9, 2010 artificial intelligence engineer Douglas Hines of TrueCompanion.com introduced Roxxxy at the AVN Adult Entertainment Expo in Las Vegas, Nevada.

" 'She doesn't vacuum or cook, but she does almost everything else,' said her inventor, Douglas Hines, who unveiled Roxxxy last month at the Adult Entertainment Expo in Las Vegas, Nevada.

"Lifelike dolls, artificial sex organs and sex-chat phone lines have been keeping the lonely company for decades. But Roxxxy takes virtual companionship to a new level. Powered by a computer under her soft silicone skin, she employs voice-recognition and speech-synthesis software to answer questions and carry on conversations. She even comes loaded with five distinct 'personalities,' from Frigid Farrah to Wild Wendy, that can be programmed to suit customers' preferences.

" 'There's a tremendous need for this kind of product,' said Hines, a computer scientist and former Bell Labs engineer. Roxxxy won't be available for delivery for several months, but Hines is taking pre-orders through his Web site, TrueCompanion.com, where thousands of men have signed up. 'They're like, 'I can't wait to meet her,' ' Hines said. 'It's almost like the anticipation of a first date.' Women have inquired about ordering a sex robot, too. Hines says a female sex therapist even contacted him about buying one for her patients.

"Roxxxy has been like catnip to talk-show hosts since her debut at AEE, the largest porn-industry convention in the country. In a recent monologue, Jay Leno expressed amazement that a sex robot could carry on lifelike conversations and express realistic emotions. 'Luckily, guys,' he joked, 'there's a button that turns that off.' Curious conventioneers packed Hines' AEE booth last month in Las Vegas, asking questions and stroking Roxxxy's skin as she sat on a couch in a black negligee.

" 'Roxxxy generated a lot of buzz at AEE,' said Grace Lee, spokeswoman for the porn-industry convention. 'The prevailing sentiment of everyone I talked to about Roxxxy is 'version 1.0,' but people were fascinated by the concept, and it caused them to rethink the possibilities of 'sex toys.' '

"Hines, a self-professed happily married man from Lincoln Park, New Jersey, says he spent more than three years developing the robot after trying to find a marketable application for his artificial-intelligence technology. Roxxxy's body is made from hypoallergenic silicone -- the kind of stuff in prosthetic limbs -- molded over a rigid skeleton. She cannot move on her own but can be contorted into almost any natural position. To create her shape, a female model spent a week posing for a series of molds. The robot runs on a self-contained battery that lasts about three hours on one charge, Hines says. Customers can recharge Roxxxy with an electrical cord that plugs into her back.

"A motor in her chest pumps heated air through a tube that winds through the robot's body, which Hines says keeps her warm to the touch. Roxxxy also has sensors in her hands and genital areas -- yes, she is anatomically correct -- that will trigger vocal responses from her when touched. She even shudders to simulate orgasm. When someone speaks to Roxxxy, her computer converts the words to text and then uses pattern-recognition software to match them against a database containing hundreds of appropriate responses. The robot then answers aloud -- her prerecorded 'voice' is supplied by an unnamed radio host -- through a loudspeaker hidden under her wig.

" 'Everything you say to her is processed. It's very near real time, almost without delay,' Hines said of the dynamics of human-Roxxxy conversation. 'To make it as realistic as possible, she has different dialogue at different times. She talks in her sleep. She even snores.' (The snoring feature can be turned off, he says.) Roxxxy understands and speaks only English for now, but Hines' True Companion company is developing Japanese and Spanish versions. For an extra fee, he'll also record customizable dialogue and phrases for each client, which means Roxxxy could talk to you about NASCAR, say, or the intricacies of politics in the Middle East" (http://www.cnn.com/2010/TECH/02/01/sex.robot/, accessed 02-06-2010).
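The dialogue mechanism described above, matching recognized text against a database of canned responses, resembles classic keyword-matching chatbots. A minimal sketch of the general technique, with patterns and replies invented for illustration (not Roxxxy's actual database):

```python
import re

# Hypothetical pattern/response pairs, checked in order.
RESPONSES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! It's good to hear your voice."),
    (re.compile(r"\bhow are you\b", re.I), "I'm doing well, thanks for asking."),
    (re.compile(r"\bnascar\b", re.I), "I love watching the races on Sundays."),
]
DEFAULT = "Tell me more."

def respond(utterance):
    """Return the first canned response whose pattern matches the input,
    or a generic fallback when nothing matches."""
    for pattern, reply in RESPONSES:
        if pattern.search(utterance):
            return reply
    return DEFAULT

print(respond("Hi there"))        # "Hello! It's good to hear your voice."
print(respond("quantum physics")) # "Tell me more."
```

A lookup like this runs in near real time, which is consistent with Hines's "almost without delay" claim; the hard problems lie in the speech recognition and synthesis on either side of it.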

In December 2013 I revisited the Truecompanion.com website, which then advertised Roxxxy as "World's First Sex Robot: Always Turned on and Ready to Talk or Play." By then the company had diversified into three models of female sex robots, and was planning to introduce Rocky, a male sex robot: "Rocky is described as everyone's dream date! – just imagine putting together a great body along with a sparkling personality where your man is focused on making you happy!"


After the Earthquake in Haiti, Donating by SMS Text January 13, 2010

After the disastrous earthquake in Haiti, donors could send aid money by text message from their cell phones, with the $10 donation added to the sender's cell phone bill. In the case of the Red Cross, one could "send a $10 Donation by Texting 'Haiti' to 90999," or donate by phone or by credit card on the Red Cross website, or through social networking sites.


World Texting Competition is Won by Koreans January 14, 2010

The first LG Mobile Worldcup SMS texting championship took place in New York on January 14, 2010.

“ 'When others watch me texting, they think I’m not that fast and they can do better,' said Mr. Bae, 17, a high school dropout who dyes his hair a light chestnut color and is studying to be an opera singer. 'So far, I’ve never lost a match.'

"In the New York competition he typed six characters a second. 'If I can think faster I can type faster,' he said.

"The inaugural Mobile World Cup, hosted by the South Korean cellphone maker LG Electronics, brought together two-person teams from 13 countries who had clinched their national titles by beating a total of six million contestants. Marching behind their national flags, they gathered in New York on Jan. 14 for what was billed as an international clash of dexterous digits" (http://www.nytimes.com/2010/01/28/world/asia/28seoul.html, accessed 01-28-2010).


Exploit Code for Attacks on Google Released on the Internet January 15, 2010

"Exploit code for the zero-day hole in Internet Explorer linked to the China-based attacks on Google and other companies has been released on the Internet, Microsoft and McAfee warned on Friday.

"Meanwhile, the German federal security agency issued a statement on Friday urging its citizens to use an alternative browser to IE until a patch arrives.  

" 'We still only see limited targeted attacks affecting Internet Explorer 6,' Jerry Bryant, senior security program manager lead at the Microsoft Security Response Center, said in a statement. 'While newer versions of Internet Explorer are affected by this vulnerability, mitigations exist that make exploitation much more difficult.'

"McAfee researchers have seen references to the code on mailing lists and confirmed that it has been published on at least one Web site, the company's Chief Technology Officer George Kurtz wrote in his blog. 'The exploit code is the same code that McAfee Labs had been investigating and shared with Microsoft earlier this week,' he said.

" 'The public release of the exploit code increases the possibility of widespread attacks using the Internet Explorer vulnerability,' Kurtz wrote. 'The now-public computer code may help cybercriminals craft attacks that use the vulnerability to compromise Windows systems. Popular penetration testing tools are already being updated to include this exploit.' Microsoft issued a warning on Thursday about the new hole and said it was working on a patch. The vulnerability affects IE 6, 7 and 8 on all the modern versions of Windows, including Windows 7, according to Microsoft's advisory. Microsoft said IE 6 was the browser version being used on the computers that were targeted in the attacks. Google disclosed the attacks targeting it and other U.S. companies on Tuesday and said the attacks originated in China. Human rights activists who use Gmail also were targeted, Google said.

"The company said it discovered the attacks in mid-December and while it did not specifically implicate the Chinese government, it says that as a result of the incidents, it may withdraw from doing business in China. Sources familiar with the attack code say the attacks are similar to previous attacks on U.S. corporations that were linked to the Chinese government or proxies operating for the government. Source code was stolen from some of the more than 30 Silicon Valley companies targeted in the attack, sources said. Adobe has confirmed that it was targeted by an attack, and sources have said Yahoo, Symantec, Juniper Networks, Northrop Grumman, and Dow Chemical also were targets.

"McAfee says references in the IE-related attack code it analyzed indicate that the attackers called the operation 'Aurora' and that the attack was extremely sophisticated" (http://news.cnet.com/8301-27080_3-10436083-245.html, accessed 01-16-2010).


Steve Jobs Introduces the iPad, the First Widely Sold Tablet Computer January 27, 2010

On January 27, 2010 Steve Jobs of Apple introduced the iPad, the first widely sold tablet computer. The first iPad was one-half inch thick, with a 9.7-inch (diagonal) high-resolution multi-touch color display, powered by a 1-gigahertz Apple A4 chip with 16 to 64 gigabytes of flash storage, weighing 1.5 pounds, and capable of running all iPhone applications except, presumably, the phone. Battery life was rated at 10 hours, and the device was said to hold a charge for one month in standby. The price started at $499.

"The new device will have to be far better than the laptop and smartphone at doing important things: browsing the Web, doing e-mail, enjoying and sharing photographs, watching videos, enjoying your music collection, playing games, reading e-books. Otherwise, 'it has no reason for being.'" (http://bits.blogs.nytimes.com/2010/01/27/live-blogging-the-apple-product-announcement/?hp, accessed 01-27-2010).


"Assessing the Future Landscape of Scholarly Communication. . . " February 2010

In February 2010 biosocial anthropologist Diane Harley, director of the Higher Education in the Digital Age (HEDA) project at the Center for Studies in Higher Education at the University of California at Berkeley, and colleagues, published Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines.

"Since 2005, the Center for Studies in Higher Education (CSHE)... has been conducting research to understand the needs and practices of faculty for in-progress scholarly communication (i.e., forms of communication employed as research is being executed) as well as archival publication. The complete results of our work can be found at the Future of Scholarly Communication’s project website. This report brings together the responses of 160 interviewees across 45, mostly elite, research institutions to closely examine scholarly needs and values in seven selected academic fields: archaeology, astrophysics, biology, economics, history, music, and political science.

"The report is divided into eight chapters and can be read in its entirety online (733 pages) or can be downloaded in a PDF file, as can any individual chapter" (http://escholarship.org/uc/cshe_fsc, accessed 02-12-2010).


Liberté ou la Mort: Julia Gaffield Discovers Haiti's Declaration of Independence February 2010

In February 2010 Julia Gaffield, a Canadian graduate student at Duke University, discovered in the British National Archives the only known copy of the declaration of independence for Haiti, an 8-page pamphlet headed Liberté ou La Mort.

Prior to Gaffield's discovery no copy of this document was known.


Social Media Interviews the President February 1, 2010

On February 1, 2010 Steve Grove, Head of News and Politics at YouTube, interviewed President Barack Obama on YouTube's CitizenTube.com:

"The President responded to your questions in a live YouTube interview at the White House on Monday, February 1st.

"You submitted over 11,000 questions and cast over 667,000 votes after the President's State of the Union address last week. We collected the top questions, ensuring we covered a range of issues, minimized duplicate questions, and included both video and text submissions" (http://www.youtube.com/user/citizentube#p/c/EB843ABAF59735FD, accessed 02-02-2010).

This was the first time that a sitting president was interviewed by social media, rather than broadcast news media.


Modifiable eBook Editions of Textbooks February 22, 2010

Macmillan announced on February 22, 2010 that it was introducing new software called DynamicBooks, which would let college professors curate ebooks for their own courses. They could add paragraphs, bring in extra sources, links, and updates—without having to consult with the original author.

According to the New York Times, students would be able to purchase the books at their local university stores, as well as at dynamicbooks.com and through CourseSmart, an e-textbook seller. The company was also working with Apple so students could access the books on the iPad. In August it would offer 100 titles.

"The modifiable e-book editions will be much cheaper than traditional print textbooks. “Psychology,” for example, which has a list price of $134.29 (available on Barnes & Noble’s Web site for $122.73), will sell for $48.76 in the DynamicBooks version. Macmillan is also offering print-on-demand versions of the customized books, which will be priced closer to traditional textbooks.  

"Fritz Foy, senior vice president for digital content at Macmillan, said the company expected e-book sales to replace the sales of used books. Part of the reason publishers charge high prices for traditional textbooks is that students usually resell them in the used market for several years before a new edition is released. DynamicBooks, Mr. Foy said, will be “semester and classroom specific,” and the lower price, he said, should attract students who might otherwise look for used or even pirated editions" (http://www.nytimes.com/2010/02/22/business/media/22textbook.html?scp=1&sq=publishing%2002/22/2010&st=cse, accessed 02-23-2010).


The First Superman Comic Book Sells for $1,000,000 February 22, 2010

On February 22, 2010 the web auction site ComicConnect.com in New York sold the first edition of the first Superman comic book, Action Comics #1, for $1,000,000.

"ComicConnect.com, one of the industry’s leading online auction/consignment sites, just sold an extremely rare, top-condition copy of the world’s most coveted comic book for exactly $1,000,000. That figure is more than three times higher than the prior record-holder, also set by ComicConnect.com.

"That comic book, of course, is Action Comics #1, which marked the debut of Superman in 1938 and promptly changed the course of pop culture forever.

" 'This particular copy has been in a private collection for more than 15 years, and it’s likely to disappear again once it’s been turned over to its new owner. However, ComicConnect.com will allow the media to view it briefly in its New York City showroom (873 Broadway, Suite 201, 212-895-3999). The showroom is also home to ComicConnect.com’s affiliate, Metropolis Collectibles (metropoliscomics.com), the largest vintage comic book dealer in the world.

" 'It’s the Holy Grail of comic books,' says founder Stephen Fishler, one of the leading experts on collectible comics.

“ 'Before Action Comics #1, there was no such thing as a superhero or a man who could fly,' notes Fishler, who created the 10-point grading scale which today is used universally to evaluate the condition of comic books.

“ 'It’s the single most important event in comic book history,' adds ComicConnect.com co-owner and COO, Vincent Zurzolo.

"Only about 100 copies of Action Comics #1 remain in existence, and of those 100, only two have received a grading of 8.0 (Very Fine) or higher. This particular book is one of them, making it among the rarest of the rare.

"Up until now, the record-holder was another Action Comics #1, this one with a grading of 6.0. It sold on ComicConnect.com for $317,200 in 2009.

"According to the Overstreet Price Guide to Comic Books—the industry bible—Action Comics #1 is indisputably the highest-valued comic book of all time. In second place is Detective Comics #27, which marked the first appearance of Batman in 1939. An Action Comics #1 graded 8.0 or higher is priced about 25% higher than a comparable Detective Comics #27" (http://www.comicconnect.com/, accessed 02-25-2010).


The Sociology of Wikipedians March 2010

A joint study of the Wikipedia contributor population by the United Nations University and the Maastricht Economic and Social Research and Training Centre on Innovation and Technology (UNU-MERIT), published in March 2010, suggested that Wikipedians (contributors to the Wikipedia) were over 85% male and in their mid-20s.


The First Brain-Computer Interface Product Offered for Sale March 2 – March 6, 2010

At the CeBIT exhibition in Hannover, Germany, March 2-6, 2010, Christoph Guger of Guger Technologies (g.tec) of Graz, Austria, offered intendiX, "the world's first personal Brain Computer Interface speller," for sale at the retail price of €9,000.

"The world’s first patient-ready and commercially available brain computer interface just arrived at CeBIT 2010. The Intendix from Guger Technologies (g*tec) is a system that uses an EEG cap to measure brain activity in order to let you type with your thoughts. Meant to work with those with locked-in syndrome, or other disabilities, Intendix is simple enough to use after just 10 minutes of training. You simply focus on a grid of letters as they flash. When your desired letter lights up, brain activity spikes and Intendix types it. As users master the system, a few will be able to type as quickly as 1 letter a second. Besides typing, it can also trigger alarms, convert text to speech, print, copy, or email" (http://singularityhub.com/2010/03/07/intendix-the-brain-computer-interface-goes-commercial-video/, accessed 03-16-2010).

♦ In December 2013 a video of intendiX in operation, entitled Select words by thinking—world record, was available from YouTube at this link.

View Map + Bookmark Entry

Google Pulls its Search Engine Out of Mainland China March 22, 2010

Google announced in its blog on March 22, 2010 that it had stopped censoring search services on Google.cn and moved its Chinese search business from Google.cn to Google.com.hk.

"Users visiting Google.cn are now being redirected to Google.com.hk, where we are offering uncensored search in simplified Chinese, specifically designed for users in mainland China and delivered via our servers in Hong Kong. Users in Hong Kong will continue to receive their existing uncensored, traditional Chinese service, also from Google.com.hk. Due to the increased load on our Hong Kong servers and the complicated nature of these changes, users may see some slowdown in service or find some products temporarily inaccessible as we switch everything over" (http://googleblog.blogspot.com/2010/03/new-approach-to-china-update.html, accessed 03-22-2010)

View Map + Bookmark Entry

The Vatican Library Plans the Scanning of all its Manuscripts into the FITS Document Format March 24, 2010

"Digital manuscripts: an initiative of the Vatican Library

"by Cesare Pasini

"The digitization of 80,000 manuscripts of the Vatican Library, it should be realized, is not a light-hearted project. Even with only a rough calculation one can foresee the need to reproduce 40 million pages with a mountain of computer data, to the order of 45 petabytes (that is, 45 million billion bytes). This obviously means pages variously written and illustrated or annotated, to be photographed with the highest definition, to include the greatest amount of data and avoid having to repeat the immense undertaking in the future.  

"And these are delicate manuscripts, to be treated with care, without causing them damage of any kind. A great undertaking for the benefit of culture and in particular for the preservation and conservation of the patrimony entrusted to the Apostolic Library, in the tradition of a cultural service that the Holy See continues to express and develop through the centuries, adapting its commitment and energy to the possibilities offered by new technologies.  

"The technological project of digitization with its various aspects is now ready. In the past two years, a technical feasibility study has been prepared with the contribution of the best experts, internal, external and also international. This resulted in a project of a great and innovative value from various points of view: the realization of the photography, the electronic formats for conservation, the guaranteed stability of photographs over time, the maintenance and management of the archives, and so forth.  

"This project may be achieved over a span of 10 years divided into three phases, with possible intervals between them. In a preliminary phase the involvement of 60 people is planned, including photographers and conservator-verifiers, in the second and third phases at least 120. Before being able to initiate an undertaking of this kind, which is causing some anxiety to those in charge of the library (and not only to them!), naturally it will be necessary to find the funds. Moves have already been made in this direction with some positive results.  

"The second announcement is that some weeks ago the “test bed” was set up; in other words the “bench test” that will make it possible to try out and examine the whole structure of the important project that has been studied and formulated so as to guarantee that it will function properly when undertaken in its full breadth.  

"The work of reproduction uses two different machines, depending on the type of material to be reproduced: one is a Metis Systems scanner, kindly lent to us free of charge by the manufacturers; the other is a 50 megapixel Hasselblad digital camera. Digitized images will be converted to the Flexible Image Transport System (FITS), a non-proprietary format that is extremely simple and was developed a few decades ago by NASA. It has been used for more than 40 years for the conservation of data concerning spatial missions and, in the past decade, in astrophysics and nuclear medicine. It permits the conservation of images without technical or financial problems in the future, since it is systematically updated by the international scientific community.  

"In addition to the servers that collect the images in FITS format accumulated by the two machines mentioned, another two servers have been installed to process the data to make it possible to search for images both by the shelf mark and the manuscript's descriptive elements, and also and above all by a graphic pattern, that is, by looking for similar images (graphic or figurative) in the entire digital memory.  

"The latter instrument, truly innovative and certainly interesting for all who intend to undertake research on the Vatican's manuscripts – only think of when it will be possible to do such research on the entire patrimony of manuscripts in the Library! – was developed from the technology of the Autonomy Systems company, a leading English firm in the field of computer science, to which, moreover, we owe the entire funding of the “test bed”.  

"For this “bench test”, set up in these weeks, 23 manuscripts are being used for a total of 7,500 digitized and indexed pages, with a mountain of computer data of about 5 terabytes (about 5,000 billion bytes).

"The image of the mustard seed springs to mind: the “test bed” is not much in comparison with the immensity of the overall project. But we know well that this seed contains an immense energy that will enable it to grow, to become far larger than the other plants and to give hospitality to the birds of the air. In accepting the promise guaranteed in the parable, let us also give hope of it to those who await the results of this project's realization" (http://www.vaticanlibrary.va/home.php?, pag=newsletter_art_00087&BC=11, accessed 03-24-2010).
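Part of what makes FITS attractive for long-term archiving is its simplicity: a FITS header is a sequence of fixed-width, 80-character ASCII "cards," each holding a keyword/value pair, terminated by an END card. A minimal stdlib-only sketch of reading such cards (the sample header below is hand-built for illustration, not taken from an actual Vatican scan):

```python
# Minimal sketch: parse the 80-character ASCII "cards" of a FITS header.
# FITS headers are fixed-width key/value text, one reason the format is
# easy to preserve and reimplement decades later.

def read_fits_header(raw: bytes) -> dict:
    """Return keyword/value pairs from a FITS header block."""
    cards = {}
    for i in range(0, len(raw), 80):
        card = raw[i:i + 80].decode("ascii")
        keyword = card[:8].strip()       # keyword occupies bytes 1-8
        if keyword == "END":             # END card terminates the header
            break
        if card[8:10] == "= ":           # value indicator in bytes 9-10
            # Strip any trailing comment after a '/'.
            cards[keyword] = card[10:].split("/")[0].strip()
    return cards

# A tiny hand-built header: SIMPLE, BITPIX, and END cards padded to 80 bytes.
demo = b"".join(
    s.encode("ascii").ljust(80)
    for s in ["SIMPLE  =                    T",
              "BITPIX  =                   16",
              "END"]
)
print(read_fits_header(demo))  # {'SIMPLE': 'T', 'BITPIX': '16'}
```

A real archive would of course use a full FITS library rather than a hand-rolled parser, but the exercise shows why the format imposes "neither technical nor financial problems in the future."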

View Map + Bookmark Entry

The Holy Grail of Holy Grails, Comicbook-wise March 29, 2010

On March 29, 2010 a superfine copy of Action Comics No. 1 from 1938, which features the first appearance of Superman, the "Man of Steel," and was graded 8.5 on the 10-point Fischler scale, was bought by an undisclosed buyer for a record $1.5 million on the online auction site ComicConnect.com.

" 'This is the Holy Grail of Holy Grails,' said Vincent Zurzolo, co-owner of the Web site.

"A copy of the same issue sold for $1 million in February, but this one fetched a higher price because it is in better condition. It was stored inside a movie magazine for the past 50 years, Zurzolo said. 

" 'The book looks like it just came off the presses yesterday,' said Zurzolo. 'The colors are extremely vivid, the whites behind the 'Action Comics' logo are snow white. It's just a stunning copy -- it almost looks brand new.' The sale of the Superman book marks the third time this year that a record was set for the sale of a comic book. The other copy of 'Action Comics' No. 1 held onto its record for only three days before a comic book featuring Batman's debut sold for $75,000 more at an auction in Dallas, Texas. 

"It's widely believed that there are 50 to 100 copies of Action Comics No. 1 floating around, which makes it exceedingly rare. However, the copy sold Monday has received the highest rating to date from the Certified Guaranty Company, an independent comic grading company in Sarasota, Florida. The company inspects comic books for imperfections, ranging from yellowing to slight creases. J.C. Vaughn, the associate publisher of The Overstreet Comic Book Price Guide, an annual publication considered the authority on comic book pricing, said the Action Comic No. 1 book sold Monday is worth every penny.

" 'The older any comic book gets, obviously the more unlikely you'll find it with a high rating,' said Vaughn.

" 'A book this old, featuring Superman's first appearance? I think this book warrants the price.' Back in 1938, there were 200,000 of these first editions printed and 130,000 sold, said Vaughn. The 70,000 other copies were destroyed.

"Zurzolo said it could be a while before another comic book sets a new mark, because only a few other comics have this type of value" (http://www.cnn.com/2010/SHOWBIZ/03/30/superman.comic/, accessed 04-02-2010).

View Map + Bookmark Entry

Probably the First Fully Visually Satisfying Interactive eBook April 5, 2010

On April 5, 2010 Theodore Gray, co-founder of Wolfram Research, makers of Mathematica, as well as a Popular Science columnist, and an element collector, issued the ebook version of his 2009 printed book, The Elements: A Visual Exploration of Every Known Atom in the Universe, for the Apple iPad.

Gray's ebook may have been the first interactive book to take full advantage of the features of the iPad, including splendid high resolution graphics, the ability to rotate objects, the ability to visualize objects in 3-dimensions using inexpensive 3-D glasses, and full connectivity to the Wolfram Alpha knowledge engine for additional data.

♦ In December 2013 a video in which Gray discussed the features, design, and production of the ebook The Elements was available from YouTube at this link.

View Map + Bookmark Entry

U.S. Book Sales in 2009: $23.9 Billion April 7, 2010

"The Association of American Publishers (AAP) has today [April 7, 2010] released its annual estimate of total book sales in the United States. The report, which uses data from the Bureau of the Census as well as sales data from eighty-six publishers inclusive of all major book publishing media market holders, estimates that U.S. publishers had net sales of $23.9 billion in 2009, down from $24.3 billion in 2008, representing a 1.8% decrease. In the last seven years the industry had a compound annual growth rate (CAGR) of 1.1%."

"Audio book sales for 2009 totaled $192 million, down 12.9% on the prior year, CAGR (compound annual growth rate) for this category is still healthy at 4.3%. E-books overtook audiobooks in 2009 with sales reaching $313 million in 2009, up 176.6%" (http://www.publishers.org/main/PressCenter/Archicves/2010_April/BookSalesEstimatedat23.9Billionin2009.htm)

View Map + Bookmark Entry

The First Pulitzer Prizes for Internet Journalism April 12, 2010

On April 12, 2010 Sheri Fink, MD, PhD, of New York-based ProPublica.org received the Pulitzer Prize in Investigative Reporting for her story, The Deadly Choices at Memorial. The story was published on the ProPublica website on August 27, 2009 and co-published in The New York Times Magazine on August 30, 2009.

Political cartoonist Mark Fiore, whose work appears on San Francisco-based SFGate.com, won the Pulitzer Prize for Editorial Cartooning. Fiore produced animated editorial cartoons for publication on the Internet.

These were the first Pulitzer Prizes awarded for Internet-based journalism.

View Map + Bookmark Entry

Google Announces "Replay" for Twitter April 14, 2010

"Since we first introduced real-time search last December, we’ve added content from MySpace, Facebook and Buzz, expanded to 40 languages and added a top links feature to help you find the most relevant content shared on updates services like Twitter. Today, we’re introducing a new feature to help you search and explore the public archive of tweets.  

"With the advent of blogs and micro-blogs, there’s a constant online conversation about breaking news, people and places — some famous and some local. Tweets and other short-form updates create a history of commentary that can provide valuable insights into what’s happened and how people have reacted. We want to give you a way to search across this information and make it useful.  

"Starting today, you can zoom to any point in time and 'replay' what people were saying publicly about a topic on Twitter. To try it out, click 'Show options' on the search results page, then select 'Updates.' The first page will show you the familiar latest and greatest short-form updates from a comprehensive set of sources, but now there’s a new chart at the top. In that chart, you can select the year, month or day, or click any point to view the tweets from that specific time period. . . ." (http://googleblog.blogspot.com/2010/04/replay-it-google-search-across-twitter.html, accessed 05-06-2010).

View Map + Bookmark Entry

The Library of Congress Will Preserve All "Tweets" April 14, 2010

On April 14, 2010 Twitter announced in its blog that it would donate to the Library of Congress its archive of 10,000,000,000 text messages (tweets) accumulated since the company's founding in 2006:

"The Library of Congress is the oldest federal cultural institution in the United States and it is the largest library in the world. The Library's primary mission is research and it receives copies of every book, pamphlet, map, print, and piece of music registered in the United States. Recently, the Library of Congress signaled to us that the public tweets we have all been creating over the years are important and worthy of preservation.

"Since Twitter began, billions of tweets have been created. Today, fifty-five million tweets a day are sent to Twitter and that number is climbing sharply. A tiny percentage of accounts are protected but most of these tweets are created with the intent that they will be publicly available. Over the years, tweets have become part of significant global events around the world—from historic elections to devastating disasters.  

"It is our pleasure to donate access to the entire archive of public Tweets to the Library of Congress for preservation and research. It's very exciting that tweets are becoming part of history. It should be noted that there are some specifics regarding this arrangement. Only after a six-month delay can the Tweets be used for internal library use, for non-commercial research, public display by the library itself, and preservation.

"The open exchange of information can have a positive global impact. This is something we firmly believe and it has driven many of our decisions regarding openness. Today we are also excited to share the news that Google has created a wonderful new way to revisit tweets related to historic events. They call it Google Replay because it lets you relive a real time search from specific moments in time.  

"Google Replay currently only goes back a few months but eventually it will reach back to the very first Tweets ever created. Feel free to give Replay a try—if you want to understand the popular contemporaneous reaction to the retirement of Justice Stevens, the health care bill, or Justin Bieber's latest album, you can virtually time travel and replay the Tweets. The future seems bright for innovation on the Twitter platform and so it seems, does the past!"

View Map + Bookmark Entry

"The Data-Driven Life" April 20, 2010

On April 20, 2010 writer Gary Wolf published "The Data-Driven Life" in The New York Times Magazine:

". . . . Another person I’m friendly with, Mark Carranza — he also makes his living with computers — has been keeping a detailed, searchable archive of all the ideas he has had since he was 21. That was in 1984. I realize that this seems impossible. But I have seen his archive, with its million plus entries, and observed him using it. He navigates smoothly between an interaction with somebody in the present moment and his digital record, bringing in associations to conversations that took place years earlier. Most thoughts are tagged with date, time and location. What for other people is an inchoate flow of mental life is broken up into elements and cross-referenced.  

"These men all know that their behavior is abnormal. They are outliers. Geeks. But why does what they are doing seem so strange? In other contexts, it is normal to seek data. A fetish for numbers is the defining trait of the modern manager. Corporate executives facing down hostile shareholders load their pockets full of numbers. So do politicians on the hustings, doctors counseling patients and fans abusing their local sports franchise on talk radio. Charles Dickens was already making fun of this obsession in 1854, with his sketch of the fact-mad schoolmaster Gradgrind, who blasted his students with memorized trivia. But Dickens’s great caricature only proved the durability of the type. For another century and a half, it got worse.

"Or, by another standard, you could say it got better. We tolerate the pathologies of quantification — a dry, abstract, mechanical type of knowledge — because the results are so powerful. Numbering things allows tests, comparisons, experiments. Numbers make problems less resonant emotionally but more tractable intellectually. In science, in business and in the more reasonable sectors of government, numbers have won fair and square. For a long time, only one area of human activity appeared to be immune. In the cozy confines of personal life, we rarely used the power of numbers. The techniques of analysis that had proved so effective were left behind at the office at the end of the day and picked up again the next morning. The imposition, on oneself or one’s family, of a regime of objective record keeping seemed ridiculous. A journal was respectable. A spreadsheet was creepy.  

"And yet, almost imperceptibly, numbers are infiltrating the last redoubts of the personal. Sleep, exercise, sex, food, mood, location, alertness, productivity, even spiritual well-being are being tracked and measured, shared and displayed. On MedHelp, one of the largest Internet forums for health information, more than 30,000 new personal tracking projects are started by users every month. Foursquare, a geo-tracking application with about one million users, keeps a running tally of how many times players “check in” at every locale, automatically building a detailed diary of movements and habits; many users publish these data widely. Nintendo’s Wii Fit, a device that allows players to stand on a platform, play physical games, measure their body weight and compare their stats, has sold more than 28 million units.  

"Two years ago, as I noticed that the daily habits of millions of people were starting to edge uncannily close to the experiments of the most extreme experimenters, I started a Web site called the Quantified Self with my colleague Kevin Kelly. We began holding regular meetings for people running interesting personal data projects. I had recently written a long article about a trend among Silicon Valley types who time their days in increments as small as two minutes, and I suspected that the self-tracking explosion was simply the logical outcome of this obsession with efficiency. We use numbers when we want to tune up a car, analyze a chemical reaction, predict the outcome of an election. We use numbers to optimize an assembly line. Why not use numbers on ourselves?  

"But I soon realized that an emphasis on efficiency missed something important. Efficiency implies rapid progress toward a known goal. For many self-trackers, the goal is unknown. Although they may take up tracking with a specific question in mind, they continue because they believe their numbers hold secrets that they can’t afford to ignore, including answers to questions they have not yet thought to ask.

"Ubiquitous self-tracking is a dream of engineers. For all their expertise at figuring out how things work, technical people are often painfully aware how much of human behavior is a mystery. People do things for unfathomable reasons. They are opaque even to themselves. A hundred years ago, a bold researcher fascinated by the riddle of human personality might have grabbed onto new psychoanalytic concepts like repression and the unconscious. These ideas were invented by people who loved language. Even as therapeutic concepts of the self spread widely in simplified, easily accessible form, they retained something of the prolix, literary humanism of their inventors. From the languor of the analyst’s couch to the chatty inquisitiveness of a self-help questionnaire, the dominant forms of self-exploration assume that the road to knowledge lies through words. Trackers are exploring an alternate route. Instead of interrogating their inner worlds through talking and writing, they are using numbers. They are constructing a quantified self.  

"UNTIL A FEW YEARS ago it would have been pointless to seek self-knowledge through numbers. Although sociologists could survey us in aggregate, and laboratory psychologists could do clever experiments with volunteer subjects, the real way we ate, played, talked and loved left only the faintest measurable trace. Our only method of tracking ourselves was to notice what we were doing and write it down. But even this written record couldn’t be analyzed objectively without laborious processing and analysis.

"Then four things changed. First, electronic sensors got smaller and better. Second, people started carrying powerful computing devices, typically disguised as mobile phones. Third, social media made it seem normal to share everything. And fourth, we began to get an inkling of the rise of a global superintelligence known as the cloud.

"Millions of us track ourselves all the time. We step on a scale and record our weight. We balance a checkbook. We count calories. But when the familiar pen-and-paper methods of self-analysis are enhanced by sensors that monitor our behavior automatically, the process of self-tracking becomes both more alluring and more meaningful. Automated sensors do more than give us facts; they also remind us that our ordinary behavior contains obscure quantitative signals that can be used to inform our behavior, once we learn to read them."

". . . . Adler’s idea that we can — and should — defend ourselves against the imposed generalities of official knowledge is typical of pioneering self-trackers, and it shows how closely the dream of a quantified self resembles therapeutic ideas of self-actualization, even as its methods are startlingly different. Trackers focused on their health want to ensure that their medical practitioners don’t miss the particulars of their condition; trackers who record their mental states are often trying to find their own way to personal fulfillment amid the seductions of marketing and the errors of common opinion; fitness trackers are trying to tune their training regimes to their own body types and competitive goals, but they are also looking to understand their strengths and weaknesses, to uncover potential they didn’t know they had. Self-tracking, in this way, is not really a tool of optimization but of discovery, and if tracking regimes that we would once have thought bizarre are becoming normal, one of the most interesting effects may be to make us re-evaluate what “normal” means" (http://www.nytimes.com/2010/05/02/magazine/02self-measurement-t.html?pagewanted=7&ref=magazine, accessed 05-07-2010).

View Map + Bookmark Entry

Google Acknowledges that it Collected Wi-Fi Information Along with Cartographic and Imaging Information April 27 – June 10, 2010

"Over the weekend, there was a lot of talk about exactly what information Google Street View cars collect as they drive our streets. While we have talked about the collection of WiFi data a number of times before--and there have been stories published in the press--we thought a refresher FAQ pulling everything together in one place would be useful. This blog also addresses concerns raised by data protection authorities in Germany.

"What information are your cars collecting? 

"We collect the following information--photos, local WiFi network data and 3-D building imagery. This information enables us to build new services, and improve existing ones. Many other companies have been collecting data just like this for as long as, if not longer, than Google.

"♦Photos: so that we can build Street View, our 360 degree street level maps. Photos like these are also being taken by TeleAtlas and NavTeq for Bing maps. In addition, we use this imagery to improve the quality of our maps, for example by using shop, street and traffic signs to refine our local business listings and travel directions;

"♦WiFi network information: which we use to improve location-based services like search and maps. Organizations like the German Fraunhofer Institute and Skyhook already collect this information globally;

"♦and 3-D building imagery: we collect 3D geometry data with low power lasers (similar to those used in retail scanners) which help us improve our maps. NavTeq also collects this information in partnership with Bing. As does TeleAtlas.

"What do you mean when you talk about WiFi network information?

"WiFi networks broadcast information that identifies the network and how that network operates. That includes SSID data (i.e. the network name) and MAC address (a unique number given to a device like a WiFi router).

"Networks also send information to other computers that are using the network, called payload data, but Google does not collect or store payload data.*  

"But doesn’t this information identify people? 

"MAC addresses are a simple hardware ID assigned by the manufacturer. And SSIDs are often just the name of the router manufacturer or ISP with numbers and letters added, though some people do also personalize them. However, we do not collect any information about householders, we cannot identify an individual from the location data Google collects via its Street View cars.  

"Is it, as the German DPA states, illegal to collect WiFi network information? 

"We do not believe it is illegal--this is all publicly broadcast information which is accessible to anyone with a WiFi-enabled device. Companies like Skyhook have been collecting this data cross Europe for longer than Google, as well as organizations like the German Fraunhofer Institute.  

"Why did you not tell the DPAs that you were collecting WiFi network information?

"Given it was unrelated to Street View, that it is accessible to any WiFi-enabled device and that other companies already collect it, we did not think it was necessary. However, it’s clear with hindsight that greater transparency would have been better.  

"Why is Google collecting this data?

"The data which we collect is used to improve Google’s location based services, as well as services provided by the Google Geo Location API. For example, users of Google Maps for Mobile can turn on “My Location” to identify their approximate location based on cell towers and WiFi access points which are visible to their device. Similarly, users of sites like Twitter can use location based services to add a geo location to give greater context to their messages.  

"Can this data be used by third parties? 

"Yes--but the only data which Google discloses to third parties through our Geo Location API is a triangulated geo code, which is an approximate location of the user’s device derived from all location data known about that point. At no point does Google publicly disclose MAC addresses from its database (in contrast with some other providers in Germany and elsewhere).

"Do you publish this information?

"No" (http://googlepolicyeurope.blogspot.com/2010/04/data-collected-by-google-cars.html, accessed 05-23-2012).

On June 9, 2010 Google announced in its Official Blog that it had "mistakenly included code" in its software that collected "samples of payload data" from unencrypted WiFi networks, but not from encrypted WiFi networks. It also announced that in response to requests from the Irish Data Protection Authority it was deleting payload data collected from Irish WiFi networks.

View Map + Bookmark Entry

Using the Twitter Archive for Historical Research April 30, 2010

On April 30, 2010 The New York Times published "When History Is Compiled 140 Characters at a Time," from which I quote:

“ 'Twitter has tens of millions of active users. There is no archive with tens of millions of diaries,' said Daniel J. Cohen, an associate professor of history at George Mason University and co-author of a 2006 book, 'Digital History.' What’s more, he said, 'Twitter is of the moment; it’s where people are the most honest.'  

"Last month, Twitter announced that it would donate its archive of public messages to the Library of Congress, and supply it with continuous updates.  

"Several historians said the bequest had tremendous potential. 'My initial reaction was, ‘When you look at it Tweet by Tweet, it looks like junk,’ said Amy Murrell Taylor, an associate professor of history at the State University of New York, Albany. 'But it could be really valuable if looked through collectively.' Ms. Taylor is working on a book about slave runaways during the Civil War; the project involves mountains of paper documents. 'I don’t have a search engine to sift through it,' she said.  

"The Twitter archive, which was 'born digital,' as archivists say, will be easily searchable by machine — unlike family letters and diaries gathering dust in attics.  

"As a written record, Tweets are very close to the originating thoughts. 'Most of our sources are written after the fact, mediated by memory — sometimes false memory,' Ms. Taylor said. 'And newspapers are mediated by editors. Tweets take you right into the moment in a way that no other sources do. That’s what is so exciting.'  

"Twitter messages preserve witness accounts of an extraordinary variety of events all over the planet. 'In the past, some people were able on site to write about, or sketch, as a witness to an event like the hanging of John Brown,' said William G. Thomas III, a professor of history at the University of Nebraska-Lincoln. 'But that’s a very rare, exceptional historical record.'  

"Ten billion Twitter messages take up little storage space: about five terabytes of data. (A two-terabyte hard drive can be found for less than $150.) And Twitter says the archive will be a bit smaller when it is sent to the library. Before transferring it, the company will remove the messages of users who opted to designate their account 'protected,' so that only people who obtain their explicit permission can follow them.

"A Twitter user can also elect to use a pseudonym and not share any personally identifying information. Twitter does not add identity tags that match its users to real people.  

"Each message is accompanied by some tidbits of supplemental information, like the number of followers that the author had at the time and how many users the author was following. While Mr. Cohen said it would be useful for a historian to know who the followers and the followed are, this information is not included in the Tweet itself.  

"But there’s nothing private about who follows whom among users of Twitter’s unprotected, public accounts. This information is displayed both at Twitter’s own site and in applications developed by third parties whom Twitter welcomes to tap its database.  

"Alexander Macgillivray, Twitter’s general counsel, said, 'From the beginning, Twitter has been a public and open service.' Twitter’s privacy policy states: 'Our services are primarily designed to help you share information with the world. Most of the information you provide to us is information you are asking us to make public.  

"Mr. Macgillivray added, 'That’s why, when we were revising our privacy policy, we toyed with the idea of calling it our ‘public policy.’ ' He said the company would have done so but California law required that it have a 'privacy policy' labeled as such.  

"Even though public Tweets were always intended for everyone’s eyes, the Library of Congress is skittish about stepping anywhere in the vicinity of a controversy. Martha Anderson, director of the National Digital Information Infrastructure and Preservation Program at the library, said, 'There’s concern about privacy issues in the near term and we’re sensitive to these concerns.'  

"The library will embargo messages for six months after their original transmission. If that is not enough to put privacy issues to rest, she said, 'We may have to filter certain things or wait longer to make them available.' The library plans to dole out its access to its Twitter archive only to those whom Ms. Anderson called “qualified researchers.”  

"BUT the library’ s restrictions on access will not matter. Mr. Macgillivray at Twitter said his company would be turning over copies of its public archive to Google, Yahoo and Microsoft, too. These companies already receive instantaneously the stream of current Twitter messages. When the archive of older Tweets is added to their data storehouses, they will have a complete, constantly updated, set, and users won’t encounter a six-month embargo.  

"Google already offers its users Replay, the option of restricting a keyword search only to Tweets and to particular periods. It’s quickly reached from a search results page. (Click on 'Show options,' then 'Updates,' then a particular place on the timeline.)  

"A tool like Google Replay is helpful in focusing on one topic. But it displays only 10 Tweets at a time. To browse 10 billion — let’s see, figuring six seconds for a quick scan of each screen — would require about 190 sleepless years.  

"Mr. Cohen encourages historians to find new tools and methods for mining the 'staggeringly large historical record' of Tweets. This will require a different approach, he said, one that lets go of straightforward 'anecdotal history.' " (http://www.nytimes.com/2010/05/02/business/02digi.html?scp=1&sq=twitter%20+%20history&st=cse, accessed 05-06-2010).

View Map + Bookmark Entry

General Statistics on the U.S. Book Publishing Industry May 6, 2010

"The US book publishing industry consists of about 2,600 companies with combined annual revenue of about $27 billion. Major companies include John Wiley & Sons, McGraw-Hill, Pearson, and Scholastic, as well as publishing units of large media companies such as HarperCollins (owned by News Corp); Random House (owned by Bertelsmann); and Simon & Schuster (owned by CBS). The industry is highly concentrated: the top 50 companies generate about 80 percent of revenue.

"Demand for books is driven by demographics and is largely resistant to economic cycles. The profitability of individual companies depends on product development and marketing. Large publishers have an advantage in bidding for new manuscripts or authors. Small and midsized publishers can succeed if they focus on a specific subject or market.

"Publishers produce books for general reading (adult "trade" books); text, professional, technical, children's, and reference books. Trade books account for 25 percent of the market, textbooks 25 percent, and professional books 20 percent.  "

"About 150,000 new books are published in the US every year; however, most are low-volume products. The number of books produced by major trade publishers and university presses is closer to 40,000" (http://www.businesswire.com/portal/site/home/permalink/?ndmViewId=news_view&newsId=20100506006043&newsLang=en, accessed 05-06-2010).

View Map + Bookmark Entry

Google Introduces a Translation Feature for Google Goggles May 6, 2010

On May 6, 2010 Google announced a translation feature for Google Goggles, an image recognition and search feature available on Android-based mobile devices.

"Here’s how it works:

"Point your phone at a word or phrase. Use the region of interest button to draw a box around specific words Press the shutter button

"If Goggles recognizes the text, it will give you the option to translate

"Press the translate button to select the source and destination languages."

"Today Goggles can read English, French, Italian, German and Spanish and can translate to many more languages. We are hard at work extending our recognition capabilities to other Latin-based languages. Our goal is to eventually read non-Latin languages (such as Chinese, Hindi and Arabic) as well."

View Map + Bookmark Entry

The First Internet Addresses in Non-Latin Characters May 6, 2010

On May 6, 2010 three Mideast countries became the first to get Internet addresses entirely in non-Latin characters. Domain names in Arabic for Egypt, Saudi Arabia and the United Arab Emirates were added to the Internet's master directories following final approval by the Internet Corporation for Assigned Names and Numbers, or ICANN. This was the first major change to the Internet domain name system since its creation in the 1980s.


View Map + Bookmark Entry

The Most Successful Art Forger Ever May 12 – August 22, 2010

From May 12-August 22, 2010 Museum Boijmans Van Beuningen in Rotterdam, Holland, presented ‘Van Meegeren’s Fake Vermeers’— an exhibition of the famous forgeries of Han van Meegeren.

"Van Meegeren craftily exploited art historians’ desire to discover early works by Johannes Vermeer. During a famous court case in which Van Meegeren was accused of Nazi collaboration, he admitted that he had forged old master paintings, including several Vermeers. Museum Boijmans Van Beuningen had acquired one of the fake Vermeers from Van Meegeren. The exhibition explores Van Meegeren’s technique, his masterpieces and his downfall. 

"The exhibition ‘Van Meegeren’s Fake Vermeers’ includes approximately ten forgeries by Han van Meegeren (1889-1947). Most are in the style of Johannes Vermeer, but the works also include forgeries of Frans Hals, Pieter de Hooch and Gerard ter Borch. Van Meegeren’s life as a forger is further illuminated through a documentary film and objects from his studio. A masterpiece In 1937 the director of Museum Boymans, Dirk Hannema, purchased ‘The Supper at Emmaus’ for 540,000 guilders. There was great interest in the painting, which most experts believed to be an early masterpiece by Vermeer. The Rijksmuseum in Amsterdam even offered Vermeer’s ‘The Love Letter’ in exchange for the painting, but Hannema rejected the offer. Museum Boymans exhibited the work as one of the highlights of its collection and art experts praised the work’s high quality. 


"At the end of the Second World War a painting from the Netherlands was found in the collection of the Nazi minister, Hermann Göring. The painting was traced back to Han van Meegeren, who was immediately arrested on suspicion of collaboration. Van Meegeren admitted to having sold the work, but also claimed to have made the painting himself. He had sold Göring a forgery. Van Meegeren’s confession became worldwide news and he was hailed as a hero as ‘the man who swindled Göring’. Meanwhile the art world was thrown into disarray. Van Meegeren demonstrated his forgery techniques to an expert panel and during his trial his forgeries were hung in the courtroom, as can be seen in the documentary film included in the exhibition.


"Van Meegeren’s technique remains exceptional. For his masterpiece ‘The Supper at Emmaus’, Van Meegeren used a genuine seventeenth-century canvas and historical pigments. He bound the pigments with bakelite, which hardened when heated to produce a surface very similar to that of a seventeenth-century painting. This technique, combined with Van Meegeren’s choice of subject matter and composition, was an important factor in convincing so many people of the authenticity of his works. Van Meegeren created the missing link between Vermeer’s early and late works. The exhibition at Museum Boijmans Van Beuningen sheds new light on Van Meegeren’s technique, resulting from new technical research undertaken by the Rijksmuseum" (http://www.artdaily.org/index.asp?int_sec=2∫_new=38022, accessed 05-14-2010).

View Map + Bookmark Entry

Cell Phones Are Now Used More for Data than Speech May 13, 2010

According to The New York Times, in May 2010 people were using their cell phones more for text messaging and data processing than for speech. This should not come as a surprise to anyone with teenage children.

". . . although almost 90 percent of households in the United States now have a cellphone, the growth in voice minutes used by consumers has stagnated, according to government and industry data.  

"This is true even though more households each year are disconnecting their landlines in favor of cellphones.  

"Instead of talking on their cellphones, people are making use of all the extras that iPhones, BlackBerrys and other smartphones were also designed to do — browse the Web, listen to music, watch television, play games and send e-mail and text messages.  

"The number of text messages sent per user increased by nearly 50 percent nationwide last year, according to the CTIA, the wireless industry association. And for the first time in the United States, the amount of data in text, e-mail messages, streaming video, music and other services on mobile devices in 2009 surpassed the amount of voice data in cellphone calls, industry executives and analysts say. 'Originally, talking was the only cellphone application,' said Dan Hesse, chief executive of Sprint Nextel. 'But now it’s less than half of the traffic on mobile networks.'  

"Of course, talking on the cellphone isn’t disappearing entirely. 'Anytime something is sensitive or is something I don’t want to be forwarded, I pick up the phone rather than put it into a tweet or a text,' said Kristen Kulinowski, a 41-year-old chemistry teacher in Houston. And calling is cheaper than ever because of fierce competition among rival wireless networks.  

"But figures from the CTIA show that over the last two years, the average number of voice minutes per user in the United States has fallen (http://www.nytimes.com/2010/05/14/technology/personaltech/14talk.html?hp, accessed 05-14-2010).

View Map + Bookmark Entry

Social Networking Added to Reading Electronic Books June 2010

The "popular highlights" feature of the Amazon Kindle ebook reader available in June 2010 enabled readers to see which portions of books other readers considered noteworthy. It also suggested that Amazon may be collecting this information as possible marketing information for publishers. This feature could  be disabled by Kindle users.

View Map + Bookmark Entry

The First Malware to Spy on and Subvert Industrial Systems June 2010

In June 2010 the Stuxnet computer worm, the first malware that spied on and subverted industrial systems, was discovered. Stuxnet was also the first malware to include a programmable logic controller (PLC) rootkit.

"The worm initially spreads indiscriminately, but includes a highly specialized malware payload that is designed to target only Siemens supervisory control and data acquisition (SCADA) systems that are configured to control and monitor specific industrial processes. Stuxnet infects PLCs by subverting the Step-7 software application that is used to reprogram these devices.

"Different variants of Stuxnet targeted five Iranian organizations, with the probable target widely suspected to be uranium enrichment infrastructure in Iran; Symantec noted in August 2010 that 60% of the infected computers worldwide were in Iran. Siemens stated on 29 November that the worm has not caused any damage to its customers, but the Iran nuclear program, which uses embargoed Siemens equipment procured secretly, has been damaged by Stuxnet. Kaspersky Lab concluded that the sophisticated attack could only have been conducted "with nation-state support". This was further supported by the F-Secure's chief researcher Mikko Hyppönen who commented in a Stuxnet FAQ, 'That's what it would look like, yes'. It has been speculated that Israel and the United States may have been involved. . . .

"Experts believe that Stuxnet required the largest and costliest development effort in malware history. Its many capabilities would have required a team of people to program, in-depth knowledge of industrial processes, and an interest in attacking industrial infrastructure. Eric Byres, who has years of experience maintaining and troubleshooting Siemens systems, told Wired that writing the code would have taken many man-months, if not years. Symantec estimates that the group developing Stuxnet would have consisted of anywhere from five to thirty people, and would have taken six months to prepare. The Guardian, the BBC and The New York Times all claimed that (unnamed) experts studying Stuxnet believe the complexity of the code indicates that only a nation-state would have the capabilities to produce it. The self-destruct and other safeguards within the code imply that a Western government was responsible, with lawyers evaluating the worm's ramifications. Software security expert Bruce Schneier condemned the 2010 news coverage of Stuxnet as hype, however, stating that it was almost entirely based on speculation. But after subsequent research, Schneier stated in 2012 that 'we can now conclusively link Stuxnet to the centrifuge structure at the Natanz nuclear enrichment lab in Iran' " (Wikipedia article on Stuxnet, accessed 05-30-2012).

View Map + Bookmark Entry

Social Networks Take a Machiavellian Approach to Privacy June 2, 2010

On June 2, 2010 privacy experts Chris Hoofnagle and Michael Zimmer published "How to Win Friends and Manipulate People" in The Huffington Post:

"Information-intensive companies such as Facebook follow a Machiavellian public relations strategy when introducing new programs. Without warning, these companies introduce 'features' that invariably result in more information being shared with advertisers, wait for a negative reaction, and then announce minimal changes without affecting the new feature. They explain away the fuss with public relations spin: 'we are listening to our users,' 'we didn't get it right this time,' 'we look forward to your feedback,' etc. This strategy works, time and time again.

"Facebook's recent troubles illustrate this neatly. Reacting to rising criticism of recent changes that affected hundreds of millions of users -- by forcing certain profile information to be permanently public and automatically enrolling all users in a new 'instant personalization' service that shares profile information with external websites -- Facebook CEO Mark Zuckerberg acknowledged in a Washington Post op-ed that his company had 'missed the mark in providing users the ability to control their privacy on the popular site. Noting, 'Whenever we make a change, we try to apply the lessons we've learned along the way,' Zuckerberg promised that simpler privacy controls would be forthcoming. Tweaks to Facebook's privacy settings were announced, but instant personalization remained on by default.

"The most recent set of Facebook snafus are the direct descendants of prior decisions taken by the social networking giant. In 2006, Facebook activated News Feed, where users' actions were automatically posted on friends' pages, causing many to object because it made it too easy for other people to track down individual activities. A year later, Facebook launched Beacon, an advertising program that announced users' purchases at other websites on Facebook, often without explicit consent.

"In all these cases, Facebook follows the pattern of taking two steps forward with an aggressive misuse of personal information and creeping back the slightest bit once the criticisms emerged. Each time, Facebook promised users that "'we will keep listening,' and artfully reminding us that all they really want to do is make "the world more open and connected."

"These events represent the perfection of privacy public relations. Guided by earlier battles fought by tobacco and drug companies, information-intensive firms have learned how to use rhetoric to distract the public while successfully implementing new programs. They are the Machiavellis of privacy.

"Privacy PR results in 'blowforward.' Typically, entities that behave transgressively experience blowback: they lose market share or some power they once had. But platforms such as Google and Facebook are so compelling that users will not defect, even as the companies change settings. Facebook installed a window onto users' profiles and replaced it with a one-way mirror in response to the controversy. The situation leaves the user responsible for perceiving the observation room behind the mirror and to shutter it.

"Some glibly ascribe this debate to 'young people not caring about privacy.' But this point is inaccurate and confuses the issue. Users have always been able to make profiles more public. We are describing a situation where the service provider itself makes the changes, thus pushing them towards greater public exposure.

"Further, both qualitative and quantitative research shows that Americans of all ages care about privacy. Interestingly, the youngest users of social networking services are the least trustful of them and most likely to take privacy-preserving steps, according to a new report by the Pew Internet & American Life Project.

"This distrust relates to the inherent motivations of social networking services. Relying on a business model that depends on unfettered, open access to as much personal information as possible -- all in the interest of serving advertising -- social networking services design their systems to maximize sharing. Any privacy settings provided tend to be minimal, hidden, and difficult to use. Too often, new features don't have any meaningful privacy controls until users protest, and their reactions are "listened to."

"Some data-intensive firms are seeking a broad cultural change that places all personal information out in the open and in the hands of companies for whatever uses they see fit. Many of us might welcome such a change, but if we fail to recognize the manipulation used to bring about this change, we will all be included in Facebook's utopia."

View Map + Bookmark Entry

Flipboard, "Your Personalized, Social Magazine" July 2010

In July 2010 the social-network aggregation, magazine-format application software Flipboard was launched for the iPad as "your personalized, social magazine" by Mike McCue and Evan Doll in Palo Alto, California.

The magazine collected the recommendations of user's friends on social network sites, and other websites and presented them in magazine format on the iPad. The application was designed specifically for the iPad's large touch screen and allowed users to "flip" through their social networking feeds and feeds from websites that partnered with Flipboard. The product was later ported to the iPhone.

View Map + Bookmark Entry

Spam Declines from 90% of Email Traffic to Only 72.9% July 2010 – June 2011

"The high water mark for spam was reached in July 2010 when approximately 230 billion spam messages were in circulation each day, accounting for 90% of all email traffic. This has now declined to 39.2 billion messages per day, accounting for only 72.9% of all email. The question is why?

"There are many different factors that appear to be working together to make sending spam more difficult and less profitable for criminal gangs. In September 2010 the Spamit web site announced that it was ceasing operation due to “numerous negative events”. Spamit provided affiliate marketing services, allegedly helping to pay spammers for promoting many spam advertised web sites, notably the “Canadian Pharmacy” operation which was one of the most spam advertised brands.  

"The demise of Spamit corresponded with a large drop in spam volumes, from approximately 100 to 75 billion spam per day from the end of September to mid November 2010. It is not known exactly what the “negative events” are referred to by Spamit, but it is thought that these may be associated with increased attention by regulatory bodies and law enforcement in the activities of the group.  

"Nevertheless, spam had been dropping before this event. It may be that increased surveillance of spammers by authorities had pursuaded spammers to seek other economic activities legitimate or illicit. Or it may be that the peak of spamming in July 2010 was unsustainable for the spamming industry, there just weren't the number of customers to warrant such a high level of activity.

"A few months later, in December 2010, the largest botnet at the time, Rustock suddenly stopped sending spam. At the time, this single botnet was responsible for 47.5% of all spam, sending approximately 44.1 billion spams per day. The botnet soon resumed its activity in January in 2011, but in March it ceased operation entirely and was dismantled due to concerted action by a partnership of industry and law enforcement. Since then, the other botnets have not significantly increased their spamming activity to maintain the same total levels of spam. Indeed, one of the largest botnets, Bagle, has decreased the amount of spam that it sends from 8.31 billion spam per day in March 2011 to 1.60 billion spam per day in June 2011.

"This decrease in spamming activity may be evidence that increased investigation of the spam underworld has both disrupted the affiliate networks, such as Spamit, that pay for spam campaigns, and led to botnet controllers looking to keep their heads down to avoid the attention of the legal authorities. Interestingly, during the same period there has been a reported rise in distributed denial of service attacks, which can also be undertaken by botnets. It may be that the botnet owners are looking to other modes of operation to maintain their revenue, while moving away from the now less profitable and more risky business of spamming" (http://www.symantec.com/connect/blogs/why-my-email-went, accessed 07-04-2011).

View Map + Bookmark Entry

"The First Image of the Entire Universe" July 5, 2010

On July 5, 2010, from roughly 1,000,000 miles out in space, the European Space Agency's Planck space observatory took the first photograph of the entire sky, widely reported as the first image of the entire universe.

View Map + Bookmark Entry

Stanford's New Engineering Library Houses Few Physical Books July 8, 2010

"The periodical shelves at Stanford University’s Engineering Library are nearly bare. Library chief Helen Josephine says that in the past five years, most engineering periodicals have been moved online, making their print versions pretty obsolete -- and books aren't doing much better.  

"In 2005, when the university realized it was running out space for its growing collection of 80,000 engineering books, administrators decided to build a new library. But instead of creating more space for books, they chose to create less.  

"The new library is set to open in August with 10,000 engineering books on the shelves -- a decrease of more than 85 percent from the old library. Stanford library director Michael Keller says the librarians determined which books to keep on the shelf by looking at how frequently a book was checked out. They found that the vast majority of the collection hadn't been taken off the shelf in five years.  

"Keller expects that, eventually, there won't be any books on the shelves at all.  'As the world turns more and more, the items that appeared in physical form in previous decades and centuries are appearing in digital form,' he says.  

"Given the nature of engineering, that actually comes in handy. Engineering uses some basic formulas but is generally a rapidly changing field -- particularly in specialties such as software and bioengineering. Traditional textbooks have rarely been able to keep up.  

"Jim Plummer, dean of Stanford's School of Engineering, says that's why his faculty is increasingly using e-books.  

" 'It allows our faculty to change examples,' he says, 'to put in new homework problems ... and lectures and things like that in almost a real-time way.'

"For the moment, the Engineering Library is the only Stanford library that's cutting back on books. But Keller says he can see what's coming down the road by simply looking at the current crop of Stanford students.

" 'They write their papers online, and they read articles online, and many, many, many of them read chapters and books online,' he says. 'I can see in this population of students behaviors that clearly indicate where this is all going.'

"And while it's still rare among American libraries to get rid of such a large amount of books, it's clear that many are starting to lay the groundwork for a different future. According to a survey by the Association of Research Libraries, American libraries are spending more of their money on electronic resources and less on books" (http://news.opb.org/article/8204-stanford_ushers_in_the_age_of_bookless_libraries/, accessed 07-10-2010).

View Map + Bookmark Entry

For the First Time E-books Outsell Hardcover Books on Amazon.com July 19, 2010

During the months of April, May, and June 2010, sales of e-books exceeded sales of hardcover physical books at Amazon.com. "In that time, Amazon said, it sold 143 Kindle books for every 100 hardcover books, including hardcovers for which there is no Kindle edition."

The New York Times online, which reported this information, did not compare Amazon's sales of e-books with its sales of paperback books during the same period, but indicated that "paperback sales are thought to still outnumber e-books."

"Book lovers mourning the demise of hardcover books with their heft and their musty smell need a reality check, said Mike Shatzkin, founder and chief executive of the Idea Logical Company, which advises book publishers on digital change. 'This was a day that was going to come, a day that had to come,' he said. He predicts that within a decade, fewer than 25 percent of all books sold will be print versions.  

"Still, the hardcover book is far from extinct. Industrywide sales are up 22 percent this year, according to the American Publishers Association."

The shift at Amazon is "astonishing when you consider that we’ve been selling hardcover books for 15 years, and Kindle books for 33 months," Amazon's chief executive, Jeffrey P. Bezos, said in a news release, published in Amazon.com's Media Room.

View Map + Bookmark Entry

The First Traditional Humanities Journal to Try "Open" Peer Review July 26, 2010

For its special issue, "Shakespeare and the New Media," the scholarly humanities journal Shakespeare Quarterly published by the Folger Shakespeare Library, Washington, D.C., offered contributors the chance to take part in a partially open peer-review process conducted by MediaCommonspress.  

"Authors could opt to post drafts of their articles online, open them up for anyone to comment on, and then revise accordingly. The editors would make the final call about what to publish (hence the "partially open" label). As far as the editors know, it's the first time a traditional humanities journal has tried out a version of crowd-sourcing in lieu of double-blind review" (http://chronicle.com/article/Leading-Humanities-Journal/123696/, accessed 08-24-2010).

View Map + Bookmark Entry

Wikileaks Installs an "Insurance File" July 29, 2010

"On 29 July 2010 WikiLeaks added a 1.4 GB "Insurance File" to the Afghan War Diary page. The file is AES encrypted and has been speculated to serve as insurance in case the WikiLeaks website or its spokesman Julian Assange are incapacitated, upon which the passphrase could be published, similar to the concept of a dead man's switch. Following the first few days' release of the United States diplomatic cables starting 28 November 2010, the US television broadcaster CBS predicted that 'If anything happens to Assange or the website, a key will go out to unlock the files. There would then be no way to stop the information from spreading like wildfire because so many people already have copies.' CBS correspondent Declan McCullagh stated, 'What most folks are speculating is that the insurance file contains unreleased information that would be especially embarrassing to the US government if it were released' "(Wikipedia article on Wikileaks, accessed 12-08-2010).

View Map + Bookmark Entry

Data on Mobile Networks is Doubling Each Year August 1, 2010

"The volume of data on the world’s mobile networks is doubling each year, according to Cisco Systems, the U.S. maker of routers and networking equipment. By 2014, it estimates, the monthly data flow will increase about sixteenfold, to 3.6 billion gigabytes from 220.1 million" (http://www.nytimes.com/2010/08/02/technology/02iht-NETPIPE02.html?src=un&feedurl=http://json8.nytimes.com/pages/business/global/index.jsonp, accessed 08-01-2010)

View Map + Bookmark Entry

"Every Two Days We Create as Much Information as We Did up to 2003" August 4, 2010

"Today at the Techonomy conference in Lake Tahoe, CA, the first panel featured Google CEO Eric Schmidt. As moderator David Kirkpatrick was introducing him, he rattled off a massive stat.

"Every two days now we create as much information as we did from the dawn of civilization up until 2003, according to Schmidt. That’s something like five exabytes of data, he says.  

"Let me repeat that: we create as much information in two days now as we did from the dawn of man through 2003.

“ 'The real issue is user-generated content,' Schmidt said. He noted that pictures, instant messages, and tweets all add to this.  

"Naturally, all of this information helps Google. But he cautioned that just because companies like his can do all sorts of things with this information, the more pressing question now is if they should. Schmidt noted that while technology is neutral, he doesn’t believe people are ready for what’s coming.  

“ 'I spend most of my time assuming the world is not ready for the technology revolution that will be happening to them soon,' Schmidt said" (http://techcrunch.com/2010/08/04/schmidt-data/, accessed 12-19-2012).

View Map + Bookmark Entry

Google Calculates That There are "129,864,880" Different Books in the World August 5, 2010

Using an algorithm that combined book information from multiple sources, including libraries, WorldCat (OCLC), national union catalogs, and commercial providers, Google estimated that there were "129,864,880" different books in the world. This number was, of course, constantly increasing.

Google's definition was inexact for various reasons including the detail that they "count hardcover and paperback books produced from the same text twice, but treat several pamphlets bound together by a library as a single book."

This information comes from Google's Inside Google Books blog, August 5, 2010, which provided other interesting tidbits such as:

"We still have to exclude non-books such as microforms (8 million), audio recordings (4.5 million), videos (2 million), maps (another 2 million), t-shirts with ISBNs (about one thousand), turkey probes (1, added to a library catalog as an April Fools joke), and other items for which we receive catalog entries."

"Our handling of serials is still imperfect. Serials cataloging practices vary widely across institutions. The volume descriptions are free-form and are often entered as an afterthought. For example, “volume 325, number 6”, “no. 325 sec. 6”, and “V325NO6” all describe the same bound volume. The same can be said for the vast holdings of the government documents in US libraries. At the moment we estimate that we know of 16 million bound serial and government document volumes. This number is likely to rise as our disambiguating algorithms become smarter.

"After we exclude serials, we can finally count all the books in the world. There are 129,864,880 of them. At least until Sunday."

View Map + Bookmark Entry

The 2010 Social Networking "World Map" August 5, 2010

Ethan Bloch, founder of Flowtown.com, created the 2010 Social Networking Map.

This was intended as a tribute to XKCD’s ‘Map of Online Communities’ published in 2007. The differences between the two maps, reflective of extremely rapid change in the social networking world, were dramatic!

View Map + Bookmark Entry

Google Introduces "Google Instant" September 8, 2010

At a Google press event at the San Francisco Museum of Modern Art, Google founder Sergey Brin and Google vice president for search products and user experience Marissa Mayer introduced Google Instant,

"which predicts Internet search queries and shows results as soon as someone begins to type, adjusting the results as each successive letter is typed."

"Google, which already handles more than a billion searches a day and has a billion users a week, had to figure out how to manage the load when suddenly each letter typed was a separate search query. The solution includes storing frequent searches and sending common ones, like 'Barack,' back more quickly than ones that are nearly impossible to predict, like 'Bill.' " (http://www.nytimes.com/2010/09/09/technology/techspecial/09google.html?hpw, accessed 09-09-210).

View Map + Bookmark Entry

eBook Edition Released Prior to Hardcover Edition September 8, 2010

HarperCollins publishers released the ebook edition of Deepak Chopra's fictionalized biography of the Prophet Muhammad, entitled Muhammad, prior to the hardcover release on September 21.

This was the first time that HarperCollins released an ebook edition prior to a print edition.

View Map + Bookmark Entry

The First Recording of Ancient Asian Melodies September 15, 2010

The Traditional Crossroads label issued Immeasurable Light, the first recording of ancient (8th-12th centuries) Asian melodies "transnotated" from rare musical manuscripts.

The recording was a partnership between Wu Man, a renowned virtuoso of the pipa, a 4-string lute-like Asian instrument for which the ancient melodies were written, the Kronos Quartet, and Rembrandt F. Wolpert, a professor of ethnomusicology at the Center for the Study of Early Asian and Middle Eastern Musics at Fulbright College, University of Arkansas.

"The ancient manuscripts, written in Japanese and Chinese calligraphy, show only the tablature or finger positions for how to play the instrument, but do not include the actual pitches.  

"Professor Wolpert (who took the Chinese name 'Wu Ren Fan'—meaning a sailboat free from tasks) gave himself the job of musical archeologist. Joining 'translate' and 'notate,' he coined the term 'transnotate' to describe a unique process of translating these ancient Chinese and Japanese manuscripts written in characters into Western musical notation. He used a computer program and a musical grammatical system that he designed himself.

"Pipa virtuoso Wu Man then played the translated music and the results for the listener are to hear, as nearly as possible, those ancient melodies. The performance was further rounded out by the Kronos Quartet, Wu’s long-time collaborating team" (http://www.theepochtimes.com/n2/content/view/46618/, accessed 11-28-2010).

View Map + Bookmark Entry

Possibly the First Academic Library with No Physical Books September 19, 2010

On September 19, 2010 Inside Higher Ed reported that the University of Texas at San Antonio's Applied Engineering and Technology Library contained no physical books.

"The idea of a libraries with no bound books has been a recurring theme in conversations about the future of academe for a long time, and it has become common practice for academic libraries to store rarely used volumes in off-campus facilities. But there are few, if any, examples of libraries that actually have zero bound books in them.

"Some libraries, such as the main one at the University of California at Merced, and the engineering library at Stanford University, have drastically reduced the number of print volumes they keep in the actual library building, choosing to focus on beefing up their electronic resources. In fact, some overenthusiastic headline writers at one point dubbed Stanford’s library 'bookless.' But that is 'a vision statement, not a point of fact,' says Andrew Herkovic, the director of communications for Stanford’s libraries.  

"San Antonio says it now has the first actual bookless library. Students who stretch out in the library’s ample study spaces — which dominate the floor plan of the new building — and log on to its resource network using their laptops or the library’s 10 public computers will be able to access 425,000 e-books and 18,000 electronic journal articles. Librarians will have offices there and will be available for consultations" (http://www.insidehighered.com/news/2010/09/17/libraries, accessed 09-19-2010).

View Map + Bookmark Entry

Filed under: Book History, Libraries

NCBI Introduces Images, a Database of More than 2.5 Million Images in Biomedical Literature October 2010

In October 2010 the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) at the National Institutes of Health (NIH), introduced Images, an online database of more than 2.5 million images and figures from medical and life sciences journals. 

View Map + Bookmark Entry

Instagram is Founded October 2010 – December 17, 2012

In October 2010 Kevin Systrom and Mike Krieger launched Instagram, an online photo-sharing and social networking service that enabled users to take a picture, apply a digital filter to it, and share it on a variety of networking services, including Instagram's own. Instagram was purchased in April 2012 by Facebook for approximately $1 billion in cash and stock. After regulatory approval the deal closed in September 2012, by which time Instagram had over 100 million users. 

"On December 17, 2012, Instagram updated its Terms of Service to allow Instagram the right to sell users' photos to third parties without notification or compensation after January 16, 2013. The criticism from privacy advocates, consumers and even National Geographic which suspended its Instagram account, prompted Instagram to issue a statement retracting the controversial terms. Instagram is currently working on developing new language to replace the disputed terms of use" (Wikipedia article on Instagram, accessed 12-22-2012).

View Map + Bookmark Entry

"The Social Network": The Origins of Facebook October 1, 2010

In October 2010 the drama film The Social Network, based on Ben Mezrich's book The Accidental Billionaires: The Founding of Facebook, a Tale of Sex, Money, Genius, and Betrayal, was released by Columbia Pictures.

The book was adapted for the screen by Aaron Sorkin and directed by David Fincher. Jesse Eisenberg portrayed the founder of Facebook, Mark Zuckerberg, to considerable critical acclaim.

Zuckerberg has been widely acknowledged as a programming prodigy. The film portrays him not only that way, but as so focused on programming, and so insensitive to other people's feelings, as to be almost autistic. It is hard to imagine that anyone as narrowly focused as the Zuckerberg of the film could have understood social nuances well enough to build Facebook into the world's top social media site. The Wikipedia article on Zuckerberg indicates that he is more well-rounded than the film's characterization, with a strong background in classics and a fondness for quoting Greek and Latin literature, especially epic poetry. 

♦ On January 30, 2010 Jesse Eisenberg and Mark Zuckerberg briefly appeared together on Saturday Night Live: 

View Map + Bookmark Entry

Google Books Scanned More than 15 Million Books in 6 Years and More than 30 Million in 9 Years October 14, 2010 – April 2013

In a blog post entitled "On the Future of Books," published October 14, 2010, James Crawford, engineering director of Google Books, reported that Google Books had scanned, and made searchable, more than 15 million books from more than 100 countries in over 400 languages as part of the Google Books project initiated in 2004.

By April 2013 Google Books had scanned "more than 30 million books," doubling the size of the project in less than four years.

View Map + Bookmark Entry

The First Fragment of Contemporary Classical Music Composed by a Computer in its Own Style October 15, 2010

On October 15, 2010 the Iamus computer cluster developed by Francisco Vico and associates at the Universidad de Málaga, using the Melomics system, composed Opus One. This composition was arguably the first fragment of professional contemporary classical music ever composed by a computer in its own style, rather than emulating the style of existing composers.

"Melomics (derived from the genomics of melodies) is a propietary computational system for the automatic composition of music (with no human intervention), based on bioinspired methods and commercialized by Melomics Media" (Wikipedia article on Melomics, accessed 11-13-2013).

View Map + Bookmark Entry

Columbia University Opens the Tow Center for Digital Journalism October 19, 2010

On October 19, 2010 the Tow Center for Digital Journalism officially opened at Columbia Journalism School, reflecting the emergence of the most significant new journalistic medium since television and the resulting changes in the news industry. The first director of the Tow Center was Emily Bell, who had previously led the development of digital content at TheGuardian.com.

The Tow Center also helps oversee the dual-degree Master of Science Program in Computer Science and Journalism offered in conjunction with Columbia’s Fu Foundation School of Engineering and Applied Science. These students receive highly specialized training in the digital environment, enabling them to develop technical and editorial skills in all aspects of computer-supported news gathering and digital media production.

View Map + Bookmark Entry

Paperbecause.com Makes the Case for Using Paper October 27, 2010

In response to the growth of digital information and the widely felt desire to conserve natural resources, the website paperbecause.com advertised the practical value and ecological properties of paper:

"All over the world, people use paper every day. From eco-friendly food packaging to recyclable newspapers and magazines, to office paper, printing paper and tissue paper, most people can’t get through the day without it. Paper makes our world better. And when we make the right paper choices, we get the chance to return the favor.

"So, why is it that so many people seem to have turned on paper? Through misleading environmental claims like deforestation, excessive energy consumption and crowded landfill sites, it’s been the source of recent bad publicity. However, with a little more information, it soon becomes clear that paper isn’t the cause of environmental destruction. In fact, it just may offer a solution. So we decided to clear up the confusion and turn a page in the way people see paper. Below are a few key reasons why paper is good — and why the right paper is even better.  

"For starters, making paper doesn’t destroy forests. In fact, the forest products industry plants more than 1.7 million trees per day. When you think about it, it just makes sense. After all, if we don’t ensure a steady supply of raw materials, how can we continue to provide the products that so many people rely on to communicate and store information each and every day? That’s why, for every tree we harvest, several more are planted or naturally regenerated in its place. And it’s not just about sustaining paper. It’s also about sustaining forest life. Domtar harvests trees from forests that are certified by the Forest Stewardship Council™ (FSC®) and the Sustainable Forestry Initiative® (SFI), ensuring environmental responsibility throughout the life cycle of our products. Domtar EarthChoice® papers are also supported by the Rainforest Alliance and World Wildlife Fund (WWF) Canada, and we’re proud to play a part in ensuring our forests — and the wildlife within them — are well taken care of, for years to come.  

"Paper is portable, secure, consistent and permanent. It’s 100% recyclable. And the people who make it have made great strides in reducing overall energy consumption and protecting natural forests. Maybe that’s why there are nearly 750 million acres of forests in the U.S. — about the same as 100 years ago. Additionally, annual net growth of U.S. forests is 36 percent higher than the volume of annual tree removals, and total forest cover in the U.S. and Canada has basically remained the same from 1990 to 2005.1 By planting new seedlings, we help rid the atmosphere of carbon dioxide, and replace it with fresh oxygen. As young trees grow, they absorb CO2 from the atmosphere. And as a wood-based product, paper continues to store carbon throughout its lifetime. Planting new trees can also combat global warming. For every ton of wood a forest produces, it removes 1.47 tons of CO2 from the air and replaces it with 1.07 ton of oxygen.2

"Like most industrial conversion processes, making paper does consume a lot of energy. However, Domtar and many other pulp and paper companies have made a serious commitment to reduced energy consumption and energy efficiency. In 2009, Domtar used an average of 78% renewable energy at its mill operations. Burning fossil fuels, such as natural gas, oil and coal is a major source of greenhouse gas (GHG) emissions, but the pulp and paper industry largely uses renewable energy sources that are considered carbon neutral to generate steam and electricity. By making paper using more renewable energy and increasing energy efficiency, Domtar mills continue to reduce their carbon footprint.  

"Paper has often been accused of taking up excessive landfill space. However, thanks to the success of neighborhood curbside recycling programs, increased community awareness and individual activism, recycling rates are now at an all-time high. In 2009, over 63 percent of the paper consumed in the U.S. was recovered for recycling.3 To put it in perspective, the recovery rate for metal is 36 percent; glass is 22 percent; and plastic is only 7 percent" (http://www.paperbecause.com/Paper-is-Sustainable/Paper-is-Not-Bad, accessed 10-27-2010).

View Map + Bookmark Entry

3G Wireless Telephony in Mt. Everest Region of Nepal October 29, 2010

In October 2010 wireless provider Ncell of Kathmandu, Nepal, launched 3G wireless telephony service in the Mount Everest area.

"This, of course, is not just so adventure seekers can live-tweet their ascent to the top of Everest. The 3G roll out will also provide residents in the area with much-needed access to advanced telecom services. By the end of 2011, Ncell will provide mobile coverage to more than 90 percent of the people in Nepal, according to TeliaSonera, which owns Ncell.

"The company's 3G base station is located at an altitude of about 17,000 feet and is the highest in the world, TeliaSonera said. It will enable locals, climbers, and trekkers to surf the Web, send video clips and e-mails, and make calls at rates cheaper than satellite phones" (http://www.pcmag.com/article2/0,2817,2371750,00.asp)

View Map + Bookmark Entry

8,900,000 Robots are Operating World Wide November 2010

In November 2010 there were approximately 8,900,000 robots operating in the world, according to Bloomberg Businessweek.

View Map + Bookmark Entry

Filed under: Robotics / Automata

The First MRI Video of Childbirth November 2010 – June 2012

In November 2010 the first video of a woman giving birth in an open MRI machine was taken at the Charité Hospital in Berlin, Germany. The team, led by Christian Bamberg, M.D., first published the results as "Human birth observed in real-time open magnetic resonance imaging" in the American Journal of Obstetrics & Gynecology in January 2012. Supplementary material, including the video of the final 45 minutes of labor, was published in Vol. 206, pp. 505.e1-505.e6, June 2012.

View Map + Bookmark Entry

Kinect for Xbox is Introduced November 4, 2010

On November 4, 2010 Microsoft introduced Kinect, a natural user interface providing full-body 3D motion capture, facial recognition, and voice recognition, for the Xbox 360 video game platform. The device featured an "RGB camera, depth sensor and multi-array microphone running proprietary software."  It enabled users to control and interact with the Xbox 360 without the need to touch a game controller.

"The system tracks 48 parts of your body in three-dimensional space. It doesn’t just know where your hand is, like the Wii. No, the Kinect tracks the motion of your head, hands, torso, waist, knees, feet and so on" (http://www.nytimes.com/2010/11/04/technology/personaltech/04pogue.html?scp=1&sq=kinect&st=cse, accessed 11-04-2010).

View Map + Bookmark Entry

Towards a New Digital Legal Information Environment November 9, 2010

On November 9, 2010 John G. Palfrey, Henry N. Ess III Professor of Law, Vice Dean, Library and Information Resources, Faculty Co-Director, Berkman Center for Internet and Society at Harvard Law School, proposed a new digital legal information environment for the future.

In a lecture summary published in his blog Palfrey wrote: 

"I propose a path toward a new legal information environment that is predominantly digital in nature. This new era grows out of a long history of growth and change in the publishing of legal information over more than nine hundred years, from the early manuscripts at the roots of English common law in the reign of the Angevin King Henry II; through the early printed treatises of Littleton and Coke in the fifteenth, sixteenth, and seventeenth centuries, (including those in the extraordinary collection of Henry N. Ess III); to the systemic improvements introduced by Blackstone in the late eighteenth century; to the modern period, ushered in by Langdell and West at the end of the nineteenth century. Now, we are embarking upon an equally ambitious venture to remake the legal information environment for the twenty-first century, in the digital era.  

"We should learn from advances in cloud computing, the digital naming systems, and youth media practices, as well as classical modes of librarianship, as we envision – and, together, build – a new system for recording, indexing, writing about, and teaching what we mean by the law. A new legal information environment, drawing comprehensively from contemporary technology, can improve access to justice by the traditionally disadvantaged, including persons with disabilities; enhance democracy; promote innovation and creativity in scholarship and teaching; and promote economic development. This new legal information architecture must be grounded in a reconceptualization of the public sector’s role and draw in private parties, such as Google, Amazon, Westlaw, and LexisNexis, as key intermediaries to legal information.  

"This new information environment will have unintended – and sometimes negative – consequences, too. This trajectory toward openness is likely to change the way that both professionals and the public view the law and the process of lawmaking. Hierarchies between those with specialized knowledge and power and those without will continue its erosion. Lawyers will have to rely upon an increasingly broad range of skills, rather than serving as gatekeepers to information, to command high wages, just as new gatekeepers emerge to play increasingly important roles in the legal process. The widespread availability of well-indexed digital copies of legal work-products will also affect the ways in which lawmakers of all types think and speak in ways that are hard to anticipate. One indirect effect of these changes, for instance, may be a greater receptivity on the part of lawmakers to calls for substantive information privacy rules for individuals in a digital age.  

"An effective new system will not emerge on its own; the digital environment, like the physical, is a built environment. As lawyers, teachers, researchers, and librarians, we share an interest in the way in which legal information is created, stored, accessed, manipulated, and preserved over the long term. We will have to work together to overcome several stumbling blocks, such as state-level assertions of copyright. As collaborators, we could design and develop it together over the next decade or so. The net result — if we get it right — will be improvements in the way we teach and learn about the law and how the system of justice functions" (http://blogs.law.harvard.edu/palfrey/2010/11/09/henry-n-ess-iii-chair-lecture-notes/, accessed 12-10-2010).

View Map + Bookmark Entry

Apple 1 Computers Sell for $210,000 in 2010, for $671,400 in 2013, for $905,000 and $365,000 in 2014 November 23, 2010 – October 22, 2014

An original Apple 1 personal computer, in excellent condition but with a few later modifications, sold for 110,000 pounds ($174,000) hammer at a Christie's book and manuscript auction in London (Christie's sale 7882, lot 65).

Associated Press reported that the purchaser was businessman and collector Marco Boglione of Torino, Italy, who bid by phone. His total cost came to 133,250 pounds or about $210,000 after the buyer's premium. Prior to the auction, Christie's estimated the computer would sell for between $160,000-$240,000. When it was released in 1976, the Apple I sold for $666.66.

Only about 200 Apple 1's were built, of which perhaps "30 to 50" remain in existence. The auctioned example came in its original box with a signed letter from Apple cofounder Steve Jobs.

Apple cofounder Steve Wozniak, who hand-built each of the Apple 1's, attended the auction, and offered to autograph the computer.  

See also: http://www.mercurynews.com/news/ci_16695428?source=rss&nclick_check=1, accessed 11-23-2010.

At Sotheby's in 2012 another Apple 1 sold for $374,500. In November 2012 still another Apple 1 sold for $640,000 at Auction Team Breker in Cologne, Germany.

On May 25, 2013 Uwe Breker auctioned another Apple 1 for $671,400.

On October 22, 2014 Bonhams in New York sold another Apple 1 for $905,000. The buyer was the Henry Ford Museum in Dearborn, Michigan. "In addition to the beautifully intact motherboard, this Apple-1 comes with a vintage keyboard with pre-7400 series military spec chips, a vintage Sanyo monitor, a custom vintage power supply in wooden box, as well as two vintage tape-decks. The lot additionally includes ephemera from the Cincinnati AppleSiders such as their first newsletter "Poke-Apple" from February of 1979 and a video recording of Steve Wozniak's keynote speech at the 1980 'Applevention.' "

On December 11, 2014 Christie's in New York offered the Ricketts Apple-1 Personal Computer in an online auction. Named after its first owner, Charles Ricketts, this example was the only known surviving Apple-1 documented to have been sold directly by Steve Jobs to an individual from his parents’ garage.

"23 years after Ricketts bought the Apple-1 from Jobs in Los Altos, it was acquired by Bruce Waldack, a freshly minted entrepreneur who’d just sold his company DigitalNation.  The Ricketts Apple-1 was auctioned at a sheriff’s sale of Waldack’s property at a self-storage facility in Virginia in 2004, and won by the present consigner, the American collector, Bob Luther.

  • The Ricketts Apple-1 is fully operational, having been serviced and started by Apple-1 expert Corey Cohen in October 2014. Mr. Cohen ran the standard original software program, Microsoft BASIC, and also an original Apple-1 Star Trek game in order to test the machine.
  • The computer will be sold with the cancelled check from the original garage purchase on July 27, 1976 made out to Apple Computer by Charles Ricketts for $600, which Ricketts later labeled as “Purchased July 1976 from Steve Jobs in his parents’ garage in Los Altos”. 
  • A second cancelled check for $193 from August 5, 1976 is labeled “Software NA Programmed by Steve Jobs August 1976”. Although Jobs is not usually thought of as undertaking much of the programming himself, many accounts of the period place him in the middle of the action, soldering circuits and clearly making crucial adjustments for close customers, as in this case.
  • These checks were later used as part of the evidence for the City of Los Altos to designate the Jobs family home at 2066 Crist Drive as a Historic Resource, eligible for listing on the National Register of Historic Places, and copies can be found in the Apple Computer archives at Stanford University Libraries."

The price realized was $365,000, which was, of course, disappointing compared to the much higher price realized at Bonhams only two months earlier.

View Map + Bookmark Entry

$1,300,000,000 Verdict in Software Copyright Infringement Suit Partially Vacated November 23, 2010 – September 1, 2011

In U.S. Federal Court in Oakland, California, Oracle Corporation, based in Redwood Shores, California, won a $1,300,000,000 copyright infringement judgment against SAP AG, headquartered in Walldorf, Germany.

The judgment—an indication of the size and scale of the software industry— was a result of a lawsuit filed by Oracle in 2007 claiming that a unit of SAP U.S. made hundreds of thousands of illegal downloads and several thousand copies of Oracle’s software to avoid paying licensing fees, and in an attempt to steal customers. 

"The verdict, which came after one day of deliberations, is the biggest ever for copyright infringement and the largest U.S. jury award of 2010, according to Bloomberg data. The award is about equal to SAP’s forecasted net income for the fourth quarter, excluding some costs, according to the average estimate of analysts surveyed by Bloomberg. . . .

"The verdict is the 23rd-biggest jury award of all time, according to Bloomberg data. The largest jury award in a copyright-infringement case previously was $136 million verdict by a Los Angeles jury in 2002 in a Recording Industry Association of America lawsuit against Media Group Inc. for copying and distributing 1,500 songs by artists including Elvis Presley, Madonna and James Brown, according to Bloomberg data" (http://www.businessweek.com/news/2010-11-24/oracle-wins-1-3-billion-from-sap-in-downloading-case.html, accessed 11-24-2010).

On July 13, 2011, SAP filed a motion seeking judgment that actual damages should not be based on hypothetical licenses, and for a new trial for the amount of damages.

"On September 1, 2011, U.S. District Judge Phyllis Hamilton granted the judgment as a matter of law on the hypothetical license damages, and vacated the $1.3 billion award amount. In her ruling Judge Hamilton stated:

" 'Oracle’s suggestion – that upon proof of infringement, copyright plaintiffs are automatically entitled to seek “hypothetical” license damages because they are presumed to have suffered harm in the form of lost license fees – has no support in the law.'

"SAP's motion for a new trial was granted, conditioned on Oracle rejecting a remittitur of $272 million, the 'maximum amount of lost profits and infringer’s profits sustainable by the proof.' Judge Hamilton further stated:

" 'Determining a hypothetical license price requires an 'objective, not a subjective” analysis, and '[e]xcessively speculative' claims must be rejected.' " (Wikipedia article on Oracle Corporation v. SAP AG, accessed 04-24-2013).

View Map + Bookmark Entry

The Wikileaks U. S. Diplomatic Cables Leak November 28 – December 8, 2010

"The United States diplomatic cables leak began on 28 November 2010 when the website WikiLeaks and five major newspapers published confidential documents of detailed correspondences between the U.S. State Department and its diplomatic missions around the world. The publication of the U.S. embassy cables is the third in a series of U.S. classified document 'mega-leaks' distributed by WikiLeaks in 2010, following the Afghan War documents leak in July, and the Iraq War documents leak in October. The contents of the cables describe international affairs from 274 embassies dated from 1966–2010, containing diplomatic analysis of world leaders, an assessment of host countries, and a discussion about international and domestic issues.

"The first 291 of the 251,287 documents were published on 28 November, with simultaneous press coverage from El País (Spain), Le Monde (France), Der Spiegel (Germany), The Guardian (United Kingdom), and The New York Times (United States). Over 130,000 of the documents are unclassified; none are classified as 'top secret' on the classification scale; some 100,000 are labeled 'confidential'; and about 15,000 documents have the higher classification 'secret'. As of December 8, 2010 1060 individual cables had been released. WikiLeaks plans to release the entirety of the cables in phases over several months" (Wikipedia article on United States diplomatic cables leak, accessed 12-08-2010).

View Map + Bookmark Entry

Google Earth 6: Enhanced 3D, 3D Trees, Enhanced Historical Imagery November 30, 2010

Google Earth 6, introduced on November 30, 2010, enabled the user to "fly from outer space down to the streets with the new Street View and easily navigate. . . . Switch to ground-level view to see the same location in 3D."  

The program also introduced 3D trees in locations all over the world, and a more user-friendly interface for the historical imagery enabling comparison of recent and historical satellite imagery when available.

View Map + Bookmark Entry

Introduction of the Google Ngram Viewer December 2010

In December 2010 Google introduced the Google Ngram Viewer, a phrase-usage graphing tool developed by Jon Orwant and Will Brockman of Google that charts the yearly count of selected n-grams (contiguous sequences of n items from a given sequence of text or speech) in the Google Ngram word-search database. The words or phrases (or ngrams) are matched by case-sensitive spelling, comparing exact uppercase letters, and plotted on the graph if found in 40 or more books during each year of the requested year-range.

"The word-search database was created by Google Labs, based originally on 5.2 million books, published between 1500 and 2008, containing 500 billion words in American English, British English, French, German, Spanish, Russian, or Chinese. Italian words are counted by their use in other languages. A user of the Ngram tool has the option to select among the source languages for the word-search operations" (Wikipedia article on Google Ngram viewer, accessed 12-08-2013).

View Map + Bookmark Entry

The Google Earth Engine December 2, 2010

On December 2, 2010 Google introduced the Google Earth Engine, a cloud computing platform for processing satellite imagery and other Earth observation data. The engine provides access to a large warehouse of satellite imagery and the computational power needed to analyze those images. Initial applications of the platform included mapping the forests of Mexico, identifying water in the Congo basin, and detecting deforestation in the Amazon.


"Google Earth Engine brings together the world's satellite imagery—trillions of scientific measurements dating back more than 25 years—and makes it available online with tools for scientists, independent researchers, and nations to mine this massive warehouse of data to detect changes, map trends and quantify differences to the earth's surface" (http://earthengine.googlelabs.com/#intro).

"On February 11, [2013] NASA launched Landsat 8, the latest in a series of Earth observation satellites which started collecting information about the Earth in 1972. We're excited to announce that on May 30th, the USGS began releasing operational data from the Landsat 8 satellite, which are now available on Earth Engine. Explore the gallery below to see how we've used Landsat data to visualize thirty years of change across the entire planet. Congratulations to NASA and USGS for a successful launch!" (http://earthengine.google.org/#intro, accessed 10-20-2013). 

View Map + Bookmark Entry

Of the Seventy Online Databases that "Define Our Planet", Many are Known Only to Specialists December 3, 2010

As part of a description of a potentially costly (1 billion euro) European scheme to simulate the entire planet, MIT's Technology Review published an article listing and describing "The 70 Online Databases that Define Our Planet." These databases cover climate, health, finance, economics, traffic, and other topics. Only a few, such as Wikipedia, Google Earth, the Wayback Machine at the Internet Archive, and Wolfram Alpha, are widely known to generalists, such as the author of From Cave Paintings to the Internet.

View Map + Bookmark Entry

The Google eBookstore Opens December 6, 2010

On December 6, 2010 Google opened its online digital bookstore, Google eBooks.

"Google executives described the e-bookstore as an 'open ecosystem' that will offer more than three million books, including hundreds of thousands for sale and millions free.  

"More than 4,000 publishers, including large trade book companies like Random House, Simon & Schuster and Macmillan, have made books available for sale through Google, many at prices that are identical to those of other e-bookstores.  

 " 'We really think it’s important that the book business have this open diversity of retail points, just like it does in print,' Tom Turvey, the director of strategic partnerships at Google, said in an interview. 'We want to make sure we maintain that and support that.' Customers can set up an account for buying books, store them in a central online, password-protected library and read them on personal computers, tablets, smartphones and e-readers. A Web connection will not be necessary to read a book, however; users can use a dedicated app that can be downloaded to an iPad, iPhone or Android phone.  

"A typical user could begin reading an e-book on an iPad at home, continue reading the same book on an Android phone on the subway and then pick it up again on a Web browser at the office, with the book opening each time to the place where the user left off.  

"The Google eBookstore could be a significant benefit to independent bookstores like Powell’s Books in Portland, Ore., that have signed on to sell Google e-books on their Web sites through Google — the first significant entry for independents into the e-book business.  

" 'This levels the playing field,' said Oren Teicher, the chief executive of the American Booksellers Association. 'If you want to buy e-books, you don’t just have to buy them from the big national outlets' " (http://www.nytimes.com/2010/12/07/business/media/07ebookstore.html?hp).

View Map + Bookmark Entry

The Website of MasterCard is Hacked by Wikileaks Supporters December 8, 2010

"The website of MasterCard has been hacked and partially paralysed in apparent revenge for the international credit card's decision to cease taking donations to WikiLeaks. A group of online activists calling themselves Anonymous appear to have orchestrated a DDOS ('distributed denial of service') attack on the site, bringing its service at www.mastercard.com to a halt for many users. " 'Operation: Payback' is the latest salvo in the increasingly febrile technological war over WikiLeaks. MasterCard announced on Monday that it would no longer process donations to the whistleblowing site, claiming it was engaged in illegal activity.  

"The group, which has been linked to the influential internet messageboard 4Chan, has been targeting commercial sites which have cut their ties with WikiLeaks. The Swiss bank PostFinance has already been targeted by Anonymous after it froze payments to WikiLeaks, and the group has vowed to target Paypal, which has also ceased processing payments to the site. Other possible targets are EveryDNS.net, which suspended dealings on 3 December, Amazon, which removed WikiLeaks content from its EC2 cloud on 1 December, and Visa, which suspended its own dealings yesterday.  

"The action was confirmed on Twitter at 9.39am by user @Anon_Operation, who later tweeted: 'WE ARE GLAD TO TELL YOU THAT http://www.mastercard.com/ is DOWN AND IT'S CONFIRMED! #ddos #wikileaks Operation:Payback(is a bitch!) #PAYBACK'

"No one from MasterCard could be reached for immediate comment, but a spokesman, Chris Monteiro, has said the site suspended dealings with WikiLeaks because 'MasterCard rules prohibit customers from directly or indirectly engaging in or facilitating any action that is illegal'.  

"DDOS attacks, which often involve flooding the target with requests so that it cannot cope with legitimate communication, are illegal" (http://www.guardian.co.uk/media/2010/dec/08/mastercard-hackers-wikileaks-revenge, accessed 12-08-2010).

View Map + Bookmark Entry

Bestsellers on eBook Readers: Romance Novels December 9, 2010

According to an article published in The New York Times, at the end of 2010 one of the hottest selling fields in ebooks was romance novels, which were also top-sellers in paperback. It turns out that many buyers preferred ordering romance novels online in privacy to buying them in public locations such as drug stores where they might run into people they knew. Many also preferred to read these on an ebook reader, especially in public places like buses or trains, so they didn't have to expose the racy nature of the novels, typically advertised in the graphics on the covers of paperback editions.

View Map + Bookmark Entry

U.S. E-Book Sales are Predicted to Reach $1,000,000,000 in 2010 December 11, 2010

American e-book sales will reach almost $1 billion by the end of 2010, according to new research.

"A report published by technology and market research company Forrester presented a five year forecast for e-books in the US. The firm surveyed 4,000 people for the report and found 2010 will end with $966m worth of e-books sold to consumers. By 2015 the industry will have nearly tripled to almost $3bn, a point at which Forrester said the industry will be 'forever altered'.

"The study has also found e-book buying falls very low on the list of how people acquire books, with just 7% of adults who read books and are active online reading e-books. However, Forrester said this 7% 'read the most books and spend the most money on books'.

"A blog by Forrester researcher James McQuivey said: 'We have plenty of room to grow beyond the 7% that read e-books today and, once they get the hang of it, e-book readers quickly shift a majority of their book reading to a digital form. More e-book readers reading a greater percentage of their books in digital form means our nearly $3 billion figure in 2015 will be easy to hit, even if nothing else changes in the industry.'  

"McQuivey too urged publishers to take digital seriously in order to prepare for a day when 'physical book publishing is an adjunct activity that supports the digital publishing business' " (http://www.thebookseller.com/news/133944-us-e-book-sales-to-reach-1-billion.html, accessed 11-12-2010).

View Map + Bookmark Entry

The Digital Public Library of America December 13, 2010

On December 13, 2010 John Palfrey and the Berkman Center for Internet & Society at Harvard announced that they would begin coordinating plans for a Digital Public Library of America. This initiative was stimulated by an article published by Robert Darnton in the New York Review of Books on October 4, 2010 entitled "A Library Without Walls."

Related to the Berkman Center's announcement, an article appeared in Libraryjournal.com by Michael Kelly on December 15, 2010: "New Plan Seeks a 'Big Tent' for a National Digital Library." 

View Map + Bookmark Entry

The Cultural Observatory at Harvard Introduces Culturomics December 16, 2010

On December 16, 2010 a highly interdisciplinary group of scientists, primarily from Harvard University: Jean-Baptiste Michel, Yuan Kui Shen, Aviva P. Aiden, Adrian Veres, Matthew K. Gray, the Google Books Team, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak and Erez Lieberman Aiden published "Quantitative Analysis of Culture Using Millions of Digitized Books," Science 331, no. 6014 (published online December 16, 2010; in print January 14, 2011): 176-182. DOI: 10.1126/science.1199644.

The authors were associated with the following organizations: Program for Evolutionary Dynamics; Institute for Quantitative Social Sciences; Department of Psychology; Department of Systems Biology; Computer Science and Artificial Intelligence Laboratory; Harvard Medical School; Harvard College; Google, Inc.; Houghton Mifflin Harcourt; Encyclopaedia Britannica, Inc.; Department of Organismic and Evolutionary Biology; Department of Mathematics; Broad Institute of Harvard and MIT, Cambridge; School of Engineering and Applied Sciences; Harvard Society of Fellows; Laboratory-at-Large.

This paper from the Cultural Observatory at Harvard and collaborators represented the first major publication resulting from The Google Labs N-gram (Ngram) Viewer,

"the first tool of its kind, capable of precisely and rapidly quantifying cultural trends based on massive quantities of data. It is a gateway to culturomics! The browser is designed to enable you to examine the frequency of words (banana) or phrases ('United States of America') in books over time. You'll be searching through over 5.2 million books: ~4% of all books ever published" (http://www.culturomics.org/Resources/A-users-guide-to-culturomics, accessed 12-19-2010).
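The kind of frequency-over-time query the Ngram Viewer answers can be sketched in a few lines. This is a toy illustration with a hypothetical three-year mini-corpus, not the Viewer's actual implementation, which runs against roughly 5.2 million scanned books:

```python
from collections import Counter

# Hypothetical mini-corpus: a few words of text per year. The real
# dataset aggregates word counts from millions of digitized books.
corpus = {
    1800: "the steam engine and the horse".split(),
    1900: "the telephone and the telegraph and the engine".split(),
    2000: "the internet and the telephone network".split(),
}

def frequency(word: str) -> dict:
    """Relative frequency of `word` per year: count / total words that year."""
    return {year: Counter(words)[word] / len(words)
            for year, words in corpus.items()}

print(frequency("telephone"))   # 0 in 1800, then a share of each later year
```

Plotting such per-year frequencies for a word or phrase is essentially what the Viewer's trend graphs show.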

"We constructed a corpus of digitized texts containing about 4% of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of "culturomics", focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. "Culturomics" extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities" (http://www.sciencemag.org/content/early/2010/12/15/science.1199644, accessed 12-19-2010).  

"The Cultural Observatory at Harvard is working to enable the quantitative study of human culture across societies and across centuries. We do this in three ways: Creating massive datasets relevant to human culture Using these datasets to power wholly new types of analysis Developing tools that enable researchers and the general public to query the data" (http://www.culturomics.org/cultural-observatory-at-harvard, accessed 12-19-2010). 

View Map + Bookmark Entry

3D Maps for Android Mobile Devices December 16, 2010

On December 16, 2010 Google announced Google Maps 5.0 for Android, with two significant new features: 3D interaction and offline reliability.

"In order to create these features, we rebuilt Maps using vector graphics to dynamically draw the map as you use it. Building a vector graphics engine capable of achieving the visual quality and performance level you expect from Google Maps was a major technical challenge and enables all sorts of future possibilities. So we wanted to give you a closer look under the hood at the technology driving the next generation of mobile maps.

". . . . Previously, Google Maps downloaded the map as sets of individual 256x256 pixel 'image tiles.' Each pre-rendered image tile was downloaded with its own section of map imagery, roads, labels and other features baked right in. Google Maps would download each tile as you needed it and then stitch sets together to form the map you see. It takes more than 360 billion tiles to cover the whole world at 20 zoom levels! Now, we use vector graphics to dynamically draw the map. Maps will download 'vector tiles' that describe the underlying geometry of the map. You can think of them as the blueprints needed to draw a map, instead of static map images. Because you only need to download the blueprints, the amount of data needed to draw maps from vector tiles is drastically less than when downloading pre-rendered image tiles. Google Maps isn’t the first mobile app to use vector graphics—in fact, Google Earth and our Navigation (Beta) feature do already. But a combination of modern device hardware and innovative engineering allow us to stream vector tiles efficiently and render them smoothly, while maintaining the speed and readability we require in Google Maps" (The Official Google Blog, 12-17-2010).
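The "more than 360 billion tiles" figure in the quoted passage can be checked with a short script. In the standard web-map scheme, a map at zoom level z is a 2^z × 2^z grid of 256x256-pixel tiles, so 20 zoom levels means z = 0 through 19 (a toy calculation, not Google code):

```python
# Total raster tiles needed to cover the world across zoom levels 0..max_zoom-1,
# where zoom level z is a 2^z x 2^z grid of tiles.
def tiles_up_to(max_zoom: int) -> int:
    return sum((2 ** z) ** 2 for z in range(max_zoom))

total = tiles_up_to(20)     # 20 zoom levels: z = 0..19
print(f"{total:,}")         # 366,503,875,925 -- "more than 360 billion"
```

The sum is a geometric series, (4^20 - 1)/3 ≈ 366.5 billion, which matches the blog post's round figure.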

View Map + Bookmark Entry

An Interactive Pop-Up Children's Book App for the iPhone & iPad December 16, 2010

On December 16, 2010 GameCollage.com, based in Seattle, issued Three Little Pigs and the Secrets of a Popup Book for iPhone, iPod touch, and the iPad. The app, which cost $3.99, was an interactive children's book which allowed the reader to push, pull, spin, slide and explore interactive pages, and to see, in a virtual way, how the mechanism of the book would work if it were an actual paper pop-up book. The art was adapted from original illustrations by L. Leslie Brooke. The app included a "whimsical sound track with colorful sound effects." When apples fell out of the tree, they fell in the direction the iPad was tipped. 

Unlike an actual popup book printed on paper, which might feature high quality paper, paper engineering, printing, and binding,  the app featured "silky smooth animation running at constant 60 frames per second," and a "highly polished user interface."

In December 2013 a video ad for the app was available on YouTube.

View Map + Bookmark Entry

eBooks Represent 9-10% of Trade-Book Sales December 23, 2010

According to an article in The New York Times, in December 2010 ebooks represented 9 to 10 percent of trade-book sales.

View Map + Bookmark Entry

Founder of Wikileaks to Publish his Autobiography December 27, 2010

To pay for ongoing defence costs, Australian journalist, publisher, and Internet activist Julian Assange, the founder of WikiLeaks, stated in December 2010 that he would release an autobiography in 2011, having signed publishing deals that he told a British newspaper might be worth $1.7 million. Apart from the censorship and political elements of the case, the book contract underlined the distinction between commercial book publishing and the many websites that generate little or no revenue, such as WikiLeaks, which is intentionally non-profit.

"Mr. Assange told The Sunday Times of London that he had signed an $800,000 deal with Alfred A. Knopf, an imprint of Random House, in the United States, and a $500,000 deal with Canongate books in Britain. With further rights and serialization, he told the newspaper, he expected his earnings to rise to $1.7 million.  

"Paul Bogaards, a spokesman for Random House, said Monday that the book would be 'a complete account of his life through the present day, including the founding of WikiLeaks and the work he has done there.' The deal, Mr. Bogaards said, was initiated by one of Mr. Assange’s lawyers in mid-December and was signed in a matter of days. He would not discuss the financial terms. Canongate has not yet made a public comment but has spoken of its own deal in messages on Twitter.

“ 'I don’t want to write this book, but I have to,' Mr. Assange told the newspaper, explaining that his legal costs in fighting extradition to Sweden, where he is wanted for questioning about allegations of sexual misconduct, have reached more than $300,000. 'I need to defend myself and to keep WikiLeaks afloat,' he said.  

"Mr. Assange is under what he has called 'high-tech house arrest' in an English mansion while he awaits hearings, beginning Jan. 11, regarding those allegations. Two women in Stockholm have accused him of rape, unlawful coercion and two counts of sexual molestation over a four-day period last August. He has repeatedly denied any wrongdoing in the matter, and has called the case 'a smear campaign' led by those who seek to stop him from leaking classified government and corporate documents" (http://www.nytimes.com/2010/12/28/world/europe/28wiki.html?_r=1&hpw, accessed 12-28-2010).

View Map + Bookmark Entry

Facebook is the Most Searched for and Most Visited Website in America December 29, 2010

"Facebook was not only the most searched item of the year, but it passed Google as America’s most-visited website in 2010, according to a new report from Experian Hitwise.  

"For the second year in a row, 'facebook' was the top search term among U.S. Internet users. The search term accounted for 2.11% of all searches, according to Hitwise. Even more impressive is the fact that three other variations of Facebook made it into the top 10: “facebook login” at #2, 'facebook.com' at #6 and “www.facebook.com” at #9. Combined, they accounted for 3.48% of all searches, a 207% increase from Facebook’s position last year.  

"Rounding out the list of top search terms were YouTube, Craigslist, MySpace, eBay, Yahoo and Mapquest. Other companies that made big moves in terms of searches include Hulu, Netflix, Verizon and ESPN. The search term “games” also made its first appearance in the list of Hitwise’s top 50 search terms.  

"More interesting though is Facebook’s ascension to number one on Hitwise’s list of most-visited websites. The social network accounted for 8.93% of all U.S. visits in 2010 (January-November), beating Google (7.19%), Yahoo Mail (3.52%), Yahoo (3.30%) and YouTube (2.65%). However, Facebook didn’t beat the traffic garnered by all of Google’s properties combined (9.85%).  

"It’s only a matter of time until Facebook topples the entire Google empire, though. We’ve seen the trend develop for months: Facebook is getting bigger than Google. According to comScore, Facebook’s U.S. traffic grew by 55% in the last year and has shown no sign of slowing down" (http://mashable.com/2010/12/29/2010-the-year-facebook-dethroned-google-as-king-of-the-web-stats/, accessed 12-31-2010).

View Map + Bookmark Entry

Scanning Books in Libraries Instead of Making Photocopies 2011

Ristech, whose motto was "Automation of Digitization," introduced the Book2net Spirit, which it described as:

"the very first entry level high resolution book scanner. The Spirit is designed to replace photocopies in Public, Government and Corporate Libraries. By eliminating the need for paper, toner and maintenance – Libraries can reduce cost. The Spirit can easily be attached to a cost recovery system or coin-op to generate revenue.

"Key Features:

• Public Use Walk-up BookScanner

• High Resolution Images

• 1 second image capture

• Scan to USB or Email

• Embedded Touch Screen PC included"

View Map + Bookmark Entry

Post-Review Process Rather than Pre-Review Process in Publishing? 2011

"I want to suggest that the time has come for us to consider whether, really, we might all be better served by separating the question of credentialing from the publishing process, by allowing everything through the gate, and by designing a post-publication peer review process that focuses on how a scholarly text should be received rather than whether it should be out there in the first place" (Kathleen Fitzpatrick, Planned Obsolescence: Publishing, Technology, and the Future of the Academy [2011]). Fitzpatrick is Professor of Media Studies, Pomona College, Claremont, California.

View Map + Bookmark Entry

Google's Track of its Own Development 2011

In 2011 Google published an interactive timeline of developments in its corporate history, including the introduction of new products.  The Google Timeline, or "interactive timeline of Google history," spanned from 1995 to January-February 2009. (Accessed 12-12-2012).

View Map + Bookmark Entry

"Visual Complexity: Mapping Patterns of Information" 2011

In 2011 Manuel Lima, who characterized himself on his website as an "Interaction Designer, Information Architect, and Design Researcher," published Visual Complexity: Mapping Patterns of Information through Princeton Architectural Press in New York. This spectacular, modernistically designed, full color book may best be characterized as two books in one. Its first 80 pages are a profound intellectual and visual interpretation of landmarks in information visualization from the ancient world through the early twentieth century. The remainder of the book illustrates, analyzes and classifies types of state of the art visualizations of complex information sets. 

View Map + Bookmark Entry

An App that Promotes the Value of Impermanence 2011 – 2013

Photos and messages sent through an app called Snapchat, developed in Venice Beach, California, vanish in seconds. In a world where users know that any image or message posted in social media, or sent through email, may be preserved forever, Snapchat's feature of automatically deleting information rather than preserving it found a growing niche. The feature was popular enough for Facebook to develop a competing app called Facebook Poke.

"Although Snapchat says that it cannot see or store copies of content, the service still allows nimble-fingered users to capture screenshots of photos. Mr. Murphy calls that mechanism a 'feature, not a vulnerability' of the service. Each time a screenshot of a Snapchat is taken, the sender is alerted that the image has been captured. There have also been reports of loopholes and hacks that let people save videos and screenshots. 'Nothing ever goes away on the Internet,' Mr. Spiegel acknowledged.  

"Snapchat has its origins at Stanford, where Mr. Spiegel and Mr. Murphy first met as fraternity brothers. Mr. Spiegel presented a prototype of Snapchat in spring 2011 to one of his classes, but it was greeted as impractical and silly by his classmates.  

"Undeterred, Mr. Spiegel and Mr. Murphy shared an updated version for the iPhone with about 20 friends in September 2011. A few weeks in, they started seeing an influx of new users, paired with unusual spikes in activity, peaking between 8 a.m. and 3 p.m.  

"It turned out the activity was centered around a high school in Orange County. Mr. Spiegel’s mother had told his cousin, who was a student at the school, about the app, which then spread throughout the school.

"Other high school students in Southern California picked it up, with the number of daily active users climbing from 3,000 to 30,000 in a month in early 2012. Mr. Spiegel took a leave from Stanford last June and Mr. Murphy quit his job and the pair raised a small round of financing and moved to Los Angeles to work on the application full time.  

"Since the overwhelming majority of Snapchat’s users are age 13 to 25, the application has provoked concerns from parents. The company acknowledges that the service can be misused, but does not dwell on it. 'We are not advertising ourselves as a secure platform,' Mr. Spiegel said. 'It’s a communication platform. It’s not our job to police the world or Snapchat of jerks' " (http://www.nytimes.com/2013/02/09/technology/snapchat-a-growing-app-lets-you-see-it-then-you-dont.html, accessed 02-09-2013).

View Map + Bookmark Entry

Apple Color Emoji: The First Color Font Included in a Computer Operating System 2011

In 2011 Apple included the Apple Color Emoji in Mac OS X Lion. This was the first time a computer operating system included a color font.

"Of even more significance is the fact that the glyphs included in the font are Unicode encoded. In an effort initiated by Google and with significant help from Apple and Microsoft, 722 Emoji symbols were included in the recently published Unicode 6.0 standard, putting Emoji on par with the Latin alphabet and other writing systems encoded in Unicode. This means messages and documents containing Emoji are fully searchable and indexable, and Unicode Emoji fonts are included with Windows Phone 7.5 and the Windows 8 Developer Preview. The encoding effort was not without controversy, but effectively legitimizes nontraditional forms of written expression, and opens the door for the encoding of other symbols, including those found in popular symbol encoded fonts like Wingdings and Webdings" (http://typographica.org/typeface-reviews/apple-color-emoji, accessed 08-22-2013).
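Because the emoji are Unicode-encoded rather than images, they behave like any other characters: they have code points and names, and text containing them is searchable and indexable. A quick illustrative check (using U+1F4A9, one of the symbols added in Unicode 6.0):

```python
import unicodedata

# An emoji added in Unicode 6.0: it has a code point and an official name,
# and it participates in ordinary string operations like substring search.
poo = "\U0001F4A9"
print(unicodedata.name(poo))                      # its official Unicode name
print(hex(ord(poo)))                              # 0x1f4a9
print(poo in "a message with \U0001F4A9 inside")  # True: plain text search works
```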

View Map + Bookmark Entry

Ethical principles for Designers, Builders and Users of Robots 2011

In 2011 the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC), both of which are based in Swindon, England, published five ethical principles for designers, builders and users of robots in the real world, along with seven "high-level messages" supplementing the five ethical principles. The five ethical principles were:

  1. "Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot."

The seven "high-level messages" supplementing the five principles were:

  1. "We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
  2. Bad practice hurts us all.
  3. Addressing obvious public concerns will help us all make progress.
  4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
  5. To understand the context and consequences of our research, we should work with experts from other disciplines, including: social sciences, law, philosophy and the arts.
  6. We should consider the ethics of transparency: are there limits to what should be openly available?
  7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists" (Wikipedia article on Laws of Robotics, accessed 10-20-2013).
View Map + Bookmark Entry

Anonymous Scottish Book Sculptures 2011 – 2013

From 2011 to 2013 a group of remarkable book sculptures made out of old books by an anonymous female paper sculptor were strategically placed in cultural institutions in Scotland, mostly in Edinburgh, where they would be found. The sculptures were primarily on topics concerning Scottish literature and poetry. 

"The initial group of book sculptures was a group of ten elaborate sculptures that were left around various cultural locations in Edinburgh, Scotland, between March and November 2011, as gifts to the cultural institutions and people of the city. The identity of the artist is unknown, although notes with some of the sculptures referred to the artist as 'she'. The sculptures were made from old books and were accompanied by gift labels which praised literacy and the love of words, and argued against library and other arts funding cuts. An eleventh sculpture was presented to author Ian Rankin, whose works featured prominently in many of the other sculptures. The ten Edinburgh sculptures were toured through Scotland in an exhibition in late 2012.

"The sculptor was then commissioned to produce five more book sculptures to be hidden in secret locations around Scotland as part of Book Week Scotland, which commenced in November 2012. Despite the commission, the artist has maintained her anonymity.

"The sculptor also made another gift sculpture in December 2012, which she anonymously presented to the Scottish Poetry Library, already the previous recipient of two of her earlier works.

"In May 2013 a new sculpture, featuring three eggs in a paper nest in a cardboard birdbox, was left in the Scottish Poetry library; this was followed by three baby birds in a nest, left at Leith Library" (Wikipedia article on Scottish Book Sculptures, accessed 10-27-2013).

When I wrote this entry in October 2013 some of the best collections of images of the anonymous book sculptures were available at:




View Map + Bookmark Entry

The Universal Short Title Catalogue is Founded 2011

In 2011 the Universal Short Title Catalogue (USTC) was initiated at the University of St. Andrews in Scotland. The project created a database whose goal was eventually to document all books published on the continent of Europe and in Britain from the invention of printing to the end of the sixteenth century. The database aimed to provide full bibliographic information for all books, locations of surviving copies and, where available, digital full-text editions accessible through the database. In 2013 the USTC described approximately 350,000 editions and around 1.5 million surviving copies, located in over 5,000 libraries worldwide.

"The invention of printing in the fifteenth century revolutionised information culture, vastly multiplying the number of books in circulation. It had a transforming impact on the intellectual culture of the Renaissance. The invention attracted enormous attention, and the art of printing spread quickly through the European continent. In the next 150 years publishers brought out a huge number of texts in a large range of disciplines. These included thousands of Bibles as well as milestones of scientific publication. Printing also stimulated the production of new types of books, such as news pamphlets, and the influential propaganda works of the Protestant Reformation. Overall this amounted to a huge volume of books: at least 350,000 separate editions, a total of around two hundred million printed items.

"The history of print has always played a central role in the development of modern European society. Despite this, the knowledge base on which such interpretations are based is surprisingly flimsy. Astonishingly, to this point, it has never been possible to create a complete survey of all printed books in the first age of print.

"The corpus of materials is very large, and widely dispersed. Many books from the fifteenth and sixteenth centuries survive in only one copy. These unique items are distributed between around 6000 separate archives and libraries around the world.

"Until this point data on early printed books has been available principally through National Bibliographical projects. These have achieved some notable results. The German VD 16 has gathered information on some 90,000 editions published in German lands. The Italian Edit 16 lists 60,000 editions published in Italy. The English Short Title Catalogue is a comprehensive survey of all books published in English.

"Nevertheless, this tradition of national bibliography has two main drawbacks. Firstly, National Bibliographies are seldom complete. For practical and funding reasons the German VD 16 and Italian Edit 16 both confine their searches to books presently located in German and Italian libraries respectively. Yet sixteenth-century books were dispersed very widely; many books survive only in libraries far away from their place of production. Secondly, the coverage of Europe by these projects is far from comprehensive.

  • There has as yet been no complete survey of France, the third major language domain of early print.
  • There is no survey of printing in Spain and Portugal.
  • The surveys for Belgium and the Netherlands are seriously deficient.
  • Printing in Eastern Europe and Scandinavia has been surveyed only in small disparate projects dealing with a single language domain (Bohemia, Denmark, Hungary). There is no survey for Poland.

"The USTC will make good these deficiencies in a project with two strands by completing the coverage of European print by gathering comprehensive data on all parts of Europe lacking such a survey. Finally, the USTC will co-ordinate the merging of these resources with other cognate projects into a coherent, unified searchable database" (Wikipedia article on Universal Short Title Catalogue, accessed 11-26-2013).

View Map + Bookmark Entry

Can an Artificial Intelligence Get into the University of Tokyo? 2011

In 2011 the National Institute of Informatics in Japan initiated the Todai Robot Project, with the goals of achieving a high score on the National Center Test for University Admissions by 2016 and passing the University of Tokyo entrance exam in 2021.

"INTERVIEW WITH Yusuke Miyao, June 2013

Associate Professor, Digital Content and Media Sciences Research Division, NII; Associate Professor, Department of Informatics; "Todai Robot Project" Sub-Project Director 

Can a Robot Get Into the University of Tokyo? 
The Challenges Faced by the Todai Robot Project

Tainaka Could you tell us the objectives of the project?
Miyao We are researching the process of thinking by developing a computer program that will be able to pass the University of Tokyo entrance exam. The program will need to integrate multiple artificial intelligence technologies, such as language understanding, in order to develop all of the processes, from reading the question to determining the correct answer. While the process of thinking is second nature to people, many of the processes involved in mental computation are still mysteries, so the project will be taking on challenges that previous artificial intelligence research has yet to touch.
Tainaka You're not going to be making a physical robot?
Miyao No. What we'll be making is a robot brain. It won't be an actual robot that walks through the gate, goes to the testing site, picks up a pencil, and answers the questions.
Tainaka Why was passing the university entrance exam selected as the project's goal?
Miyao The key point is that what's difficult for people is different than what's difficult for computers. Computers excel at calculation, and can beat professional chess and shogi players at their games. IBM's "Watson" question-answering system*1 became a quiz show world champion. For a person, beating a professional shogi player is far harder than passing the University of Tokyo entrance exam, but for a computer, shogi is easier. What makes the University of Tokyo entrance exam harder is that the rules are less clearly defined than they are for shogi or a quiz show. From the perspective of using knowledge and data to answer questions, the university entrance exam requires a more human-like approach to information processing. However, it does not rely as much on common sense as an elementary school exam or everyday life, so it's a reasonable target for the next step in artificial intelligence research.
Tainaka Elementary school exam questions are more difficult?
Miyao For example, consider the sentence "Assuming there is a factory that can build 3 cars per day, how many days would it take to build 12 cars?" A computer would not be able to create a formula that expresses this in the same way a person could, near-instantaneously. It wouldn't understand the concepts of "car" or "factory", so it wouldn't be able to understand the relationship between them. Compared to that, calculating integrals is far easier.
Tainaka The National Center Test for University Admissions is multiple choice, and the second-stage exam is a short answer exam, right?
Miyao Of course, the center test is easier, and it has clear right and wrong answers, making it easier to grade. For the second-stage exam, examinees must give written answers, so during the latter half of the project, we will be shifting our focus to creating answers which are clear and comprehensible to human readers.
Tainaka Does the difficulty vary by test subject?
Miyao What varies more than the difficulty itself are the issues that have to be tackled by artificial intelligence research. The social studies questions, which test knowledge, rely on memory, so one might assume they would be easy for computers, but it's actually difficult for a computer to determine if the text of a problem corresponds to knowledge the computer possesses. What makes that identification possible is "Textual Entailment Recognition"*2, an area in which we are making progress, but still face many challenges. Ethics questions, on the other hand, frequently cover common sense, and require the reader to understand the Japanese language, so they are especially difficult for computers, which lack this common sense. Personally, I had a hard time with questions requiring memorization, so I picked ethics. (laughs)
Tainaka So ethics and language questions are difficult because they involve common sense.
Miyao Similar challenges are encountered with English, other than the common sense issue. For example, English questions include fill-in-the-blank questions, but it's difficult to pick natural conversational answers without actual life experience. Reading comprehension questions test logical and rational thought, but it's not really clear what this "logical and rational thought" consists of. The question, then, is how to teach "logical and rational thought" to computers. Also, for any subject, questions sometimes include photos, graphs, and comic strips. Humans understand them unconsciously, but it's extremely difficult to have computers understand them.
Tainaka Aren't mathematical formula questions easy to answer?
Miyao If they were presented as pure formulas, computers would excel at them, but the reality is not so simple. The questions themselves are written in natural language, making it difficult to map to the non-linguistic world of formulas. The same difficulty can be found with numerical fields, like physics or chemistry, or in fields which are difficult to convert into computer-interpretable symbols, such as the emotional and situational experience of reading a novel. That's what makes elementary school exams difficult.
Tainaka There are a mountain of problems.
Miyao There are many problems that nobody has yet taken on. That's what makes it challenging, and it's very exciting working with people from different fields. Looking at the practical results of this project, our discoveries and developments will be adapted for use in general purpose systems, such as meaning-based searching and conversation systems, real-world robot interfaces, and the like. The Todai Robot Project covers a diverse range of research fields, and NII plans to build an infrastructure, organizing data and creating platforms, and bring in researchers from both inside and outside Japan to achieve our objectives. In the future we will build an even more open platform, creating opportunities for members of the general public to participate as well, and I hope anyone motivated will take part" (http://21robot.org/introduce/NII-Interview/, accessed 12-30-2013).
View Map + Bookmark Entry

The Smartphone Becomes the CPU of the Laptop January 2011

Motorola Mobility, headquartered in Libertyville, Illinois, introduced the Atrix 4G smartphone powered by Nvidia's Tegra 2 dual-core  processor and Android 2.2, with a 4-inch display, 1 GB of RAM, 16 GB of on-board storage, front- and rear-facing cameras, a 1930 mAh battery and a fingerprint reader. Motorola announced that it would also sell laptop and desktop docks that run a full version of Firefox, powered entirely by the phone.

What was significant about this smartphone was that the phone itself performed the information processing for the laptop or even the desktop interface.

View Map + Bookmark Entry

The First Independently Published Magazine Exclusively for the iPad January 2011

London-based Remi Paringaux and his company, Meri Media, published the first issue of Post, the first independent magazine published exclusively for the iPad. It was offered for sale as an iPad app for $2.99.  

The New York Times characterized the publication as "A Magazine that Won't Smudge."

Postmatter.com described the project in this way:

"Post is a project born of love for magazines, and one dedicated to taking that love beyond paper and physical matter. A new frontier and paradigm in publishing, Post looks beyond the traditional rules of how and what magazines 'should be', in favour of speculating upon what magazines could be. It is about fashion, art, architecture, cinema, music, culture. It is about what's exciting now and tomorrow.

"Post is an only child, born of the iPad, with no printed sibling to imitate or be intimated by. Liberated from the imposing heritage of print culture, Post exists an entirely virtual realm, yet is intimately connected to material through the medium of touch. Inherently interactive Post presents a truly multimedia, mult-sensory journey from the first frame to the last, where the advertisements all built for Post by Post are immerse, tactile experiences.

"Post is not a thing. It is an idea. A non-surface whose pages dissolve and reform at your touch. It is material for the mind, the eyes, and sometimes the ears. An entire world existing only with a plane of smooth glass, tangibly alive, but cool to the touch. Let Post be your guide" (accessed 05-25-2011).

View Map + Bookmark Entry

Universal Music Group Donates a "Mile of Music" to the Library of Congress January 10, 2011

The Universal Music Group, headquartered in Santa Monica, California, which traces its origins to 1898, donated its archive of recorded music, consisting of circa 200,000 metal, glass and lacquer master discs, recorded from 1926 to 1948, to the Library of Congress.  The agreement called for the Library of Congress to own and preserve the music and to convert it to digital form for usability and long-term data preservation. Universal Music Group retained the right to commercialize the digital files.

"Under the agreement negotiated during discussions that began two years ago the Library of Congress has been granted ownership of the physical discs and plans to preserve and digitize them. But Universal, a subsidiary of the French media conglomerate Vivendi that was formerly known as the Music Corporation of America, or MCA, retains both the copyright to the music recorded on the discs and the right to commercialize that music after it has been digitized.  

“The thinking behind this is that we have a very complementary relationship,” said Vinnie Freda, executive vice president for digital logistics and business services at Universal Music Logistics. “I’ve been trying to figure out a way to economically preserve these masters in a digital format, and the library is interested in making historically important material available. So they will preserve the physical masters for us and make them available to academics and anyone who goes to the library, and Universal retains the right to commercially exploit the masters.”  

"The agreement will also permit the Web site of the Library of Congress to stream some of the recordings for listeners around the world once they are cataloged and digitized, a process that Mr. DeAnna said could take five years or more, depending on government appropriations. But both sides said it had not yet been determined which songs would be made available, a process that could be complicated by Universal’s plans to sell some of the digitized material through iTunes.  

"Universal’s bequest is the second time in recent months that a historic archive of popular music has been handed over to a nonprofit institution dedicated to preserving America’s recorded musical heritage. Last spring the National Jazz Museum in Harlem acquired nearly 1,000 discs, transcribed from radio broadcasts in the late 1930s and early 1940s by the recording engineer William Savory, featuring some of the biggest names in jazz" (http://www.nytimes.com/2011/01/10/arts/music/10masters.html?hp, accessed 01-10-2011).

View Map + Bookmark Entry

Voice-Activated Translation on Cell Phones January 12, 2011

Google introduced an improved Google Translate for Android Conversation Mode: 

"This is a new interface within Google Translate that’s optimized to allow you to communicate fluidly with a nearby person in another language. You may have seen an early demo a few months ago, and today you can try it yourself on your Android device.  

"Currently, you can only use Conversation Mode when translating between English and Spanish. In conversation mode, simply press the microphone for your language and start speaking. Google Translate will translate your speech and read the translation out loud. Your conversation partner can then respond in their language, and you’ll hear the translation spoken back to you. Because this technology is still in alpha, factors like regional accents, background noise or rapid speech may make it difficult to understand what you’re saying. Even with these caveats, we’re excited about the future promise of this technology to be able to help people connect across languages" (http://googleblog.blogspot.com/2011/01/new-look-for-google-translate-for.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed:+blogspot/MKuf+(Official+Google+Blog), accessed 01-14-2011.

View Map + Bookmark Entry

Probably the Largest Digital Image January 13, 2011

The Sloan Digital Sky Survey-III (SDSS-III), a major multi-filter imaging and spectroscopic redshift survey using a dedicated 2.5-m wide-angle optical telescope at Apache Point Observatory, Sunspot, New Mexico,  released the largest digital color image of the sky assembled from millions of 2.8 megapixel images, and consisting of more than a trillion pixels.  This may be the largest digital image produced to date.

View Map + Bookmark Entry

The Wikipedia Celebrates its Tenth Anniversary January 15, 2011

The Wikipedia celebrated its tenth anniversary with 448 events around the world.

View Map + Bookmark Entry

More than Ten Billion Apps are Downloaded from the Apple App Store January 22, 2011

On January 22, 2011 the Apple App Store completed its countdown to the ten billionth app downloaded.

View Map + Bookmark Entry

Publishing Non-Fiction Exclusively for Cell Phones, eBook Readers and Tablet Computers January 28, 2011

Brooklyn, New York-based The Atavist launched as a publisher of non-fiction longer than magazine articles but shorter than typical book length, intended for ebook readers, cell phones, and tablet computers. Its first publication was Piano Demon.

View Map + Bookmark Entry

The New York Times Begins its "Recommendations Service" January 31, 2011

The New York Times rolled out its interactive Recommendations service. When I first looked at this on February 2, 2011 the service reported that I had read 120 articles in the previous month, breaking them down into ten categories. Based on my previous reading history it recommended that I read twenty articles in that day's edition.

View Map + Bookmark Entry

Confession: A Roman Catholic iPhone App February 2011

Confession: A Roman Catholic App by Little i Apps, LLC, South Bend, Indiana:

"Designed to be used in the confessional, this app is the perfect aid for every penitent. With a personalized examination of conscience for each user, password protected profiles, and a step-by-step guide to the sacrament, this app invites Catholics to prayerfully prepare for and participate in the Rite of Penance. Individuals who have been away from the sacrament for some time will find Confession: A Roman Catholic App to be a useful and inviting tool.  

"The text of this app was developed in collaboration with Rev. Thomas G. Weinandy, OFM, Executive Director of the Secretariat for Doctrine and Pastoral Practices of the United States Conference of Catholic Bishops, and Rev. Dan Scheidt, pastor of Queen of Peace Catholic Church in Mishawaka, IN. The app received an imprimatur from Bishop Kevin C. Rhodes of the Diocese of Fort Wayne – South Bend. It is the first known imprimatur to be given for an iPhone/iPad app.

From one of our users, which we stand by:


"it does not and can not take the place of confessing before a validly ordained Roman Catholic priest in a Confessional, in person, either face to face, or behind the screen. Why? Because the Congregation on Divine Worship and the Sacraments has long ruled that Confessions by electronic media are invalid and that ABSOLUTION BY THE PRIEST must be given in person because the Seal of the Confessional must be Protected and for the Sacrament to be valid there has to be both the matter and the form which means THE PRIEST.


"Custom examination of Conscience based upon age, sex, and vocation (single, married, priest, or religious)

"- Multiple user support with password protected accounts

"- Ability to add sins not listed in standard examination of conscience - Confession walkthrough including time of last confession in days, weeks, months, and years

"- Choose from 7 different acts of contrition

"- Custom interface for iPad

"- Full retina display support" (http://itunes.apple.com/us/app/confession-a-roman-catholic/id416019676?mt=8#, accessed 02-11-2011)

Cost: $1.99

View Map + Bookmark Entry

42.3% of the U.S. Population Uses Facebook February 2011

"A new report from eMarketer finds that most adult Americans with Internet access use Facebook at least once a month, and a full 42.3% of the entire American population was using the site as of this month.  

"By contrast, Twitter‘s penetration rate was much lower, sitting at around 7% of the total population and 9% of the Internet-using population, according to the report" (http://mashable.com/2011/02/24/facebook-twitter-number/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+Mashable+%28Mashable%29

View Map + Bookmark Entry

The Google Art Project February 1, 2011

Bringing technology developed for Street View indoors, Google introduced The Art Project.  Simultaneously they introduced an Art Project channel on YouTube.

These projects allowed users to take virtual tours of major museums, view relevant background material about the art, store high-resolution images, and share images and commentaries with friends.

Each of the 17 museums involved also chose one artwork to be photographed using gigapixel photo-capturing technology, resulting in a digital image containing some seven billion pixels and providing detail not visible to the naked eye.

View Map + Bookmark Entry

4.3 Billion IP Addresses Have Been Allocated February 3, 2011

The Internet Corporation for Assigned Names and Numbers (icann.org) announced that the last remaining IPv4 (Internet Protocol version 4) Internet addresses from the central pool of about 4.3 billion were allocated.

The next Internet protocol, IPv6, opens up a pool of Internet addresses 2^96 times larger (roughly 7.9 × 10^28 times the total pool of IPv4 addresses)--a supply that should be sufficient for the foreseeable future. 
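As a quick arithmetic sketch (not part of the ICANN announcement), the relative sizes of the two address pools follow directly from their address widths: IPv4 uses 32-bit addresses and IPv6 uses 128-bit addresses.

```python
# Compare the IPv4 and IPv6 address spaces.
ipv4 = 2 ** 32           # ~4.3 billion addresses
ipv6 = 2 ** 128          # ~3.4 x 10^38 addresses
ratio = ipv6 // ipv4     # exactly 2^96

print(ipv4)              # 4294967296
print(ratio == 2 ** 96)  # True; ratio is roughly 7.9 x 10^28
```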

View Map + Bookmark Entry

Worldwide Technological Capacity to Store, Communicate, and Compute Information February 10, 2011

On February 10, 2011 social scientist Martin Hilbert of the University of Southern California (USC) and information scientist Priscilla López of the Open University of Catalonia published "The World's Technological Capacity to Store, Communicate, and Compute Information." The report appeared first in Science Express; on April 1, 2011 it was published in Science, 332, 60-64. This was "the first time-series study to quantify humankind's ability to handle information." Notably, the authors did not attempt to address the information processing done by human brains—possibly impossible to quantify at the present time, if ever. 

"We estimated the world’s technological capacity to store, communicate, and compute information, tracking 60 analog and digital technologies during the period from 1986 to 2007. In 2007, humankind was able to store 2.9 × 10 20 optimally compressed bytes, communicate almost 2 × 10 21 bytes, and carry out 6.4 × 10 18 instructions per second on general-purpose computers. General-purpose computing capacity grew at an annual rate of 58%. The world’s capacity for bidirectional telecommunication grew at 28% per year, closely followed by the increase in globally stored information (23%). Humankind’s capacity for unidirectional information diffusion through broadcasting channels has experienced comparatively modest annual growth (6%). Telecommunication has been dominated by digital technologies since 1990 (99.9% in digital format in 2007), and the majority of our technological memory has been in digital format since the early 2000s (94% digital in 2007)" (The authors' summary).

"To put our findings in perspective, the 6.4 × 10 18 instructions per second that humankind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second (10 17 ). The 2.4 × 10 21 bits stored by humanity in all of its technological devices in 2007 is approaching an order of magnitude of the roughly 10 23 bits stored in the DNA of a human adult, but it is still minuscule as compared with the 10 90 bits stored in the observable universe. However, in contrast to natural information processing, the world’s technological information processing capacities are quickly growing at clearly exponential rates" (Conclusion of the paper).

"Looking at both digital memory and analog devices, the researchers calculate that humankind is able to store at least 295 exabytes of information. (Yes, that's a number with 20 zeroes in it.)

"Put another way, if a single star is a bit of information, that's a galaxy of information for every person in the world. That's 315 times the number of grains of sand in the world. But it's still less than one percent of the information that is stored in all the DNA molecules of a human being. 2002 could be considered the beginning of the digital age, the first year worldwide digital storage capacity overtook total analog capacity. As of 2007, almost 94 percent of our memory is in digital form.

"In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS. That's equivalent to every person in the world reading 174 newspapers every day. On two-way communications technology, such as cell phones, humankind shared 65 exabytes of information through telecommunications in 2007, the equivalent of every person in the world communicating the contents of six newspapers every day.

"In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, in the same general order of magnitude as the number of nerve impulses executed by a single human brain. Doing these instructions by hand would take 2,200 times the period since the Big Bang.

"From 1986 to 2007, the period of time examined in the study, worldwide computing capacity grew 58 percent a year, ten times faster than the United States' GDP. Telecommunications grew 28 percent annually, and storage capacity grew 23 percent a year" (http://www.sciencedaily.com/releases/2011/02/110210141219.htm)

View Map + Bookmark Entry

The New York Times Begins Ranking eBook Best Sellers February 11, 2011

On February 11, 2011 The New York Times introduced best-seller lists that included e-book best sellers. They offered rankings of titles when print and e-book sales were combined, and also separate rankings for print and e-book titles in fiction and non-fiction categories, hardcover and paperback, etc.

"This week’s Book Review introduces revamped best-seller lists, the result of many months of planning, research and design.  

"On the Web, there are three entirely new lists. One consists of rankings for fiction and nonfiction that combine print and e-book sales; one is limited exclusively to e-book sales for fiction and nonfiction; and the third, Web-only list tracks combined print sales — of both hardcover and paperback editions — for fiction and nonfiction.

"All the other lists, though presented in reworked formats, will be familiar to readers. The Book Review’s related columns — TBR: Inside the List, Editors’ Choice and Paperback Row, all written by Book Review editors — remain in their accustomed places in print and online. We continue to offer extended rankings, a full methodology and a list archive online.  

"As before, The Times’s News Surveys department, which directs the paper’s polling operations, including its political and election polls, will collect and analyze the data reflected in each list."

View Map + Bookmark Entry

IBM's Watson Question Answering System Defeats Humans at Jeopardy! February 14 – February 16, 2011

On February 14, 2011 IBM's Watson question answering supercomputer, developed at IBM's T. J. Watson Research Center, Yorktown Heights, New York, and running DeepQA software, defeated the two best human Jeopardy! players, Ken Jennings and Brad Rutter. Watson's hardware consisted of 90 IBM Power 750 Express servers, each utilizing a 3.5 GHz POWER7 eight-core processor with four threads per core. The system operated with 16 terabytes of RAM.
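As a back-of-the-envelope tally (assuming one eight-core POWER7 chip per server, as the entry's figures imply), the cluster's aggregate core and thread counts are:

```python
# Aggregate compute resources of the Watson cluster described above.
servers = 90
cores_per_server = 8    # one eight-core POWER7 processor per server
threads_per_core = 4    # POWER7 simultaneous multithreading (SMT4)

total_cores = servers * cores_per_server         # 720 cores
total_threads = total_cores * threads_per_core   # 2880 hardware threads
print(total_cores, total_threads)                # 720 2880
```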

The machine's success underlined very significant advances in deep analytics and in the ability of a machine to process unstructured data, and especially to interpret and speak natural language.

"Watson is an effort by I.B.M. researchers to advance a set of techniques used to process human language. It provides striking evidence that computing systems will no longer be limited to responding to simple commands. Machines will increasingly be able to pick apart jargon, nuance and even riddles. In attacking the problem of the ambiguity of human language, computer science is now closing in on what researchers refer to as the “Paris Hilton problem” — the ability, for example, to determine whether a query is being made by someone who is trying to reserve a hotel in France, or simply to pass time surfing the Internet.  

"If, as many predict, Watson defeats its human opponents on Wednesday, much will be made of the philosophical consequences of the machine’s achievement. Moreover, the I.B.M. demonstration also foretells profound sociological and economic changes.  

"Traditionally, economists have argued that while new forms of automation may displace jobs in the short run, over longer periods of time economic growth and job creation have continued to outpace any job-killing technologies. For example, over the past century and a half the shift from being a largely agrarian society to one in which less than 1 percent of the United States labor force is in agriculture is frequently cited as evidence of the economy’s ability to reinvent itself.  

"That, however, was before machines began to 'understand' human language. Rapid progress in natural language processing is beginning to lead to a new wave of automation that promises to transform areas of the economy that have until now been untouched by technological change.  

" 'As designers of tools and products and technologies we should think more about these issues,' said Pattie Maes, a computer scientist at the M.I.T. Media Lab. Not only do designers face ethical issues, she argues, but increasingly as skills that were once exclusively human are simulated by machines, their designers are faced with the challenge of rethinking what it means to be human.  

"I.B.M.’s executives have said they intend to commercialize Watson to provide a new class of question-answering systems in business, education and medicine. The repercussions of such technology are unknown, but it is possible, for example, to envision systems that replace not only human experts, but hundreds of thousands of well-paying jobs throughout the economy and around the globe. Virtually any job that now involves answering questions and conducting commercial transactions by telephone will soon be at risk. It is only necessary to consider how quickly A.T.M.’s displaced human bank tellers to have an idea of what could happen" (John Markoff,"A Fight to Win the Future: Computers vs. Humans," http://www.nytimes.com/2011/02/15/science/15essay.html?hp, accessed 02-17-2011).

♦ As a result of this technological triumph, IBM took the unusual step of building a colorful website concerning all aspects of Watson, including numerous embedded videos.

♦ A few of many articles on the match published during or immediately after it included:

John Markoff, "Computer Wins on 'Jeopardy!': Trivial, It's Not," http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?hpw

Samara Lynn, "Dissecting IBM Watson's Jeopardy! Game," PC Magazinehttp://www.pcmag.com/article2/0,2817,2380351,00.asp

John C. Dvorak, "Watson is Creaming the Humans. I Cry Foul," PC Magazinehttp://www.pcmag.com/article2/0,2817,2380451,00.asp

Henry Lieberman published a three-part article in MIT Technology Review, "A Worthwhile Contest for Artificial Intelligence" http://www.technologyreview.com/blog/guest/26391/?nlid=4132

♦ An article which discussed the weaknesses of Watson versus a human in Jeopardy! was Greg Lindsay, "How I Beat IBM's Watson at Jeopardy! (3 Times)" http://www.fastcompany.com/1726969/how-i-beat-ibms-watson-at-jeopardy-3-times

♦ An opinion column emphasizing the limitations of Watson compared to the human brain was Stanley Fish, "What Did Watson the Computer Do?" http://opinionator.blogs.nytimes.com/2011/02/21/what-did-watson-the-computer-do/

♦ A critical response to Stanley Fish's column by Sean Dorrance Kelly and Hubert Dreyfus, author of What Computers Can't Do, was published in The New York Times at: http://opinionator.blogs.nytimes.com/2011/02/28/watson-still-cant-think/?nl=opinion&emc=tya1

View Map + Bookmark Entry

Borders Files Chapter 11 Bankruptcy & Closes the Last of its Bookstores February 16 – September 18, 2011

In February 2011 Borders, the second largest brick-and-mortar bookstore chain in the United States, headquartered in Ann Arbor, Michigan, filed for Chapter 11 bankruptcy. Borders' stock closed at 25 cents on February 12, 2011, reflecting expectations that the chain could be liquidated. The company was unable to compete adequately against Internet booksellers led by Amazon.com, or against the leading brick-and-mortar chain, Barnes and Noble. By September 18, 2011 Borders had closed the last of its more than 1200 bookstores across the United States, part of a larger trend of brick-and-mortar bookstore closures in which more than a thousand American bookstores closed between 2000 and 2007.

"Sales at Borders declined by double-digit percentage rates in 2008, 2009 and in each quarter in 2010 it has reported.

"Borders, which has 6,100 full time staff, operates 508 namesake superstores as well as a chain of smaller Waldenbooks stores.

"The company said it would close about 30 percent of its stores in the next several weeks and plans to continue to pay its employees.

"Borders' largest unsecured creditors include major publishers that provide the books it sells. Borders owes Pearson PLC's Penguin $41.2 million, Hachette Book Group USA $36.9 million, and CBS's Simon & Schuster $33.8 million, according to court documents.

"The case is In re: Borders Group Inc, U.S. Bankruptcy Court, Southern District of New York, No: 11-10614" (http://www.huffingtonpost.com/2011/02/16/borders-files-for-bankruptcy_n_823889.html?. accessed 11-16-2011).

View Map + Bookmark Entry

Two Billion People Now Use the Internet Regularly February 17, 2011

According to an article in The New York Times, two billion people in the world used the Internet regularly.

In rural America only 60% had broadband connections. 

"Over all, 28 percent of Americans do not use the Internet at all."

View Map + Bookmark Entry

The U. S. National Broadband Map February 17, 2011

The National Broadband Map (NBM), a searchable and interactive website that allows users to view broadband availability across every neighborhood in the United States, was first published.

The NBM was created by the U. S. National Telecommunications and Information Administration (NTIA), in collaboration with the Federal Communications Commission (FCC), and in partnership with 50 states, five territories and the District of Columbia. The NBM is a project of NTIA's State Broadband Initiative. The NBM will be updated approximately every six months. 

View Map + Bookmark Entry

The Second Best-Selling Book in America Priced Like an App (99 Cents) February 25, 2011

The second best-selling book in America, a thriller by Lisa Gardner called Alone, first published in hardback in 2005 at a list price of $25, was made available in February 2011 as an ebook for $0.99. Ebook sales drove this novel to the top of best-seller lists. 

"There are a few things going on here that are notable:

"1. Lisa Gardner's latest thriller hits shelves on March 8, so it's clear that her publisher decided to release this older title at a steep discount in order to generate buzz around this author.

"2. Books, like other media, are suddenly being priced like apps. This has far-reaching implications for how all media will be priced in the future, and could indicate a race to the bottom as consumers become increasingly unwilling to pay a premium for new titles when classics come cheap" (http://www.technologyreview.com/blog/mimssbits/26437/?nlid=4177, accessed 02-28-2011).

View Map + Bookmark Entry

Four Phases of Government Internet Surveillance and Censorship to Date February 25, 2011

Harvard Law professor and Vice Dean for Library and Information Services John Palfrey of the OpenNet Initiative wrote in "Middle East Conflict and an Internet Tipping Point" that the OpenNet Initiative had divided the ways in which states filter and practice surveillance over the Internet into four phases: "open Internet," "access denied," "access controlled," and "access contested."

"The first is the 'open Internet' period, from the network's birth through about 2000. In this period, there were few restrictions on the network globally. There was even an argument about whether the network could itself be regulated. This sense of unfettered freedom is a distant memory today.

"In the 'access denied' period that followed, through about 2005, states like China, Saudi Arabia, Tunisia, and dozens of others began to block access to certain information online. They developed technical Internet filtering modes to stop people from reaching certain websites, commonly including material deemed sensitive for political, cultural, or religious reasons.

"The most recent period, 'access controlled,' through 2010 or so, was characterized by the growth in the sophistication with which states began to control the flow of information online. Internet filtering grew in scope and scale, especially throughout Asia, the former Soviet states, and the Middle East and North Africa. Techniques to use the network for surveillance grew dramatically, as did "just-in-time" blocking approaches such as the use of distributed denial-of-service attacks against undesirable content. Overall, states got much more effective at pushing back on the use of the Internet by those who wished to share information broadly and for prodemocratic purposes.

"Today, we are entering a period that we should call 'access contested.' Activists around the world are pushing back on the denial of access and controls put in place by states that wish to restrict the free flow of information. This round of the contest, at least in the Middle East and North Africa, is being won by those who are using the network to organize against autocratic regimes. Online communities such as Herdict.org and peer-to-peer technologies like mesh networking provide specific ways for people to get involved directly in shaping how these technologies develop around the world" (http://www.technologyreview.com/web/32437/?p1=A1, accessed 02-28-2011).

View Map + Bookmark Entry

The Environmental Impacts of eBooks and eBook Readers March 2011

The Green Press Initiative issued a synthesis of various reports on The Environmental Impacts of eBooks and eBook Readers:

"Since the data suggests that sales of E-books are likely to increase while sales of printed books are likely to decrease, it is logical to question the environmental implications of this transition. In 2008 Green Press Initiative and the Book Industry Study Group commissioned a report on the environmental impacts of the U.S. book industry which included a lifecycle analysis of printed books. That report concluded that in 2006 the U.S. book industry consumed approximately 30 million trees and had a carbon footprint equivalent to 12.4 million metric tons of carbon dioxide, or 8.85 pounds per book sold.

"Determining the environmental impacts of an E-book presents a challenge that does not exist in estimating the impacts of a paper book. That challenge is the fact that user behavior will significantly influence the impact of an e-book. This is due to the fact that the manufacturing of the E-reader device accounts for the vast majority of an E-books environmental impact. Because of this, on a per book basis, a reader who reads 100 books on an e-reader will have almost 1/100th of the impact of someone who reads only one book on the same device. Additionally, two readers who each read the same number of books per year, can have a very different per-book environmental impact if one buys a new E-reader every year while the other keeps his for four years before replacing it. Because of the impact that user behavior can have on the environmental impact of E-books, any analysis will either have to make assumptions about the behavior of a “typical” reader of E-books, or else identify a break-even point in terms of the number of books that must be read on an E-reader to offset the environmental impacts of a corresponding number of paper books. However even this can be confused by the fact that it is not clear that reading one E-book offsets one paper book. For example, the ability to instantly download any book at any time may encourage E-reader owners to read more books in which case each e-book read would not necessarily correspond to a printed book that would have been read. Additionally someone who buys a printed book and later lends it to a friend to read would in effect halve the environmental impact of reading that book. As such any analysis should strive to account for this and determine a break-even point in terms of “printed books offset” rather than E-books read. Additional complexity is added by the fact that most E-readers can be used for a variety of tasks other than reading books. 
For example, most can read newspapers and magazines in addition to books and some E-readers can also read blogs and surf the internet. Tablet computers can allow a user to check e-mail, play games, view photos and videos, listen to music and surf the internet in addition to other things. Thus for the owner of a Tablet computer, who only spends 10% of his time using the tablet to read books, it would seem reasonable to assume that only 10% of the manufacturing impact of the tablet should be counted towards the impact of that users E-books. . . .

"As the number of printed books that the E-reader offsets increases, so do the benefits of that E-reader. At some point these gains offset the impact of manufacturing and using the E-reader. This “breakeven point” will be different for different metrics of environmental performance but for most it is likely somewhere between 30 and 70 printed books that are offset over the lifetime of the E-reader. For greenhouse gas emissions this number is probably between 20 and 35 books while for measures of human health impacts the number is probably closer to 70 books. In assessing the impact of an E-reader the idea of printed books offset must be carefully considered. As mentioned above, if the owner of an E-reader reads more books because of the ease and convenience of downloading a new title, then every book read on the device does not necessarily correspond to a printed book that is offset. Additionally the numbers in the figure above are based on a very simple comparison that is not likely to be replicated in the real world. The assumption is that the reader would either purchase a new printed book once and not share it with anyone else or that the reader would read the books on an E-reader and only use the E-reader for reading books. If a person would normally share a printed book with others, buy some used printed books, or borrow many of the printed books from the library then the numbers would need to be adjusted to account for that. Additionally, if the E-reader is used for other activities such as watching video, browsing the internet, checking email, or reading magazines and newspapers, it is unfair to assign the full impacts of producing the E-reader to E-books. More research is needed on typical user behavior in terms of time spent reading E-books verses other activities on E-readers and Tablet computers in order to make a more accurate comparison. 
If the trend of the iPad stealing market share from the Kindle continues, it seems likely that users will spend more time on the other activities that tablets like the iPad are optimized for. Additionally, if someone already owns a tablet computer or an E-reader, the marginal impact of downloading and reading an additional book is quite small. Thus for someone who already owns a device capable of reading E-books, the best choice from an environmental perspective would likely be to read a new book on that device."
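The amortization logic the report describes can be sketched in a few lines. The 8.85 pounds of CO2 per printed book is the figure quoted above; the device manufacturing footprint and the 10% reading share below are hypothetical placeholders for illustration, not numbers from the report:

```python
# Sketch of the break-even arithmetic described in the report. Only the
# per-printed-book figure (8.85 lb CO2, 2006) comes from the text above;
# the device footprint and reading share are illustrative assumptions.

LB_TO_KG = 0.4536

def breakeven_books(device_kg_co2, printed_book_kg_co2, reading_share=1.0):
    """Printed books that must be offset before the device's manufacturing
    footprint is amortized. `reading_share` scales the device footprint
    when the device is also used for activities other than reading books."""
    return (device_kg_co2 * reading_share) / printed_book_kg_co2

printed = 8.85 * LB_TO_KG   # ~4.0 kg CO2 per printed book (2006 industry figure)
device = 100.0              # hypothetical e-reader manufacturing footprint, kg CO2

print(round(breakeven_books(device, printed)))        # 25: device used only for books
print(round(breakeven_books(device, printed, 0.10)))  # 2: tablet used 10% for books
```

With the hypothetical 100 kg device footprint the break-even lands near the 20-35 book range the report gives for greenhouse gases, and assigning only a 10% share of the device to book reading shrinks it dramatically, which is the report's point about tablets.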

View Map + Bookmark Entry

A 3D Printer Kit for only $499 March 2011 – November 2012

In March 2011 MakerBot Industries, Brooklyn, New York, introduced the Thing-O-Matic, a 3D printer kit selling for $1,299.

Little more than a year later the Portabee 3D printer, the "first conveniently portable 3D printer," manufactured in Singapore, was available for $499. "It is easily collapsible in a matter of seconds and fits into a laptop bag." 

3D printing, a type of additive manufacturing, evolved from rapid prototyping, which began in the 1980s. 

View Map + Bookmark Entry

Koomey’s Law of Electrical Efficiency in Computing March 2011

Energy and environmental scientist Jonathan Koomey of Stanford University, with Stephen Berard, Maria Sanchez, and Henry Wong, published “Implications of Historical Trends in the Electrical Efficiency of Computing,” IEEE Annals of the History of Computing 33, no. 3 (2011): 46-54. This historical paper was highly unusual for its enunciation of a predictive trend in computing technology, labeled by the press “Koomey’s Law.”

“Koomey’s law describes a long-term trend in the history of computing hardware. The number of computations per joule of energy dissipated has been doubling approximately every 1.57 years. This trend has been remarkably stable since the 1950s (R2 of over 98%) and has actually been somewhat faster than Moore’s law. Jon Koomey articulated the trend as follows: ‘at a fixed computing load, the amount of battery you need will fall by a factor of two every year and a half.’

“Because of Koomey’s law, the amount of battery needed for a fixed computing load will fall by a factor of 100 every decade. As computing devices become smaller and more mobile, this trend may be even more important than improvements in raw processing power for many applications. Furthermore, energy costs are becoming an increasingly important determinant of the economics of data centers, further increasing the importance of Koomey’s law” (Wikipedia article on Koomey's Law accessed 11-19-2011).
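The factor-of-100-per-decade figure is simple compounding of the doubling time. A quick numeric check (a sketch, not code from the paper):

```python
# Compound the efficiency doubling over a decade. The popular "every year
# and a half" phrasing yields the factor-of-100-per-decade figure; the
# measured 1.57-year doubling gives a somewhat smaller multiple.

def efficiency_gain(years, doubling_time_years):
    """Multiplicative gain in computations per joule after `years`."""
    return 2 ** (years / doubling_time_years)

print(round(efficiency_gain(10, 1.5)))   # 102: roughly the factor of 100 per decade
print(round(efficiency_gain(10, 1.57)))  # 83: at the measured 1.57-year doubling
```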

View Map + Bookmark Entry

An Interactive Map of the Internet Later Produced as an iPhone App March 2011 – March 2013

In March 2011 peer1 hosting (peer1.com), headquartered in Vancouver, B. C., issued The Map of the Internet, a visual representation of all the networks around the world that were interconnected to form the Internet. These included small and large Internet service providers (ISPs), Internet exchange points, university networks, and organization networks such as Facebook and Google. The size of the nodes and the thickness of the interconnecting lines reflected the size of particular providers in relation to one another. This map was produced as a two-dimensional poster that could be downloaded from their website.

"Geek Version – You’re looking at all the autonomous systems that make up the Internet. Each autonomous system is a network operated by a single organization, and has routing connections to some number of neighboring autonomous systems. The image depicts a graph of 19,869 autonomous system nodes, joined by 44,344 connections. The sizing and layout of the autonomous systems are based on their eigenvector centrality, which is a measure of how central to the network each autonomous system is: an autonomous system is central if it is connected to other autonomous systems that are central. This is the same graph-theoretical concept that forms the basis of Google’s PageRank algorithm. The Map of the Internet image layout begins with the most central nodes and proceeds to the least, positioning them on a grid that subdivides after each order of magnitude of centrality. Within the constraints of the current subdivision level, nodes are placed as near as possible to previously-placed nodes that they are connected to" (http://www.peer1.com/blog/2011/03/map-of-the-internet-2011, accessed 03-14-2013).

Two years later, in March 2013, peer1 issued their Map of the Internet as a free iPhone app. This visually distinctive and beautiful interactive app, based on data provided by The Cooperative Association for Internet Data Analysis (CAIDA), allowed users to:

• Zoom and pan to enlarge or rotate the map in 3D

• Tap on nodes to learn more about them

• Browse historical data and events that shaped the Internet

• Perform a traceroute to a node from your network

• Search for companies or domains

• Change views to see geographic or hierarchical maps

View Map + Bookmark Entry

In its First Year Apple's iBookstore Sold 100,000,000 Books March 2, 2011

On March 2, 2011 Steve Jobs announced that 100 million ebooks had been downloaded since the company introduced its iBookstore one year earlier.

View Map + Bookmark Entry

The Impact of Automation on Legal Research March 4, 2011

"Armies of Expensive Lawyers Replaced by Cheaper Software," an article by John Markoff published in The New York Times, discussed the use of "e-discovery" (ediscovery) software which uses artificial intelligence to analyze millions of electronic documents from the linguistic, conceptual and sociological standpoint in a fraction of the time and at a fraction of the cost of the hundreds of lawyers previously required to do the task.

"These new forms of automation have renewed the debate over the economic consequences of technological progress.  

"David H. Autor, an economics professor at the Massachusetts Institute of Technology, says the United States economy is being 'hollowed out.' New jobs, he says, are coming at the bottom of the economic pyramid, jobs in the middle are being lost to automation and outsourcing, and now job growth at the top is slowing because of automation.  

" 'There is no reason to think that technology creates unemployment,' Professor Autor said. 'Over the long run we find things for people to do. The harder question is, does changing technology always lead to better jobs? The answer is no.'

"Automation of higher-level jobs is accelerating because of progress in computer science and linguistics. Only recently have researchers been able to test and refine algorithms on vast data samples, including a huge trove of e-mail from the Enron Corporation. 

“ 'The economic impact will be huge,' said Tom Mitchell, chairman of the machine learning department at Carnegie Mellon University in Pittsburgh. 'We’re at the beginning of a 10-year period where we’re going to transition from computers that can’t understand language to a point where computers can understand quite a bit about language.'

View Map + Bookmark Entry

The Impact of Artificial Intelligence and Automation on Jobs March 6, 2011

In an op-ed column called Degrees and Dollars published in The New York Times Nobel Prize winning economist Paul Krugman of Princeton wrote concerning the impact of artificial intelligence and automation on jobs:

"The fact is that since 1990 or so the U.S. job market has been characterized not by a general rise in the demand for skill, but by “hollowing out”: both high-wage and low-wage employment have grown rapidly, but medium-wage jobs — the kinds of jobs we count on to support a strong middle class — have lagged behind. And the hole in the middle has been getting wider: many of the high-wage occupations that grew rapidly in the 1990s have seen much slower growth recently, even as growth in low-wage employment has accelerated."

"Some years ago, however, the economists David Autor, Frank Levy and Richard Murnane argued that this was the wrong way to think about it. Computers, they pointed out, excel at routine tasks, “cognitive and manual tasks that can be accomplished by following explicit rules.” Therefore, any routine task — a category that includes many white-collar, nonmanual jobs — is in the firing line. Conversely, jobs that can’t be carried out by following explicit rules — a category that includes many kinds of manual labor, from truck drivers to janitors — will tend to grow even in the face of technological progress.  

"And here’s the thing: Most of the manual labor still being done in our economy seems to be of the kind that’s hard to automate. Notably, with production workers in manufacturing down to about 6 percent of U.S. employment, there aren’t many assembly-line jobs left to lose. Meanwhile, quite a lot of white-collar work currently carried out by well-educated, relatively well-paid workers may soon be computerized. Roombas are cute, but robot janitors are a long way off; computerized legal research and computer-aided medical diagnosis are already here.

"And then there’s globalization. Once, only manufacturing workers needed to worry about competition from overseas, but the combination of computers and telecommunications has made it possible to provide many services at long range. And research by my Princeton colleagues Alan Blinder and Alan Krueger suggests that high-wage jobs performed by highly educated workers are, if anything, more “offshorable” than jobs done by low-paid, less-educated workers. If they’re right, growing international trade in services will further hollow out the U.S. job market."

View Map + Bookmark Entry

The Largest Interior Image: The Strahov Monastery Library March 29, 2011

360cities.net posted a 40-gigapixel panorama of the baroque Philosophical Hall, containing 42,000 volumes, in the Strahov Monastery Library in Prague.

The spectacular image is particularly useful since tourists visiting the monastery may only glimpse this library room from one roped-off entrance. When the image was posted on YouTube and on 360cities.net it was the largest interior panoramic image taken to date, showing all aspects of the room in the smallest detail.

♦ An article published in Wired magazine on March 29, 2011 provided production details, multiple images, and a video showing how the panorama was created.

View Map + Bookmark Entry

A Program for Signing and Inscribing Ebooks April 2011

Author and inventor T. J. Waters developed a program for signing and inscribing ebooks called Autography. Because the inscription is sent over the Internet, signing can be done remotely or in person.

View Map + Bookmark Entry

Walmart Buys Kosmix.com, Forming @WalmartLabs April 18, 2011

Wal-Mart, the world’s largest retailer, agreed to buy Kosmix.com, a social media start-up focused on ecommerce, creating @WalmartLabs.

"Eric Schmidt famously observed that every two days now, we create as much data as we did from the dawn of civilization until 2003. A lot of the new data is not locked away in enterprise databases, but is freely available to the world in the form of social media: status updates, tweets, blogs, and videos.

"At Kosmix, we’ve been building a platform, called the Social Genome, to organize this data deluge by adding a layer of semantic understanding. Conversations in social media revolve around 'social elements' such as people, places, topics, products, and events. For example, when I tweet 'Loved Angelina Jolie in Salt,' the tweet connects me (a user) to Angelia Jolie (an actress) and SALT (a movie). By analyzing the huge volume of data produced every day on social media, the Social Genome builds rich profiles of users, topics, products, places, and events. The Social Genome platform powers the sites Kosmix operates today: TweetBeat, a real-time social media filter for live events; Kosmix.com, a site to discover content by topic; and RightHealth, one of the top three health and medical information sites by global reach. In March, these properties together served over 17.5 million unique visitors worldwide, who spent over 5.5 billion seconds on our services.

"Quite a few of us at Kosmix have backgrounds in ecommerce, having worked at companies such as Amazon.com and eBay. As we worked on the Social Genome platform, it became apparent to us that this platform could transform ecommerce by providing an unprecedented level of understanding about customers and products, going well beyond purchase data. The Social Genome enables us to take search, personalization and recommendations to the next level.

"That’s why we were so excited when Walmart invited us to share with them our vision for the future of retailing. Walmart is the world’s largest retailer, with 10.5 billion customer visits every year to their stores and 1.5 billion online – 1 in 10 customers around the world shop Walmart online, and that proportion is growing. More and more visitors to the retail stores are armed with powerful mobile phones, which they use both to discover products and to connect with their friends and with the world. It was very soon apparent that the Walmart leadership shared our vision and our enthusiasm. And so @WalmartLabs was born. . . .

"We are at an inflection point in the development of ecommerce. The first generation of ecommerce was about bringing the store to the web. The next generation will be about building integrated experiences that leverage the store, the web, and mobile, with social identity being the glue that binds the experience. Walmart’s enormous global reach and incredible scale of operations -- from the United States and Europe to growing markets like China and India -- is unprecedented. @WalmartLabs, which combines Walmart’s scale with Kosmix’s social genome platform, is in a unique position to invent and build this future" (http://walmartlabs.blogspot.com/search?updated-max=2011-11-30T21:01:00-08:00&max-results=7, accessed 01-20-2012).

View Map + Bookmark Entry

Amazon to Launch Library Lending for eBooks on the Kindle Platform April 20, 2011

"Amazon today announced Kindle Library Lending, a new feature launching later this year that will allow Kindle customers to borrow Kindle books from over 11,000 libraries in the United States. Kindle Library Lending will be available for all generations of Kindle devices and free Kindle reading apps.  

" 'We're excited that millions of Kindle customers will be able to borrow Kindle books from their local libraries,' said Jay Marine, Director, Amazon Kindle. 'Customers tell us they love Kindle for its Pearl e-ink display that is easy to read even in bright sunlight, up to a month of battery life, and Whispersync technology that synchronizes notes, highlights and last page read between their Kindle and free Kindle apps.'

"Customers will be able to check out a Kindle book from their local library and start reading on any Kindle device or free Kindle app for Android, iPad, iPod touch, iPhone, PC, Mac, BlackBerry, or Windows Phone. If a Kindle book is checked out again or that book is purchased from Amazon, all of a customer's annotations and bookmarks will be preserved.  

" 'We're doing a little something extra here,' Marine continued. 'Normally, making margin notes in library books is a big no-no. But we're extending our Whispersync technology so that you can highlight and add margin notes to Kindle books you check out from your local library. Your notes will not show up when the next patron checks out the book. But if you check out the book again, or subsequently buy it, your notes will be there just as you left them, perfectly Whispersynced.'

"With Kindle Library Lending, customers can take advantage of all of the unique features of Kindle and Kindle books, including: //Paper-like Pearl electronic-ink display

◊ No glare even in bright sunlight

◊ Lighter than a paperback - weighs just 8.5 ounces and holds up to 3,500 books

◊ Up to one month of battery life with wireless off

◊ Read everywhere with free Kindle apps for Android, iPad, iPod touch, iPhone, PC, Mac, BlackBerry and Windows Phone

◊ Whispersync technology wirelessly syncs your books, notes, highlights, and last page read across Kindle and free Kindle reading apps

◊ Real Page Numbers - easily reference passages with page numbers that correspond to actual print editions

"Amazon is working with OverDrive, the leading provider of digital content solutions for over 11,000 public and educational libraries in the United States, to bring a seamless library borrowing experience to Kindle customers.

"We are excited to be working with Amazon to offer Kindle Library Lending to the millions of customers who read on Kindle and Kindle apps," said Steve Potash, CEO, OverDrive. "We hear librarians and patrons rave about Kindle, so we are thrilled that we can be part of bringing library books to the unparalleled experience of reading on Kindle."  

"Kindle Library Lending will be available later this year for Kindle and free Kindle app users." (http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1552678&highlight, accessed 04-20-2011)

View Map + Bookmark Entry

Microsoft Acquires Skype for $8.5 Billion May 2011

With its $8.5 billion acquisition of Skype, Microsoft acquired a company founded in 2003 that had never made money, had changed hands several times, and came with substantial debt. 

The purchase price was roughly ten times the $860 million revenue of the company in 2010. Skype's debt was $686 million — not a problem for Microsoft.

Microsoft paid such a premium for the company because at the time of purchase Skype was growing at the rate of 500,000 new registered users per day, had 170 million connected users, with 30 million users communicating on the Skype platform concurrently. Volume of communications over the platform totaled 209 billion voice and video minutes in 2010.

"Services like Skype can cut into the carriers’ revenues because they offer easy ways to make phone calls, videoconference and send messages free over the Internet, encroaching on the ways that phone companies have traditionally made money" (http://www.nytimes.com/2011/05/16/technology/16phone.html?hpw, accessed 05-16-2011).

View Map + Bookmark Entry

In May 2011 Netflix was the Largest Source of Internet Traffic in North America May 2011

In May 2011 video streaming company Netflix, headquartered in Los Gatos, California, was the largest source of Internet traffic in North America, accounting for 29.7 percent of peak downstream traffic. The company was also the largest overall source of Internet traffic.

"Currently, real-time entertainment applications consume 49.2 percent of peak aggregate traffic - up from 29.5 percent in 2009. And the company forecasts that the category will account for as much as 60 percent of peak aggregate traffic by the end of this year.

"And in Europe, the figure's even higher. Overall, individual subscribers in Europe consume twice the amount of data as North Americans" (http://www.tgdaily.com/games-and-entertainment-features/56015-netflix-becomes-biggest-source-of-internet-traffic, accessed 05-18-2011). 

View Map + Bookmark Entry

The Saint John's Bible is Completed May 2011

The Saint John's Bible, the first complete illuminated manuscript of the Bible commissioned by a Benedictine monastery since the invention of printing by movable type, was completed for St. John's Abbey and University in May 2011.

Commissioned in 1998 by monks at St. John’s Abbey and University in Collegeville, Minnesota to celebrate the beginning of a new millennium, The St. John's Bible was a collaboration between a group of scriptural scholars and theologians and a team of artists and calligraphers under the supervision of artistic director and master calligrapher Donald Jackson, Senior Scribe to Her Majesty Queen Elizabeth's Crown Office at the House of Lords, whose lifelong dream was to create an illuminated Bible.

Donald Jackson and his team wrote and illuminated The St. John's Bible on nearly 1150 large folio pages of vellum with 160 illuminations. The page openings of the manuscript are two feet high by three feet wide. The project was produced in a scriptorium in Monmouth, Wales, using quills hand-cut from goose, swan, and turkey feathers, and paints hand-ground from precious minerals and stones such as lapis lazuli, malachite, silver, and 24-karat gold. To complete the project Jackson wrote the entire book of Revelation himself, without the assistance of additional scribes.

The complete Bible was bound in seven folio volumes, with some volumes weighing as much as 35 pounds.

In a blending of medieval and 21st-century technology, page-layout programs were used to plan the layout of the Bible and define line breaks for the text.

The biblical text chosen was the New Revised Standard Version translation of the Bible because its predecessor, the Revised Standard Version, was officially authorized for use by most Christian churches, whether Protestant, Anglican, Roman Catholic or Eastern Orthodox.

Distinguishing this 21st century illuminated manuscript of the bible from its medieval predecessors, various printed reproductions of the manuscript were published as each manuscript volume was completed. The reproductions exploited the finest available printing technology. The printed reproductions included an Apostles Edition, limited to only 12 sets, a limited, numbered and signed full-size exact facsimile Heritage Edition, and a reduced-size trade edition, complete in seven small folio volumes.

View Map + Bookmark Entry

McKinsey Report on the Impact of the Internet on Growth, Jobs, and Prosperity May 2011

McKinsey research into the Internet economies of the G-8 nations as well as Brazil, China, India, South Korea, and Sweden found that the web accounted for a significant and growing portion of global GDP. If measured as a sector, Internet-related consumption and expenditure were bigger than agriculture or energy. On average, the Internet contributed 3.4 percent to GDP in the 13 countries covered by the research—an amount the size of Spain or Canada in terms of GDP, and growing at a faster rate than that of Brazil.

"Research prepared by the McKinsey Global Institute and McKinsey's Technology, Media and Telecommunications Practices as part of a knowledge partnership with the e-G8 Forum, offers the first quantitative assessment of the impact of the Internet on GDP and growth, while also considering the most relevant tools governments and businesses can use to get the most benefit from the digital transformation. To assess the Internet's contribution to the global economy, the report analyzes two primary sources of value: consumption and supply. The report draws on a macroeconomic approach used in national accounts to calculate the contribution of GDP; a statistical econometric approach; and a microeconomic approach, analyzing the results of a survey of 4,800 small and medium-size enterprises in a number of different countries.  

"The Internet's impact on global growth is rising rapidly. The Internet accounted for 21 percent of GDP growth over the last five years among the developed countries MGI studied, a sharp acceleration from the 10 percent contribution over 15 years. Most of the economic value created by the Internet falls outside of the technology sector, with 75 percent of the benefits captured by companies in more traditional industries. The Internet is also a catalyst for job creation. Among 4,800 small and medium-size enterprises surveyed, the Internet created 2.6 jobs for each lost to technology-related efficiencies.

"The United States is the largest player in the global Internet supply ecosystem, capturing more than 30 percent of global Internet revenues and more than 40 percent of net income. It is also the country with the most balanced structure within the global ecosystem among the 13 countries studied, garnering relatively equal contributions from hardware, software and services, and telecommunications. The United Kingdom and Sweden are changing the game, in part driven by the importance and the performance of their telecom operators. India and China are strengthening their position in the global Internet ecosystem rapidly with growth rates of more than 20 percent. France, Canada, and Germany have an opportunity to leverage their strong Internet usage to increase their presence in the supply ecosystem. Other Asian countries are rapidly accelerating their influence on the Internet economy at faster rates than Japan. Brazil, Russia and Italy are in the early stages of Internet supply. They have strong potential for growth" (http://www.mckinsey.com/Insights/MGI/Research/Technology_and_Innovation/Internet_matters, accessed 01-19-2012).

View Map + Bookmark Entry

The First Major Print Magazine Publisher to Offer iPad Subscriptions May 9, 2011

Condé Nast, publisher of The New Yorker, became the first major print magazine publisher to offer a subscription plan on the iPad for one of its magazines. Previously, iPad readers had to download each issue separately.

"And the iPad subscription offer is quite aggressive: $5.99 for one month (for four issues) and $59.99 for a full year. But even more surprising, a bundled version of print and digital subscriptions, is available for $6.99 a month, or $69.99 a year. (Current print subscribers can sign in to the iPad version at no additional charge.)

"Subscriptions on the iPad to The New Yorker went on sale early Monday, and subscriptions for other Condé Nast magazines, including Vanity Fair, Glamour, Golf Digest, Allure, Wired, Self and GQ, will become available in the coming weeks. The Condé Nast-Apple deal was first reported in The New York Post last week."

"Condé Nast has traditionally gotten its magazines in the hands of consumers at a cheap price in the hopes of building up big rate bases, the number used to sell advertisers, and the deal with Apple is consistent with that advertising-first approach. Over time, the new tablet subscribers could be a boon to advertising now that the Audit Bureau of Circulations has ruled that digital subscribers can be counted toward the rate base. The bundled subscriptions could also help protect the legacy business by giving a boost to print subscriptions while selling many more digital ones — young people and international consumers are a particular target."

"It will come at a price. Although Condé Nast can sell digital subscriptions on its own Web sites, the vast majority of sales will take place in the Apple App Store, where nearly a third of the price will go to Apple (specific terms were not disclosed). In addition, the consumer data derived from app store sales will belong to Apple and shared as the company sees fit, although Mr. Cue said that “magazine publishers will know a lot more about subscribers on the iPad than they ever did about print subscribers.”  

"By teaming with Apple, Condé Nast and other publishers gain access to a database of 200 million credit card holders and a sales environment where billions of songs and millions of apps have already been sold. But the music industry lesson is one that is not lost on publishing. Apple may have “saved” the music industry, but it is a much smaller business with little control over its pricing" (http://www.nytimes.com/2011/05/10/business/media/10conde.html?src=rechp, accessed 05-10-2011). 

View Map + Bookmark Entry

The First Large Robotized Library May 16, 2011

The Joe and Rika Mansueto Library at the University of Chicago may be the first large library to employ robotized storage underground. The above-ground structure of the library, an elongated dome, is also highly distinctive. 

Physical books marked with bar codes are placed in bins that are manipulated and stored underground by robotic systems. This enables far more compact storage of physical volumes than would be possible if the books were shelved in library stacks and paged by humans. The robotic system at the Mansueto Library is designed to store 3.5 million volumes.
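The bar-code-and-bin scheme described above can be sketched as a simple index lookup: because books are stored by bin rather than shelved in call-number order, a retrieval request reduces to a dictionary lookup followed by a robotic fetch of the whole bin. All identifiers below are invented for illustration.

```python
# Hypothetical sketch of bar-code-driven bin retrieval; the bar codes
# and bin identifiers are made up, not the Mansueto Library's actual scheme.
book_to_bin = {
    "BARCODE-0001": "BIN-A-117",
    "BARCODE-0002": "BIN-A-117",  # unrelated titles can share a single bin
    "BARCODE-0003": "BIN-C-042",
}

def request_book(barcode: str) -> str:
    # Constant-time index lookup; no shelf ordering is needed
    bin_id = book_to_bin[barcode]
    return f"retrieve {bin_id}; pull {barcode} from the bin"

print(request_book("BARCODE-0001"))
```

The compactness comes from exactly this decoupling: bins can be packed densely in any order, since the index, not physical adjacency, locates each volume.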

Because the entire storage portion of the Mansueto Library is underground, the University of Chicago was able to build the new library next to the Regenstein Library without disrupting the openness of the existing space between buildings, and to store a very large number of physical volumes on campus, where they are used, rather than in an off-site storage facility.

View Map + Bookmark Entry

"Print isn't dead, says Bowker's Annual Book Production Report May 18, 2011

" . . . Bowker is projecting that despite the popularity of e-books, traditional U.S. print title output in 2010 increased 5%. Output of new titles and editions increased from 302,410 in 2009 to a projected 316,480 in 2010. The 5% increase comes on the heels of a 4% increase the previous year based on the final 2008-2009 figures.

"The non-traditional sector continues its explosive growth, increasing 169% from 1,033,065 in 2009 to an amazing 2,776,260 in 2010. These books, marketed almost exclusively on the web, are largely on-demand titles produced by reprint houses specializing in public domain works and by presses catering to self-publishers and 'micro-niche' publications.

“ 'These publication figures from both traditional and non-traditional publishers confirm that print production is alive and well, and can still be supported in this highly dynamic marketplace,' said Kelly Gallagher, vice president of publishing services for Bowker. 'Especially on the non-traditional side, we’re seeing the reprint business’ internet-driven business model expand dramatically. It will be interesting to see in the coming years how well it succeeds in the long-term.'

"In traditional publishing, SciTech continues to drive growth

"Continuing the trend seen last year, science and technology were the leading areas of growth as consumers purchased information for business and careers. Major increases were seen in Computers (51% over 2009, with an average five-year growth rate of 8%), Science (37% over 2009, with an average five-year growth rate of 12%) and Technology (35% over 2009, with an average five-year growth rate of 11%). Categories subject to discretionary spending were the top losers, perhaps still feeling the effects of a sluggish economy. Literature (-29%), Poetry (-15%), History (-12), and Biography (-12%) all recorded double digit declines. Fiction, which is still the largest category (nearly 15% of the total) dropped 3% from 2009, continuing a decline from peak output in 2007. Religion (-4%) fell to 5th place behind Science among the largest categories" (http://www.bowker.com/index.php/press-releases/633-print-isnt-dead-says-bowkers-annual-book-production-report, accessed 07-29-2011).

View Map + Bookmark Entry

Ebooks Outsell Physical Books on Amazon.com May 19, 2011

Amazon reported that since April 1, 2011 it had sold 105 books for its Kindle ebook reader for every 100 hardcover and paperback physical books.

At this time ebook sales represented 14% of all general consumer fiction and nonfiction books sold, according to Forrester Research.

Amazon introduced the Kindle on November 19, 2007.

View Map + Bookmark Entry

"Turn on, Tune in, Drop Out": The New York Public Library Buys the Timothy Leary Papers June 2011

The New York Public Library purchased the library and papers of psychologist, writer and "psychedelic explorer" Timothy Leary for $900,000.

"A hugely controversial figure during the 1960s and 1970s, he defended the use of the drug LSD for its therapeutic, emotional and spiritual benefits, and believed it showed incredible potential in the field of psychiatry. Leary also popularized the phrase 'Turn on, tune in, drop out'. Both proved to be hugely influential on the 1960s counterculture. Largely due to his influence in this field, he was attacked by conservative figures in the United States, and described as 'the most dangerous man in America' by President Richard Nixon" (Wikipedia article on Timothy Leary, accessed 06-16-2011).

That an establishment institution would acquire Leary's papers at significant cost shows both the insight of the institution's acquisitions committee and the relatively short time it took society to appreciate Leary's central role in the historically significant aspects of the 1960s-1970s counterculture.

View Map + Bookmark Entry

The Expanding Digital Universe: Surpassing 1.8 Zettabytes June 2011

John F. Gantz and David Reinsel of International Data Corporation (IDC) published a summary of their annual study of the digital universe on the fifth anniversary of the study:

"We always knew it was big – in 2010 cracking the zettabyte barrier. In 2011, the amount of information created and replicated will surpass 1.8 zettabytes (1.8 trillion gigabytes) - growing by a factor of 9 in just five years.

"But, as digital universe cosmologists, we have also uncovered a number of other things — some predictable, some astounding, and some just plain disturbing.

"While 75% of the information in the digital universe is generated by individuals, enterprises have some liability for 80% of information in the digital universe at some point in its digital life. The number of "files," or containers that encapsulate the information in the digital universe, is growing even faster than the information itself as more and more embedded systems pump their bits into the digital cosmos. In the next five years, these files will grow by a factor of 8, while the pool of IT staff available to manage them will grow only slightly. Less than a third of the information in the digital universe can be said to have at least minimal security or protection; only about half the information that should be protected is protected.

"The amount of information individuals create themselves — writing documents, taking pictures, downloading music, etc. — is far less than the amount of information being created about them in the digital universe.

"The growth of the digital universe continues to outpace the growth of storage capacity. But keep in mind that a gigabyte of stored content can generate a petabyte or more of transient data that we typically don't store (e.g., digital TV signals we watch but don't record, voice calls that are made digital in the network backbone for the duration of a call).  

"So, like our physical universe, the digital universe is something to behold — 1.8 trillion gigabytes in 500 quadrillion "files" — and more than doubling every two years. That's nearly as many bits of information in the digital universe as stars in our physical universe" (http://idcdocserv.com/1142, accessed 08-09-2011).

♦ In August 2011 a video presentation of John Gantz delivering his summary speech was available at this link: http://www.emc.com/collateral/demos/microsites/emc-digital-universe-2011/index.htm

View Map + Bookmark Entry

Facebook Serves a Trillion Page Views in June 2011 June 2011

According to Google's DoubleClick Ad Planner list of "The 1000 most visited sites on the web," Facebook, the most visited website in the world, served 1 trillion page views to 860,000,000 unique visitors in June 2011.

View Map + Bookmark Entry

"Physical Archiving is Still an Important Function in the Digital Era."The Internet Archive Builds an Archive of Physical Books June 6, 2011

In one of the more ironic developments of the Internet era, the Internet Archive is creating a Physical Archive in Richmond, California, of all books it scanned that did not have to be returned to institutional libraries, and of other physical books as well. Its goal is to collect "one copy of every book." The rationale is that the physical books are authentic and original versions that can be used in the future, and that "If there is ever a controversy about the digital version, the original can be examined." The physical books are being stored in the most compact archival fashion, in environmentally controlled shipping containers placed in warehouses, not in the way an institutional library would store them if it had to provide regular access.

Brewster Kahle, founder of the Internet Archive, explained the Physical Archive of the Internet Archive:

"Digital technologies are changing both how library materials are accessed and increasingly how library materials are preserved. After the Internet Archive digitizes a book from a library in order to provide free public access to people world-wide, these books go back on the shelves of the library. We noticed an increasing number of books from these libraries moving books to 'off site repositories'  to make space in central buildings for more meeting spaces and work spaces. These repositories have filled quickly and sometimes prompt the de-accessioning of books. A library that would prefer to not be named was found to be thinning their collections and throwing out books based on what had been digitized by Google. While we understand the need to manage physical holdings, we believe this should be done thoughtfully and well.  

"Two of the corporations involved in major book scanning have sawed off the bindings of modern books to speed the digitizing process. Many have a negative visceral reaction to the “butchering” of books, but is this a reasonable reaction?  

"A reason to preserve the physical book that has been digitized is that it is the authentic and original version that can be used as a reference in the future. If there is ever a controversy about the digital version, the original can be examined. A seed bank such as the Svalbard Global Seed Vault is seen as an authoritative and safe version of crops we are growing. Saving physical copies of digitized books might at least be seen in a similar light as an authoritative and safe copy that may be called upon in the future.  

"As the Internet Archive has digitized collections and placed them on our computer disks, we have found that the digital versions have more and more in common with physical versions. The computer hard disks, while holding digital data, are still physical objects. As such we archive them as they retire after their 3-5 year lifetime. Similarly, we also archive microfilm, which was a previous generation’s access format. So hard drives are just another physical format that stores information. This connection showed us that physical archiving is still an important function in a digital era.  

"There is also a connection between digitized collections and physical collections. The libraries we scan in, rarely want more digital books than the digital versions that we scan from their collections. This struck us as strange until we better understood the craftsmanship required in putting together great collections of books, whether physical or digital. As we are archiving the books, we are carefully recording with the physical book what the identifier for the virtual version, and attaching information to the digital version of where the physical version resides. 

"Therefore we have determined that we will keep a copy of the books we digitize if they are not returned to another library. Since we are interested in scanning one copy of every book ever published, we are starting to collect as many books as we can" (http://blog.archive.org/2011/06/06/why-preserve-books-the-new-physical-archive-of-the-internet-archive/, accessed 06-09-2011).

"Mr. Kahle had the idea for the physical archive while working on the Internet Archive, which has digitized two million books. With a deep dedication to traditional printing — one of his sons is named Caslon, after the 18th-century type designer — he abhorred the notion of throwing out a book once it had been scanned. The volume that yielded the digital copy was special.  

"And perhaps essential. What if, for example, digitization improves and we need to copy the books again?  

“ 'Microfilm and microfiche were once a utopian vision of access to all information,' Mr. Kahle noted, 'but it turned out we were very glad we kept the books' " (http://www.nytimes.com/2012/03/04/technology/internet-archives-repository-collects-thousands-of-books.html?nl=todaysheadlines&emc=tha25, accessed 03-30-2012).


View Map + Bookmark Entry

College Textbooks Make a Slower Transition from Print to Digital June 6, 2011

College textbooks are slower to make the transition from print to digital than autobiographies, murder mysteries, romance novels and self-help books.  

"Although sites like CourseSmart , a collective effort among the five biggest American academic publishers to offer digital content, have made e-textbooks widely available at prices that are as much as 60 percent lower than the print editions, sales have yet to catch up; e-textbooks made up only 2.8 percent of total U.S. textbook sales in 2010, according to the National Association of College Stores.  

"But a new study by the nonprofit arm of the Pearson Foundation shows that while 55 percent of students still prefer print over digital textbooks, among the 7 percent of students who own tablets devices like iPads, 73 percent prefer digital textbooks. With 70 percent of college students interested in owning a tablet, and 15 percent saying they plan to buy one in the next six months, the survey suggests that there may be a coming rise in the e-textbook market.  

"For the e-textbook market, the Pearson Foundation’s study represents an optimistic departure from data released in January by the Book Industry Study Group, or B.I.S.G., which concluded that 75 percent of students still prefer print textbooks to digital.  'Those B.I.S.G. statistics are really not surprising,' said Matt MacInnis, chief executive of Inkling, a San Francisco startup that has designed an application to help major academic publishers, like McGraw-Hill and Pearson, recreate their higher education textbooks for the iPad. In the process, Inkling has become the front-runner in the tablet-textbook market.  

“ 'Up until now, digital textbooks were a flat — no value-added PDF version of the print edition, so you’re basically asking students if they prefer an inferior product,' Mr. MacInnis said. 'So it’s no surprise students weren’t interested.' Inkling seeks to resolve the basic issues that many students have found problematic on PDF versions of textbooks, like the difficulty of highlighting and note-taking. The company’s textbooks also use audio, video and interactive features like quizzes and note-sharing tools to create content that Mr. MacInnis calls 'light years better than what you can get in print.'

"In the Inkling version of 'The Art of Public Speaking' by Stephen E. Lucas, students can listen to the top 100 American speeches, watch corresponding video, and follow along on a printed transcript. Students can zoom in on a photo in a biology textbook, and hit a play button to hear a symphony in a music textbook.  'We’re looking to redefine the medium – we’re rebuilding learning content from the ground up,' Mr. MacInnis said.  

"Inkling offers fewer than 30 titles, but by this autumn, Mr. MacInnis estimates that 100 titles will be ready for sale. Inkling’s titles are about 20 percent less expensive than a new edition of the print textbook, but they are also available by the chapter for $2.99.  

"As more companies enter the tablet textbook market, textbook prices will most likely decrease as competition heats up, said Osman Rashid, chief executive of Kno, a Silicon Valley startup that is to introduce a textbook application, also for the iPad, this week.  

“ 'We’re just at the beginning of this market’s potential,' he said. The company plans to expand its software soon to tablets powered by Android, the Google operating system.  

"Mr. Rashid, who was a co-founder of Chegg.com, the popular online textbook rental site, said he was unable to disclose details of the platform until its debut but said Kno has been working with the major academic publishers.  

"Whether publishers will distribute their top-selling titles to multiple tablet-textbook applications or whether it will be a winner-takes-all market remains to be seen. In the meantime, publishers are looking to enter the existing market.  

“ 'There’s really no telling where the market will go, or how this will play out — none of us know the answer, but the tablet is creating some really exciting opportunities,' said Jeff Shelstad, chief executive of Flat World Knowledge, a publisher of college textbooks.  

"Under an open license model, Flat World’s textbooks can be edited by individual professors to fit a specific course. Flat World allows students to get the book online free, and offers other formats, like print and e-textbook for a fee. Flat World is now looking to redevelop their textbooks for the tablet market as well, and is in talks with Inkling.

“ 'In this industry, print has been the premium experience. In our model, it’s the degraded experience, and this means we’ll have an easier time translating into the tablet market,' Mr. Shelstad said.  

"Hal Plotkin, the senior policy advisor in the Office of the Under Secretary of Education in the Obama administration, says he thinks that the tablet textbook market is raising some interesting possibilities.  

“ 'There’s a long way to go,' he said, 'but this is getting a discussion going, and it’s widening the playing field in terms of content providers.'

"Mr. Plotkin said he expected to see a complete overhaul in 'educational delivery systems' over the next 15 years. 'Affordable technologies,' he said, 'are coming together with ubiquitously available, high-quality, teaching and learning resources to create a real revolution.' "(http://www.nytimes.com/2011/06/06/business/media/06iht-EDUCSIDE06.html?_r=1&scp=1&sq=digital%20textbooks%20slow%20to%20catch%20on&st=cse

View Map + Bookmark Entry

Digital Democracy is Not So Democratic June 10, 2011

"Anyone with Internet access can generate online content and influence public opinion, according to popular belief. But a new study from the University of California, Berkeley, suggests that the social Web is becoming more of a playground for the affluent than a digital democracy.

"Despite the proliferation of social media – with Twitter and Facebook touted as playing pivotal roles in such pro-democracy movements as the Arab Spring – the bulk of today’s blogs, websites and video-sharing sites represent the perspectives of college-educated, Web 2.0-savvy users, the study says.

“ 'Having Internet access is not enough. Even among people online, those who are digital producers are much more likely to have higher incomes and educational levels,' said Jen Schradie, a doctoral candidate in sociology at UC Berkeley and author of the study published in the May online issue of Poetics, a Journal of Empirical Research on Culture, the Media and the Arts. 

"Schradie, a researcher at the campus’s Berkeley Center for New Media, analyzed data from more than 41,000 American adults surveyed between 2000 and 2008 in the Pew Internet and American Life Project. She found that college graduates are 1.5 times more likely to be bloggers than are high school graduates; twice as likely to post photos and videos and three times more likely to post an online rating or comment.  

"Overall, the study found, less than 10 percent of the U.S. population is participating in most online production activities, and having a college degree is a greater predictor of who will generate publicly available online content than being young and white" (http://newscenter.berkeley.edu/2011/06/07/digital-democracy/, accessed 0612-2011).

♦ You can watch a video presentation by Jen Schradie on The Digital Production Gap on YouTube at this link: http://www.youtube.com/watch?v=-029CXbeOjY


View Map + Bookmark Entry

"Distant Reading" Versus "Close Reading" June 24, 2011

Journalist Kathryn Schulz began publishing a column called The Mechanic Muse in The New York Times on applications of computing technology to literary scholarship. Her first column, titled "What is Distant Reading?", concerned work to date by Stanford English and Comparative Literature professor Franco Moretti and his team at the Stanford Literary Lab.

"We need distant reading, Moretti argues, because its opposite, close reading, can’t uncover the true scope and nature of literature. Let’s say you pick up a copy of 'Jude the Obscure,' become obsessed with Victorian fiction and somehow manage to make your way through all 200-odd books generally considered part of that canon. Moretti would say: So what? As many as 60,000 other novels were published in 19th-century England — to mention nothing of other times and places. You might know your George Eliot from your George Meredith, but you won’t have learned anything meaningful about literature, because your sample size is absurdly small. Since no feasible amount of reading can fix that, what’s called for is a change not in scale but in strategy. To understand literature, Moretti argues, we must stop reading books.

"The Lit Lab seeks to put this controversial theory into practice (or, more aptly, this practice into practice, since distant reading is less a theory than a method). In its January pamphlet, for instance, the team fed 30 novels identified by genre into two computer programs, which were then asked to recognize the genre of six additional works. Both programs succeeded — one using grammatical and semantic signals, the other using word frequency. At first glance, that’s only medium-interesting, since people can do this, too; computers pass the genre test, but fail the 'So what?' test. It turns out, though, that people and computers identify genres via very different features. People recognize, say, Gothic literature based on castles, revenants, brooding atmospheres, and the greater frequency of words like 'tremble' and 'ruin.' Computers recognize Gothic literature based on the greater frequency of words like . . . 'the. Now, that’s interesting. It suggests that genres 'possess distinctive features at every possible scale of analysis.' More important for the Lit Lab, it suggests that there are formal aspects of literature that people, unaided, cannot detect.  

"The lab’s newest paper seeks to detect these hidden aspects in plots (primarily in Hamlet) by transforming them into networks. To do so, Moretti, the sole author, turns characters into nodes ('vertices' in network theory) and their verbal exchanges into connections ('edges'). A lot goes by the wayside in this transformation, including the content of those exchanges and all of Hamlet’s soliloquies (i.e., all interior experience); the plot, so to speak, thins. But Moretti claims his networks 'make visible specific ‘regions’ within the plot' and enable experimentation. (What happens to Hamlet if you remove Horatio?). . . ." (http://www.nytimes.com/2011/06/26/books/review/the-mechanic-muse-what-is-distant-reading.html?pagewanted=2, accessed 06-25-2011).

View Map + Bookmark Entry

News Corporation Sells MySpace for $545 Million Loss June 29, 2011

News Corporation sold social media website MySpace to advertising network Specific Media for "roughly $35 million." News Corporation had purchased MySpace in 2005 for $580 million.

"The News Corporation, which is controlled by Rupert Murdoch, had been trying since last winter to rid itself of the unprofitable unit, which was a casualty of changing tastes and may be a cautionary tale for social companies like Zynga and LinkedIn that are currently enjoying sky-high valuations. . . .

"Terms of the deal were not disclosed, but the News Corporation said that it would retain a minority stake. Specific Media said it had brought on board the artist Justin Timberlake as a part owner and an active player in MySpace’s future, but said little else about how the site would change.  

"The sale closes a complex chapter in the history of the Internet and of the News Corporation, which was widely envied by other media companies when it acquired MySpace in 2005. At that time, MySpace was the world’s fastest-growing social network, with 20 million unique visitors each month in the United States. That figure soon soared to 70 million, but the network could not keep pace with Facebook, which overtook MySpace two years ago" (http://mediadecoder.blogs.nytimes.com/2011/06/29/news-corp-sells-myspace-to-specific-media-for-35-million/?hp, accessed 06-30-2011).

View Map + Bookmark Entry

IBM Announces Phase-Change Memory June 30, 2011

IBM announced that it produced phase-change memory (PCM) chips that could store two bits of data per cell without data corruption problems over extended periods of time. This significant improvement advanced the development of low-cost, faster and more durable memory applications for consumer devices, including mobile phones and cloud storage, as well as high-performance applications, such as enterprise data storage.

"With a combination of speed, endurance, non-volatility and density, PCM can enable a paradigm shift for enterprise IT and storage systems within the next five years. Scientists have long been searching for a universal, non-volatile memory technology with far superior performance than flash – today’s most ubiquitous non-volatile memory technology. The benefits of such a memory technology would allow computers and servers to boot instantaneously and significantly enhance the overall performance of IT systems. A promising contender is PCM that can write and retrieve data 100 times faster than flash, enable high storage capacities and not lose data when the power is turned off. Unlike flash, PCM is also very durable and can endure at least 10 million write cycles, compared to current enterprise-class flash at 30,000 cycles or consumer-class flash at 3,000 cycles. While 3,000 cycles will out live many consumer devices, 30,000 cycles are orders of magnitude too low to be suitable for enterprise applications" (http://www.zurich.ibm.com/news/11/pcm.html, accessed 07-01-2011).

Like the high-density NAND flash memory used in solid state drives (SSDs), phase-change memory is nonvolatile. However, unlike NAND flash, PCM does not require that existing data be marked for deletion before new data is written to it, a process known as an erase-write cycle. Erase-write cycles slow NAND flash performance and, over time, wear it out, giving it a lifespan that ranges from 5,000 to 10,000 write cycles in consumer products, and up to 100,000 cycles in enterprise-class products.
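The endurance figures quoted in the IBM announcement lend themselves to a back-of-the-envelope lifespan comparison. The daily-rewrite workload below is an assumed example, and the model deliberately ignores wear leveling and write amplification, which change the picture considerably on real devices.

```python
# Write-endurance cycle counts quoted in the IBM announcement above
endurance_cycles = {
    "consumer flash": 3_000,
    "enterprise flash": 30_000,
    "PCM": 10_000_000,
}
rewrites_per_day = 10  # assumption: every cell rewritten ten times a day

# Naive lifespan in years: total cycles / rewrites per day / days per year
lifespan_years = {
    tech: cycles / rewrites_per_day / 365
    for tech, cycles in endurance_cycles.items()
}
for tech, years in lifespan_years.items():
    print(f"{tech}: ~{years:,.1f} years at {rewrites_per_day} rewrites/day")
```

Even this crude model shows why 3,000 cycles is marginal under a heavy write workload while PCM's quoted 10 million cycles effectively removes endurance as a constraint.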

"As organizations and consumers increasingly embrace cloud-computing models and services, ever more powerful and efficient, yet affordable storage technologies are needed, according to Haris Pozidis, manager of memory and probe technologies at IBM Research" (http://www.computerworld.com/s/article/9218031/IBM_announces_computer_memory_breakthrough?source=CTWNLE_nlt_wktop10_2011-07-01, accessed 07-01-2011).

View Map + Bookmark Entry

200 Million Tweets Per Day: A 100-Fold Increase Since 2009 June 30, 2011

"Halfway through 2011, users on Twitter are now sending 200 million Tweets per day. For context on the speed of Twitter’s growth, in January of 2009, users sent two million Tweets a day, and one year ago they posted 65 million a day" (http://blog.twitter.com/2011/06/200-million-tweets-per-day.html).

View Map + Bookmark Entry

South Korea to Shift All Primary and High School Textbooks to Digital by 2015 July 2011

"South Korea’s Education Ministry announced last week that it plans to replace all printed textbooks with digital versions in the next four years. It’s part of a larger effort to integrate technology into all aspects of the South Korean education system, including moving all nationwide academic exams online and offering more online classes.  

"The Education Ministry says that it plans to have elementary-level content digitized by 2014, with high school level content ready by 2015.

"But making textbooks available in an electronic format isn’t a simple undertaking. Nor is it as easy as just offering digital versions of existing books. All of the supplementary material that often accompanies textbooks — handouts, quizzes, study guides, and so on — must also be digitized. A move to e-textbooks opens opportunities for new kinds of content as well, with more multimedia and interactivity available.  

"But there are also new challenges: how will this material be stored? Which format will it be offered? Will it be accessible to all students? What infrastructure needs to be in place — for schools, for teachers, and for students — to make sure that print textbooks really can be replaced? According to Chosunilbo, the government wants to build a cloud-based computing system for all schools that will store a massive database of all digital textbooks. It also plans to help boost the WiFi infrastructure there, so that students and teachers can all access and download the materials. Furthermore the government says it will give tablets to low-income students.  

" 'We don’t expect the shift to digital textbooks to be difficult as students today are very accustomed to the digital environment,”said an Education Ministry official in the Chosunilbo article.  

"As we covered on MindShift earlier this year, South Korea has been on the cutting edge with adoption of a number of educational technologies and is experimenting with telepresence and robot instructors. As e-schoolnews points out, students in South Korea have scored higher than those in any other country on the Digital Reading Assessment exam — part of the OECD’s Program for International Student Assessment (PISA) tests. The Digital Reading Assessment exam measures students’ ability to use and critically evaluate Web-based sources.  

"The commitment on the part of South Korea to go digital with its textbooks is part of a growing trend. The State of Florida, for example, has also expressed its interest in moving to a paperless classroom, and California is moving in that direction, as are other states. The South Korean plan will involve some W2.2 trillion (approximately $2.1 billion) of investment — a hefty price tag for a school system that is already more 'wired' than many U.S. classrooms. That begs the question, of course, as to the realities of which education systems will be able to follow the South Korean initiative" (http://mindshift.kqed.org/2011/07/south-korean-schools-go-paperless-can-others-follow/, accessed 11-20-2011).

View Map + Bookmark Entry

Construction of the Francis Crick Institute Begins July 2011

In July 2011 construction began on The Francis Crick Institute (formerly the UK Centre for Medical Research and Innovation), a biomedical research center in London. The Institute is a partnership between Cancer Research UK, Imperial College London, King's College London, the Medical Research Council, University College London (UCL) and the Wellcome Trust. It will be the largest center for biomedical research and innovation in Europe.

The Francis Crick Institute, named after British molecular biologist, biophysicist, and neuroscientist Francis Crick, will be located in a new state-of-the-art 79,000-square-meter facility next to St Pancras railway station in the Camden area of Central London. It is expected that researchers will be able to start work in 2015. The complete cost of the facility is budgeted at approximately £600 million. The institute is expected to employ 1,500 people, including 1,250 scientists, with an annual budget of over £100 million.

View Map + Bookmark Entry

Leading British Tabloid Closed Because of Cell Phone Hacking Scandal July 7 – July 17, 2011

News Corporation announced that the English tabloid and Britain's largest-circulation newspaper, News of the World, founded in 1843, would close on July 10, 2011 in the wake of an unprecedented cell phone hacking scandal.

Among the disclosures were that News of the World paid £100,000 in bribes to certain London Metropolitan Police officers to suppress allegations, and that after the scandal broke the Metropolitan Police were sifting through 11,000 pages of documents containing the names of 4,000 people whose phones may have been hacked. The final blows to the tabloid were revelations by investigative reporters at The Guardian that the News of the World had intercepted voicemails left on a phone belonging to murdered schoolgirl Milly Dowler, and news that the paper had targeted the phones of families of victims of the London bombings of July 7, 2005 (7/7).

On July 7, 2011 ProPublica.org published "Our Reader's Guide to the Phone Hacking Scandal."

On July 7, 2011 Guardian.co.uk published an interactive timeline on the scandal, from its origins in 2005 to the announcement of the closure:

"How the saga unfolded – from suspicions that Prince William's messages were being listened to, to calls for a public inquiry, the hacking of murdered schoolgirl Milly Dowler's voicemail and James Murdoch's closure of the News of the World"

Sometimes nicknamed "News of the Screws" and "Screws of the World" for its coverage of scandals, News of the World was among the world's most popular print publications. According to Wikipedia, print sales of the tabloid, which appeared weekly on Sundays, averaged 2,812,005 copies per week in October 2010.

The July 8, 2011 issue of The New York Times published an article entitled "Move to Close Newspaper Is Greeted With Suspicion," and as the scandal reached the office of the British Prime Minister David Cameron, The New York Times published "Cameron Orders Two Inquiries Into Hacking Scandal as Former Aide Is Arrested."

On July 12, 2011 former British Prime Minister Gordon Brown accused Rupert Murdoch's media empire, News International, of hiring known criminals to gather personal information on his bank account, legal files and tax affairs: http://www.nytimes.com/2011/07/13/world/europe/13hacking.html

On July 17, 2011, as the scandal continued to spread to higher echelons of Murdoch's empire in Britain and the U.S., The New York Times updated its timeline on the scandal at: http://www.nytimes.com/interactive/2010/09/01/magazine/05tabloid-timeline.html

On July 17, 2011 The New York Times also updated its graphic entitled Key Players in the Phone Hacking Scandal here: http://www.nytimes.com/interactive/2011/07/08/world/europe/20110708-key-players-in-the-phone-hacking-scandal.html?hp

View Map + Bookmark Entry

How Search Engines Have Become a Primary Form of External or Transactive Memory July 14, 2011

Betsy Sparrow of Columbia University, Jenny Liu, and Daniel M. Wegner of Harvard University published "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips," online July 14, 2011; Science, 5 August 2011, Vol. 333, no. 6043, pp. 776-778, DOI: 10.1126/science.1207745.


"The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves."

First two paragraphs (footnotes removed):

"In a development that would have seemed extraordinary just over a decade ago, many of us have constant access to information. If we need to find out the score of a ball game, learn how to perform a complicated statistical test, or simply remember the name of the actress in the classic movie we are viewing, we need only turn to our laptops, tablets, or smartphones and we can find the answers immediately. It has become so commonplace to look up the answer to any question the moment it occurs that it can feel like going through withdrawal when we can’t find out something immediately. We are seldom offline unless by choice, and it is hard to remember how we found information before the Internet became a ubiquitous presence in our lives. The Internet, with its search engines such as Google and databases such as IMDB and the information stored there, has become an external memory source that we can access at any time.

"Storing information externally is nothing particularly novel, even before the advent of computers. In any long-term relationship, a team work environment, or other ongoing group, people typically develop a group or transactive memory (1), a combination of memory stores held directly by individuals and the memory stores they can access because they know someone who knows that information. Like linked computers that can address each other’s memories, people in dyads or groups form transactive memory systems (2, 3). The present research explores whether having online access to search engines, databases, and the like, has become a primary transactive memory source in itself. We investigate whether the Internet has become an external memory system that is primed by the need to acquire information. If asked the question whether there are any countries with only one color in their flag, for example, do we think about flags or immediately think to go online to find out? Our research then tested whether, once information has been accessed, our internal encoding is increased for where the information is to be found rather than for the information itself."

An article by Alexander Bloom published in Harvard Magazine, November 2011 had this to say regarding the research:

"Wegner, the senior author of the study, believes the new findings show that the Internet has become part of a transactive memory source, a method by which our brains compartmentalize information. First hypothesized by Wegner in 1985, transactive memory exists in many forms, as when a husband relies on his wife to remember a relative’s birthday. '[It is] this whole network of memory where you don’t have to remember everything in the world yourself,' he says. 'You just have to remember who knows it.' Now computers and technology as well are becoming virtual extensions of our memory. The idea validates habits already forming in our daily lives. Cell phones have become the primary location for phone numbers. GPS devices in cars remove the need to memorize directions. Wegner points out that we never have to stretch our memories too far to remember the name of an obscure movie actor or the capital of Kyrgyzstan—we just type our questions into Google. 'We become part of the Internet in a way,' he says. 'We become part of the system and we end up trusting it.' "(http://harvardmagazine.com/2011/11/how-the-web-affects-memory, accessed 12-11-2011).

View Map + Bookmark Entry

Consumer Reports Begins Generating More Revenue from Digital Subscriptions than from Print August 2011

In August 2011 Consumer Reports (published by the non-profit Consumers Union, based in Yonkers, New York), which started its website in 1997, began generating more revenue from digital subscriptions than from print. Digital subscriptions grew from 557,000 in 2001 to 3.3 million in 2011.

Perhaps more remarkably, the digital success of Consumer Reports did not come from cannibalizing its print subscriptions; print subscriptions have held steady at about 4 million since 2001.

"Consumer Reports’ online success is not necessarily a bellwether for other Web sites seeking paying subscribers, says Bill Grueskin, dean of academic affairs at the Graduate School of Journalism at Columbia University and formerly managing editor of WSJ.com.

“ 'It isn’t much of a leap for people to pay $5.95 a month for access to a database that will help them make a wise purchase of a $500 dishwasher or a $25,000 car,' Mr. Grueskin says. 'It is much harder to get consumers — particularly those trained for the past 15 years to expect content for free — to pay for coverage of metro news, football games or politics' ” (http://www.nytimes.com/2011/12/11/business/media/consumer-reports-going-strong-at-75-digital-domain.html?_r=1&src=rechp, accessed 12-11-2011).

(This entry was last revised on 10-18-2014.)

View Map + Bookmark Entry

The First Neurosynaptic Chips August 2011

In August 2011, as part of the SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project, IBM researchers led by Dharmendra S. Modha, manager and lead researcher of the Cognitive Computing Group at IBM Almaden Research Center, demonstrated two neurosynaptic cores that moved beyond von Neumann architecture and programming toward ultra-low-power, super-dense, brain-inspired cognitive computing chips. These new silicon neurosynaptic chips would be the building blocks for computing systems that emulate the brain's computing efficiency, size and power usage.

View Map + Bookmark Entry

Non-Traditional Book Publishing on the Internet is 8X the Output of Traditional Book Publishing August 1, 2011

Librarians and information scientists Jana Bradley, Bruce Fulton, Marlene Helm, and Katherine A. Pittner of the University of Arizona, Tucson, published "Non-traditional book publishing," firstmonday.org, Vol. 16, No. 8.


"Non–traditional book publishing, prospering on the Internet, now accounts for over eight times the output of traditional publishing. Non–traditional publishing includes books published by their authors and books representing the reuse of content, most of it not covered by copyright. The result is an heterogeneous, hyper–abundant contemporary book environment where the traditional mixes with the non–traditional and finding books that match a reader’s taste is more difficult than previously and may involve new methods of discovery."


"The Bowker (2011a) annual statistics on book publishing for 2010, compiled from its Books in Print database, revealed startling news. The output of non–traditional titles was eight times as great as the number of mainstream published books. Traditional new titles are projected to number 316,480, a five percent increase over 2009, but nowhere near the 2,776,260 non–traditional titles reported by Bowker. These non–traditional books “are largely on–demand titles produced by reprint houses specializing in public domain works and by presses catering to self–publishers and ‘micro–niche’ publications.”

"Bowker’s numbers, as startling as they are, do not cover the whole of the non–traditional book output. For the most part, Bowker counts books with International Standard Book Numbers (ISBN) [2] so non–traditional titles without ISBNs would not be included. Authors who self–publish using their own imprint may not be counted as non–traditional by Bowker. Additionally, the Bowker numbers do not take into account the flood of titles self–published for the Kindle Store through Kindle Direct Publishing (KDL). Non–traditional publishing, therefore, could easily represent a much larger number than the 2,776,260 reported by Bowker.  

"Non–traditional titles, according to Bowker in this same report, are marketed primarily on the Web. Another report by Bowker (2011b) shows that online retailers together are the single largest book–buying channel, making access to non–traditionally published books possible for a very large buying public. Non–traditional books, then, form a huge segment of books available in today’s book marketplace."  

View Map + Bookmark Entry

Filed under: Book History, Publishing

The Methodists' Handwritten Bible August 11, 2011

One of the distinctive aspects of the transition from manuscript copying to print that took place in the second half of the fifteenth century was that, once the complete Bible had been printed from movable type, this very long text was, with perhaps a few exceptions, no longer copied out by hand in its entirety by professional scribes, as it had been in the centuries before the invention of printing. Instead, a steady stream of printed editions was apparently able to meet the demand for Bibles at lower cost than manuscript copies.

Ironically, during the transition presently under way from print to digital information, manuscript copying of the complete Bible text has been revived in a few instances, most notably the professionally written and illuminated St. John's Bible, and the social-media production of The Methodists' Handwritten Bible, created by tens of thousands of people across Britain and Northern Ireland, which went online on August 11, 2011: 

"The Handwritten Bible contains 7,000 pages of text and illustrations transcribed by people from every part of Britain and further afield. More than 30,000 volunteers joined in from across communities - including prisons, schools, colleges, libraries, nursing homes, airports and shopping centres - to copy the whole of the NRSV translation of the Bible after Methodists voted to transcribe the Scripture at their Conference in Portsmouth last year. . . 

"As part of the 400th anniversary of the King James Bible, people were invited to join Methodists in handwriting verses from the Scripture. Verses have been written in English, Chinese, Welsh and Braille with accompanying illustrations" (http://www.methodist.org.uk/index.cfm?fuseaction=opentogod.newsDetail&newsid=524, accessed 11-23-2011).

View Map + Bookmark Entry

Google Acquires Smart-Phone Maker Motorola Mobility; Sells its Hardware Division in January 2014 August 15, 2011 – January 2014

On August 15, 2011 Google announced that it had agreed to acquire the smart-phone manufacturer Motorola Mobility, headquartered in Libertyville, Illinois, for $12.5 billion. This was Google's largest acquisition to date.

"In a statement, Google said the deal was largely driven by the need to acquire Motorola's patent portfolio, which it said would help it defend Android against legal threats from competitors armed with their own patents. This issue has come to the fore since a consortium of technology companies led by Apple and Microsoft purchased more than 6,000 mobile-device-related patents from Nortel Networks for about $4.5 billion, in early July. Battle lines are being drawn around patents, as companies seek to protect their interests in the competitive mobile industry through litigation as well as innovation.  

"However, as people increasingly access the Web via mobile devices, the acquisition could also help Google remain central to their Web experience in the years to come. As Apple has demonstrated with its wildly popular iPhone, this is far easier to achieve if a company can control the hardware, as well as the software, people carry in their pockets. Comments made by Google executives hint that Motorola could also play a role in shaping the future of the Web in other areas—for instance, in set-top boxes. Motorola is by far Google's largest acquisition, and it takes the company into uncertain new territory. The deal is also likely to draw antitrust scrutiny because of the reach Google already has with Android, which runs on around half of all smart phones in the United States.  

"Motorola, which makes the Droid smart phone, went all-in with Google's Android platform in 2008, declaring that all of its devices would use the open-source mobile operating system.  

"Before his departure as Google CEO, Eric Schmidt had begun pressing Google employees to shift their attention to mobile. Cofounder and new CEO Larry Page seems determined to maintain this change of focus. In a conference call this morning, he told investors, 'It's no secret that Web usage is increasingly shifting to mobile devices, a trend I expect to continue. With mobility continuing to take center stage in the computing revolution, the combination with Motorola is an extremely important event in Google's continuing evolution that will drive a lot of improvements in our ability to deliver great user experiences.' " (http://www.technologyreview.com/web/38320/?nlid=nldly&nld=2011-08-16, accessed 08-17-2011).

On January 29, 2014 Larry Page, CEO of Google, announced on the Google Official Blog that Google was selling Motorola's handset division at a multi-billion dollar loss:

"We’ve just signed an agreement to sell Motorola to Lenovo for $2.91 billion. As this is an important move for Android users everywhere, I wanted to explain why in detail. 

"We acquired Motorola in 2012 to help supercharge the Android ecosystem by creating a stronger patent portfolio for Google and great smartphones for users. Over the past 19 months, Dennis Woodside and the Motorola team have done a tremendous job reinventing the company. They’ve focused on building a smaller number of great (and great value) smartphones that consumers love. Both the Moto G and the Moto X are doing really well, and I’m very excited about the smartphone lineup for 2014. And on the intellectual property side, Motorola’s patents have helped create a level playing field, which is good news for all Android’s users and partners.

"But the smartphone market is super competitive, and to thrive it helps to be all-in when it comes to making mobile devices. It’s why we believe that Motorola will be better served by Lenovo—which has a rapidly growing smartphone business and is the largest (and fastest-growing) PC manufacturer in the world. This move will enable Google to devote our energy to driving innovation across the Android ecosystem, for the benefit of smartphone users everywhere. As a side note, this does not signal a larger shift for our other hardware efforts. The dynamics and maturity of the wearable and home markets, for example, are very different from that of the mobile industry. We’re excited by the opportunities to build amazing new products for users within these emerging ecosystems.

"Lenovo has the expertise and track record to scale Motorola into a major player within the Android ecosystem. They have a lot of experience in hardware, and they have global reach. In addition, Lenovo intends to keep Motorola’s distinct brand identity—just as they did when they acquired ThinkPad from IBM in 2005. Google will retain the vast majority of Motorola’s patents, which we will continue to use to defend the entire Android ecosystem."

View Map + Bookmark Entry

Free Online Artificial Intelligence Course Attracts 58,000 Students August 15, 2011

Sebastian Thrun, Research Professor of Computer Science at Stanford and a leading roboticist, and Peter Norvig, Director of Research at Google, Inc., in partnership with the Stanford University School of Engineering, offered a free online course entitled An Introduction to Artificial Intelligence.

According to an article by John Markoff in The New York Times, by August 15, 2011 more than 58,000 students from around the world had registered for this free course—nearly four times Stanford's entire student body.

"The online students will not get Stanford grades or credit, but they will be ranked in comparison to the work of other online students and will receive a 'statement of accomplishment.'

"For the artificial intelligence course, students may need some higher math, like linear algebra and probability theory, but there are no restrictions to online participation. So far, the age range is from high school to retirees, and the course has attracted interest from more than 175 countries" (http://www.nytimes.com/2011/08/16/science/16stanford.html?hpw, accessed 08-16-2011).

One fairly obvious reason why so many students signed up is that Norvig is famous in the field as the co-author, with Stuart Russell, of the standard textbook on AI, Artificial Intelligence: A Modern Approach (first edition: 1995), which has been translated into many languages and has sold over 200,000 copies.

View Map + Bookmark Entry

Interactive Reading and Spelling on the iPad August 18, 2011

"Word Wizard ($3.99) turns your iPad into a talking typewriter, and a powerful language-learning tool that is ideal for a child learning to read.

"To build a word, you simply touch a letter and drag it next to another letter. It snaps into place and pronounces the result in clear speech. This is an important breakthrough in reading instruction, because it leverages the iPad’s size, powerful speech synthesis abilities and touchscreen, so that every letter can be a building block of phonetically accurate sound.

"There are two modes: Movable Alphabet, for free exploration of word combinations; and Spelling Quiz, a talking spelling test with 173 built-in word lists (e.g., nature words, or 1,000 most frequently used words). In the spelling tests, you hear the word, and must spell it using the same alphabet strip used in the Movable Alphabet. Because the letters are arranged alphabetically, this is not good for typing or fast text entry. There’s a British voice mode, plus the ability to change the speed or tone of the voice, uppercase or lowercase letters, and two backgrounds. And yes, even vulgarities are read out loud, in clear speech. Consider yourself warned" (http://gadgetwise.blogs.nytimes.com/2011/08/10/speak-n-spell-for-the-ipad-generation/?nl=technology&emc=cta3, accessed 08-18-2011).

"L'Escapadou is a family design studio dedicated to creating fun, creative and entertaining apps for iPad and iPhone, with a focus on educational apps for kids.  

"We are a homeschooling family, and watching our children - 4 and 7 years old - learn has always been a great inspiration for the educational tools we make. Most have also been inspired by the Montessori method. We also have a strong belief that creativity is essential to a kid’s development and well-being, which led us to create toys such as Draw with Stars !  

"All the family is working to create great user experience for kids. Design and graphics are done by Dad and Mum, Programming is done by Dad, and testing and feedback is done by our two daughters !  

"L'Escapadou was created after the launch of the iPad, with the belief that the iPad is a great tool for kids to learn and be creative. “Dad” has developped and designed applications on Apple computers since Apple IIe and holds a PhD in computer Science, and “Mom” is a translator currently busy home-educating her daughters" (http://lescapadou.com/LEscapadou_-_Fun_and_Educational_applications_for_iPad_and_IPhone/About.html, accessed 03-26-2012).

http://blog.lescapadou.com/2011/10/how-ive-made-200000-in-ios-education.html?spref=bl, accessed 03-26-2012.

View Map + Bookmark Entry

Toward Cognitive Computing Systems August 18, 2011

On August 18, 2011 "IBM researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition. The technology could yield many orders of magnitude less power consumption and space than used in today’s computers. 

"In a sharp departure from traditional concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. Its first two prototype chips have already been fabricated and are currently undergoing testing.  

"Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brains structural and synaptic plasticity.  

"To do this, IBM is combining principles from nanoscience, neuroscience and supercomputing as part of a multi-year cognitive computing initiative. The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

"The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment – all while rivaling the brain’s compact size and low power usage. The IBM team has already successfully completed Phases 0 and 1.  

" 'This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century,' said Dharmendra Modha, project leader for IBM Research. 'Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture. These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government.' " (http://www-03.ibm.com/press/us/en/pressrelease/35251.wss, accessed 08-21-2011).

View Map + Bookmark Entry

The First Complete Album Composed Solely by Computer and Recorded by Human Musicians September 2011 – July 2, 2012

In September 2011 the Iamus computer cluster, developed by Francisco Vico and associates at the Universidad de Málaga, produced a composition entitled Hello World! This classical clarinet-violin-piano trio was called the first full-scale work entirely composed by a computer without any human intervention, and automatically written in a fully-fledged score using conventional musical notation.

Several months later, on July 2, 2012, four compositions by the Iamus computer premiered and were broadcast live from the School of Computer Science at the Universidad de Málaga, as one of the events included in the Alan Turing Year. The compositions performed at this event were later recorded by the London Symphony Orchestra and issued in 2012 as the album entitled Iamus. This compact disc was characterized by the New Scientist as the "first complete album to be composed solely by a computer and recorded by human musicians."

Commenting on the authenticity of the music, Stephen Smoliar, critic of classical music at The San Francisco Examiner, wrote in a piece entitled "Thoughts about Iamus and the composition of music by computer," Examiner.com, January 4, 2013:

"However, where listening is concerned, the method leading to the notation is secondary. What is primary is the act of making the music itself engaged by the performers and how the listener responds to what those performers do. Put another way, the music is in the performance, rather than in the composition without which that performance would not take place. The issue is not, as Smith seems to imply at the end of her BBC report, whether 'a computer could become a more prodigious composer than Mozart, Haydn, Brahms and Beethoven combined.' The computer is only prodigious at creating more documents, and what is most interesting about the documents generated by Iamus is their capacity to challenge the creative talents of performing musicians."

View Map + Bookmark Entry

Snapchat: Communication and Automatic Destruction of Information September 2011

In September 2011 Stanford University students Evan Spiegel and Robert Murphy produced the initial release of the photo messaging application Snapchat, famously launching the program "from Spiegel's father's living room." Users of the app take photos, record videos, add text and drawings, and send them to a controlled list of recipients. Photographs and videos sent through the app are known as "Snaps". Users set a time limit for how long recipients can view their Snaps, after which the photos or videos are hidden from the recipient's device and deleted from Snapchat's servers. In December 2013 the range was from 1 to 10 seconds. 

In November 2013 it was reported that Snapchat was sharing 400 million photos per day—more than Facebook.

"Founder Evan Spiegel explained that Snapchat is intended to counteract the trend of users being compelled to manage an idealized online identity of themselves, which he says has "taken all of the fun out of communicating". Snapchat can locate a user's friends through the user's smartphone contact list. Research conducted in the UK has shown that, as of June 2013, half of all 18 to 30-year-old respondents (47 percent) have received nude pictures, while 67 percent had received images of "inappropriate poses or gestures".

"Snapchat launched the "Snapchat Stories" feature in early October 2013 and released corresponding video advertisements with the tagline "It's about time." The feature allows users to create links of shared content that can be viewed an unlimited number of times over a 24-hour period. The "stories" are simultaneously shared with the user's friends and content remains for 24 hours before disappearing.

"Another controversy surrounding the rising popularity of Snapchat in the United States relates to a phenomenon known as sexting. This involves the sending and receiving of explicit images that often involve some degree of nudity. Because the application is commonly used by younger generations, often below the age of eighteen, the question has been raised whether or not certain users are technically distributing child pornography. For this reason, many adults disapprove of their children's use of the application. Snapchat's developers continue to insist that the application is not sexting-friendly and that they do not condone any kind of pornographic use.

"On November 14, 2013, police in LavalQuebec, Canada arrested 10 boys aged 13 to 15 on child pornography charges after the boys allegedly captured and shared explicit photos of teenage girls sent through Snapchat as screenshots.

"In February 2013, a study by market research firm Survata found that mobile phone users are more likely to "sext over SMS than over Snapchat" (Wikipedia article on Snapchat, accessed 12-12-2013).

View Map + Bookmark Entry

Michael Hart, Father of eBooks & Founder of Project Gutenberg, Dies September 6, 2011

"AMONG the episodes in his life that didn’t last, that were over almost before they began, including a spell in the army and a try at marriage, Michael Hart was a street musician in San Francisco. He made no money at it, but then he never bought into the money system much—garage-sale T-shirts, canned beans for supper, were his sort of thing. He gave the music away for nothing because he believed it should be as freely available as the air you breathed, or as the wild blackberries and raspberries he used to gorge on, growing up, in the woods near Tacoma in Washington state. All good things should be abundant, and they should be free.  

"He came to apply that principle to books, too. Everyone should have access to the great works of the world, whether heavy (Shakespeare, 'Moby-Dick', pi to 1m places), or light (Peter Pan, Sherlock Holmes, the 'Kama Sutra'). Everyone should have a free library of their own, the whole Library of Congress if they wanted, or some esoteric little subset; he liked Romanian poetry himself, and Herman Hesse’s 'Siddhartha'. The joy of e-books, which he invented, was that anyone could read those books anywhere, free, on any device, and every text could be replicated millions of times over. He dreamed that by 2021 he would have provided a million e-books each, a petabyte of information that could probably be held in one hand, to a billion people all over the globe—a quadrillion books, just given away. As powerful as the Bomb, but beneficial.

"That dream had grown from small beginnings: from him, a student at the University of Illinois in Urbana, hanging round a huge old mainframe computer on the night of the Fourth of July in 1971, with the sound of fireworks still in his ears. The engineers had given him by his reckoning $100m-worth of computer time, in those infant days of the internet. Wondering what to do, ferreting in his bag, he found a copy of the Declaration of Independence he had been given at the grocery store, and a light-bulb pinged on in his head. Slowly, on a 50-year-old Teletype machine with punched-paper tape, he began to bang out 'When in the Course of human events…'  

"This was the first free e-text, and none better as a declaration of freedom from the old-boy network of publishing. What he typed could not even be sent as an e-mail, in case it crashed the ancient Arpanet system; he had to send a message to say that it could be downloaded. Six people did, of perhaps 100 on the network. It was followed over years by the Gettysburg Address, the Constitution and the King James Bible, all arduously hand-typed, full of errors, by Mr Hart. No one particularly noticed. He mended people’s hi-fis to get by. Then from 1981, with a growing band of volunteer helpers scanning, rather than typing, a flood of e-texts gathered. By 2011 there were 33,000, accumulating at a rate of 200 a month, with translations into 60 languages, all given away free. No wonder money-oriented rivals such as Google and Yahoo! sprang up all round as the new century dawned, claiming to have invented e-books before him. He called his enterprise Project Gutenberg. This was partly because Gutenberg with his printing press had put wagonloads of books within the reach of people who had never read before; and also because printing had torn down the wall between haves and have-nots, literate and illiterate, rich and poor, until whole power-structures toppled. Mr Hart, for all his burly, hippy affability, was a cyber-revolutionary, with a snappy list of the effects he expected e-books to have:

Book prices plummet.

Literacy rates soar.

Education rates soar.

Old structures crumble, as did the Church.

Scientific Revolution.

Industrial Revolution.

Humanitarian Revolution.

"If all these upheavals were tardier than he hoped, it was because of the Mickey Mouse copyright laws. Every time men found a speedier way to spread information to each other, government made it illegal. During the lifetime of Project Gutenberg alone, the average time a book stayed in copyright in America rose from 30 to almost 100 years. Mr Hart tried to keep out of trouble, posting works that were safely in the public domain, but chafed at being unable to give away books that were new, and fought all copyright extensions like a tiger. “Unlimited distribution” was his mantra. Give everyone everything! Break the bars of ignorance down!

"The power of plain words

"He lived without a mobile phone, in a chaos of books and wiring. The computer hardware in his basement, from where he kept an unbossy watch over the whole project, often not bothering to pick up his monthly salary, was ten years old, and the software 20. Simple crowdsourcing was his management style, where people scanned or keyed in works they loved and sent them to him. Project Gutenberg books had a frugal look, with their Plain Vanilla ASCII format, which might have been produced on an old typewriter; but then it was content, not form, that mattered to Mr Hart. These were great thoughts, and he was sending them to people everywhere, available to read at the speed of light, and free as the air they breathed." (http://www.economist.com/node/21530075, accessed 09-27-2011).

♦ For another obituary of Michael Hart, of Urbana, Illinois, I recommend that in Brewster Kahle's Blog, post of September 7, 2011.

View Map + Bookmark Entry

The First Commercial Application of the IBM Watson Question Answering System: Medical Diagnostics September 12, 2011

Health care insurance provider WellPoint, Inc. and IBM announced an agreement to create the first commercial applications of the IBM Watson question answering system. Under the agreement, WellPoint would develop and launch Watson-based solutions to help improve patient care through the delivery of up-to-date, evidence-based health care for millions of Americans, while IBM would develop the Watson healthcare technology on which WellPoint's solutions would run.

View Map + Bookmark Entry

Amazon Introduces the Kindle Fire September 28 – November 14, 2011

On September 28, 2011 Amazon announced the Kindle Fire, a tablet computer version of Amazon.com's Kindle e-book reader, with a 7" color multi-touch display with IPS technology, running a forked version of Google's Android operating system. The device, which included access to the Amazon Appstore, streaming movies and TV shows, and Kindle's e-books, was released on November 14, 2011 for $199.

In January 2012 Amazon advertised that there were 19 million movies, TV shows, songs, magazines, and books available for the Kindle Fire.

View Map + Bookmark Entry

Steve Jobs Dies October 5, 2011

Steve Jobs, one of the most influential and daring innovators in the history of media, and arguably the most innovative and influential figure in the computer industry since the development of the personal computer, died at the age of 56 after a well-publicized battle with pancreatic cancer. Responsible, as inspirational leader, for building the first commercially successful personal computer (Apple II), for developing and popularizing the graphical user interface (Macintosh) which made personal computers user-friendly, for developing desktop publishing, for making music truly portable (iPod, iTunes), for bringing all the elements of the personal computer to cell phones (iPhone), and for causing the widespread acceptance of tablet computers (iPad), Jobs not only rescued Apple Computer from near failure and made it for a time the most valuable company in the S&P 500, but also achieved great success through his ownership of Pixar Animation Studios, which he eventually sold to The Walt Disney Company. Characteristic of Jobs' style were exceptional boldness in the conception of products, high quality and ease of use, and elegance of industrial design.

"Mr. Jobs even failed well. NeXT, a computer company he founded during his years in exile from Apple, was never a commercial success. But it was a technology pioneer. The World Wide Web was created on a NeXT computer, and NeXT software is the core of Apple’s operating systems today" (http://www.nytimes.com/2011/10/09/business/steve-jobs-and-the-power-of-taking-the-big-chance.html?hp).

An article published in The New York Times on October 8, 2011 compared and contrasted the lives and achievements of Steve Jobs with that earlier great American inventor and innovator, Thomas Alva Edison.

View Map + Bookmark Entry

What Would an Infinite Digital Bookcase Look Like? October 18, 2011

Digital information is not constrained by traditional limitations of cost and space, raising the possibility of collecting and presenting virtually unlimited numbers of books. What would an "infinite digital bookcase" look like and how would it work?

On October 18, 2011 Google demonstrated a spectacular design on the Official Google Blog:

"As digital designers, we often think about how to translate traditional media into a virtual space. Recently, we thought about the bookcase. What would it look like if it was designed to hold digital books?

"A digital interface needs to be familiar enough to be intuitive, while simultaneously taking advantage of the lack of constraints in a virtual space. In this case, we imagined something that looks like the shelves in your living room, but is also capable of showcasing the huge number of titles available online—many more than fit on a traditional shelf. With this in mind, we designed a digital bookcase that’s an infinite 3D helix. You can spin it side-to-side and up and down with your mouse. It holds 3D models of more than 10,000 titles from Google Books.  

"The books are organized into 28 subjects. To choose a subject, click the subject button near the top of your screen when viewing the bookcase. The camera then flies to that subject. Clicking on a book pulls it off the shelf and brings it to the front and center of the screen. Click on the high-resolution cover and the book will open to a page with title and author information as well as a short synopsis, provided by the Google Books API. All of the visuals are rendered with WebGL, a technology in Google Chrome and other modern browsers that enables fast, hardware-accelerated 3D graphics right in the browser, without the need for a plug-in.  "If you’ve finished your browsing and find a book you want to read, you can click the “Get this book” button on the bottom right of the page, which will send you to that book’s page on books.google.com. Or, you can open the title on your phone or tablet via the QR code that’s in the bottom left corner of the page, using a QR code app like Google Goggles. You can also browse just free books by selecting the “Free Books” subject in the subject viewer.

"Bookworms using a modern browser can try the WebGL Bookcase today." 

View Map + Bookmark Entry

"Zero to Eight: Children's Media Use in America" October 25, 2011

On October 25, 2011 Common Sense Media of San Francisco issued Zero to Eight: Children's Media Use in America by Vicky Rideout. Some of the key findings of their report were:

"Even very young children are frequent digital media users.

"MOBILE MEDIA. Half (52%) of all children now have access to one of the newer mobile devices at home: either a smartphone (41%) a video iPod (21%), or an iPad or ther tablet device (8%). More than a quarter (29%) of all parents have downloaded 'apps'. . . for their children to use. And more than a third (36%) of children have ever used one of these new mobile devices, including 10% of 0-to 1-year-olds, 39% of 2-to 4-year-olds, and 52% of 5- to 8-year-olds. In a typical day 11% of all 0-to 8 year-year olds use a cell phone, iPod, iPad, or similar device for media consumption and those who do spend an average of :43 doing so.  

"COMPUTERS. Computer use is pervasive among very young children, with half (53%) of all 2- to 4-year-olds having ever used a computer, and nine out of ten (90%) 5- to 8-year-olds having done so. For many of these children, computer use is a regular occurrence: 22% of 5 to 8-year olds use a computer at least once a day and another 46% use it at least once a week. Even among 2- to 4-year-olds, 12% use a computer every day, with another 24% doing so at least once a week. Among all children who have used a computer, the average age of first use was just 3 1/2 years old.

"VIDEO GAMES. Playing console video games is also popular among these young children: Half (51%) of all 0- to 8-year-olds have ever played a console video game, including 44% of 2- to 4-year-olds and
81% of 5- to 8-year-olds. Among those who have played console video games, the average age at first use was just under 4 years old (3 years and 11 months). Among 5- to 8-year-olds, 17% play console
video games at least once a day, and another 36% play them at least once a week. . . .

"Children under 2 spend twice as much time watching
TV and videos as they do reading books.

"In a typical day, 47% of babies and toddlers ages 0 through 1 watch TV or DVDs, and those who do watch spend an average of nearly two hours (1:54) doing so. This is an average of :53 among all children
in this age group, compared to an average of :23 a day reading or being read to. Nearly one in three (30%) has a TV in their bedroom. In 2005, among children ages 6-23 months, 19% had a TV in their
bedroom. Looking just at 6- to 23-month-olds in the current study, 29% have a TV in their bedroom. . . .

"Media use varies significantly by race and socio-economic status, but not much by gender.

"RACE AND SOCIO-ECONOMIC STATUS. African- American children spend an average of 4:27 a day with media (including music, reading, and screen media), compared to 2:51 among white children and 3:28 among Hispanics. Children from higher- income families or with more highly educated parents spend less time with media than other children do (for example, 2:47 a day among higher-income children vs. 3:34 among lower-income youth). Twenty percent of children in upper income homes have a TV in their bedroom, compared to 64% of those from lower- income homes. 

"GENDER. The only substantial difference between boys’ and girls’ media use is in console video games. Boys are more likely to have ever played a console video game than girls are (56% vs. 46%), to have a video game player in their bedroom (14% vs. 7%), and to play console video games every day (14% vs. 5%). Boys average :16 a day playing console games, compared to an average of :04 a day for girls."

View Map + Bookmark Entry

Room to Read Donates its 10,000,000th Book October 28, 2011

Room to Read, a non-profit founded by former Microsoft marketing executive John Wood in 2000, and based in San Francisco, California, donated its 10 millionth book to its network of over 12,000 libraries in nine developing countries.

View Map + Bookmark Entry

Texting During the Climb up El Capitan in Yosemite November 2011

Texting in Unusual Contexts:  For more than two weeks in November 2011 climber Tommy Caldwell lived on a nylon ledge hung 1,200 feet up El Capitan, the massive sweep of granite standing sentinel over Yosemite Valley. 

"One of the world’s best all-around rock climbers, he slept on the ledge, cooked on the ledge and went to the bathroom into a receptacle hanging below the ledge. And at the top of this solitary, silent sport, he was being watched by thousands of spectators around the world. . . .

"Caldwell updated his progress on Facebook using his iPhone, which he charged with portable solar panels on the wall. His fans, more than 4,000 of whom he accumulated during his climb, could follow along in real time with commentary from the climber himself. No need to wait days, weeks or months for a print article or video. The Dawn Wall, as Caldwell’s project is known, is the latest example of what has become an increasingly accepted practice among professional climbers and the wider climbing community: from-the-route social media. Observers enjoy it, sponsors encourage it and climbers get to share what is inherently a selfish pursuit" (http://www.nytimes.com/2011/12/10/sports/as-climbers-go-text-it-on-the-mountain-reaction-is-divided.html?hp).

View Map + Bookmark Entry

A Silicon Chip that Mimics How the Brain's Synapses Change in Response to New Information November 2011

In November 2011 a group of MIT researchers created the first computer chip that mimics how the brain's synapses adapt in response to new information. This biological phenomenon, known as plasticity, depends on analog, ion-based communication in the synapse between two neurons. With about 400 transistors, the silicon chip can simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other.

"There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. A synapse is the gap between two neurons (known as the presynaptic and postsynaptic neurons). The presynaptic neuron releases neurotransmitters, such as glutamate and GABA, which bind to receptors on the postsynaptic cell membrane, activating ion channels. Opening and closing those channels changes the cell’s electrical potential. If the potential changes dramatically enough, the cell fires an electrical impulse called an action potential.

"All of this synaptic activity depends on the ion channels, which control the flow of charged atoms such as sodium, potassium and calcium. Those channels are also key to two processes known as long-term potentiation (LTP) and long-term depression (LTD), which strengthen and weaken synapses, respectively. "

"The MIT researchers designed their computer chip so that the transistors could mimic the activity of different ion channels. While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell. 

“ 'We can tweak the parameters of the circuit to match specific ion channels,' Poon says. 'We now have a way to capture each and every ionic process that’s going on in a neuron.'

"Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says" (http://www.mit.edu/newsoffice/2011/brain-chip-1115.html, accessed 01-01-2014).

Rachmuth, G., Shouval, H. Z., Bear, M. F., Poon, C.-S., "A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity," Proceedings of the National Academy of Sciences 108, no. 49 (December 6, 2011): E1266-E1274. doi: 10.1073/pnas.1106161108.
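The strengthening (LTP) and weakening (LTD) behavior described above is commonly modeled in software with a pair-based spike-timing-dependent plasticity (STDP) rule. The Python sketch below illustrates only that textbook rule; the constants and function name are illustrative assumptions, not values taken from the MIT chip or the cited paper.

```python
import math

# Pair-based STDP: the standard textbook rule, shown here to illustrate
# the LTP/LTD behavior the chip emulates. Constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012     # maximum potentiation / depression
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants, in milliseconds

def stdp_weight_change(dt_ms):
    """Synaptic weight change for a pre/post spike pair separated by
    dt_ms (= t_post - t_pre). Pre-before-post strengthens the synapse
    (LTP); post-before-pre weakens it (LTD)."""
    if dt_ms > 0:    # pre fired first: long-term potentiation
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:  # post fired first: long-term depression
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

print(stdp_weight_change(10))   # positive: synapse strengthened
print(stdp_weight_change(-10))  # negative: synapse weakened
```

The point of the MIT chip is that its analog transistor currents reproduce this kind of timing-dependent behavior physically, rather than computing it digitally as this sketch does.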

View Map + Bookmark Entry

Action Comics #1 Superman sells for $2.16 Million November 11 – November 30, 2011

A nearly pristine copy of the first issue of Action Comics, containing the first appearance of Superman, sold for $2.16 million. The copy, which may have been stolen from the collection of the actor Nicolas Cage, was graded at 9.0 on a scale of 1 to 10. The copy was auctioned starting November 11 online at www.comicconnect.com with a reserve price of $900,000. The auction sale was completed on November 30, 2011. Neither the name of the buyer nor the seller was disclosed by the auction house.

Though 200,000 copies were printed, only about 100 copies of Action Comics No. 1 are believed to be in existence, and only a handful of those in good condition.  

View Map + Bookmark Entry

The Swedish Twitter University Begins November 14, 2011

Rachel Armstrong, architectural designer, Senior TED Fellow, and Co-Director of AVATAR (Advanced Virtual and Technological Architectural Research Laboratory) at the University of Greenwich, presented "Beyond Sustainability #STU01" at Svenska Twitteruniversitetet, the Swedish Twitter University.

The Swedish Twitter University, founded by Marcus Nilsson, conducts micro-courses that consist of 25 tweets (i.e. 140-character messages) that are presented over an appointed hour, during which the instructor addresses questions, also in the form of tweets. It is unclear whether this "university," which might more accurately be characterized as an educational forum, has any association with a physical address; it appears to exist only in cyberspace.
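The course format lends itself to a simple sketch: splitting a prepared lecture into tweet-sized chunks at word boundaries. The Python below is a minimal illustration; the function name, sample text, and greedy splitting strategy are my own assumptions, not anything published by the Swedish Twitter University.

```python
def tweetify(text, limit=140):
    """Split a lecture text into tweet-sized chunks at word boundaries,
    each at most `limit` characters (140 was Twitter's cap in 2011)."""
    words, tweets, current = text.split(), [], ""
    for w in words:
        candidate = (current + " " + w).strip()
        if len(candidate) <= limit:
            current = candidate       # word still fits in this tweet
        else:
            tweets.append(current)    # emit the full tweet, start a new one
            current = w
    if current:
        tweets.append(current)
    return tweets

# Illustrative placeholder text, not the actual lecture
lecture = "Sustainability is not enough. " * 30
for t in tweetify(lecture)[:3]:
    print(len(t), t)
```

A real micro-course would also number the tweets (e.g. "1/25") and budget those characters into the limit; this sketch omits that detail.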

View Map + Bookmark Entry

Digital Books Represent 25% of Sales of Some Categories of Books but Less than 5% of Children's Books November 20, 2011

An article entitled "For Their Children, Many E-Book Fans Insist on Paper," published in The New York Times, suggested that many parents, including those highly sophisticated with computing and the Internet, believe that children learn to read most efficiently from physical books because interacting with the physical object continues to have value beyond content alone, especially, it is believed, in developmental stages of reading, and in learning the "reading habit." Electronic books and e-book readers, they believe, represent distractions for young children. 

"As the adult book world turns digital at a faster rate than publishers expected, sales of e-books for titles aimed at children under 8 have barely budged. They represent less than 5 percent of total annual sales of children’s books, several publishers estimated, compared with more than 25 percent in some categories of adult books" (http://www.nytimes.com/2011/11/21/business/for-their-children-many-e-book-readers-insist-on-paper.html?hp, accessed 11-20-2011)

View Map + Bookmark Entry

Rapid Growth of the Digital Textbook Market in the U.S. November 23, 2011

"According to the Student Monitor, a private student market research company based in [Ridgewood] New Jersey, about 5 percent of all textbooks acquired in the autumn in the United States were digital textbooks. That is more than double the 2.1 percent of the spring semester.  

"Simba Information, a research company specializing in publishing, estimates that electronic textbooks will generate $267.3 million this year in sales in the United States. That is a rise of 44.3 percent over last year. The American Association of Publishers estimates that the college textbooks industry generated a total of $4.58 billion in sales last year.

"Kathy Micky, a senior analyst at Simba, said digital textbooks were expected 'to be the growth driver for the industry in the future.' Her company estimates that by 2013, digital textbooks will make up 11 percent of the textbook market revenue" (http://www.nytimes.com/2011/11/24/world/americas/schoolwork-gets-swept-up-in-rush-to-go-digital.html?hpw, accessed 11-25-2011).

View Map + Bookmark Entry

Google Maps 6.0 for Android Introduces Indoor Maps and a "My Location" Feature November 29, 2011

“ 'Where am I?' and 'What's around me?' are two questions that cartographers, and Google Maps, strive to answer. With Google Maps’ 'My Location' feature, which shows your location as a blue dot, you can see where you are on the map to avoid walking the wrong direction on city streets, or to get your bearings if you’re hiking an unfamiliar trail. Google Maps also displays additional details, such as places, landmarks and geographical features, to give you context about what’s nearby. And now, Google Maps for Android enables you to figure out where you are and see where you might want to go when you’re indoors.

"When you’re inside an airport, shopping mall or retail store, a common way to figure out where you are is to look for a freestanding map directory or ask an employee for help. Starting today, with the release of Google Maps 6.0 for Android, that directory is brought to the palm of your hands, helping you determine where you are, what floor you're on, and where to go indoors.

"Detailed floor plans automatically appear when you’re viewing the map and zoomed in on a building where indoor map data is available. The familiar 'blue dot' icon indicates your location within several meters, and when you move up or down a level in a building with multiple floors, the interface will automatically update to display which floor you’re on. All this is achieved by using an approach similar to that of ‘My Location’ for outdoor spaces, but fine tuned for indoors." (http://googleblog.blogspot.com/2011/11/new-frontier-for-google-maps-mapping.html, accessed. 12-1-2011)

View Map + Bookmark Entry

The Cost of Sequencing a Human Genome Drops to $10,500 November 30, 2011

"The cost of sequencing a human genome — all three billion bases of DNA in a set of human chromosomes — plunged to $10,500 last July from $8.9 million in July 2007, according to the National Human Genome Research Institute.  

"That is a decline by a factor of more than 800 over four years. By contrast, computing costs would have dropped by perhaps a factor of four in that time span.  

"The lower cost, along with increasing speed, has led to a huge increase in how much sequencing data is being produced. World capacity is now 13 quadrillion DNA bases a year, an amount that would fill a stack of DVDs two miles high, according to Michael Schatz, assistant professor of quantitative biology at the Cold Spring Harbor Laboratory on Long Island.

"There will probably be 30,000 human genomes sequenced by the end of this year, up from a handful a few years ago, according to the journal Nature. And that number will rise to millions in a few years" (http://www.nytimes.com/2011/12/01/business/dna-sequencing-caught-in-deluge-of-data.html?_r=1&hp, accessed 12-02-2011).

View Map + Bookmark Entry

Signalling the Shift from Print to Digital and to More Accurate Metrics of the Effectiveness of Advertising November 30, 2011

Time Warner hired Laura Lang, former CEO of Digitas, which characterized itself as the largest digital advertising agency, "with over 3000 employees in 32 offices across 19 countries," to run its Time, Inc. magazine division, signalling a shift in focus from print to digital at the largest magazine publisher in the United States, and a transition to more accurate metrics of the effectiveness of advertising.

"It’s a bold hire and Ms. Lang has an excellent reputation, but it’s a bracing moment for the print romantics among us. Time Inc., the home of Olympian brands like Time, People and Fortune, will be run by an executive who would not know a print run from a can of green beans.  

"As recently as, well, the day before Ms. Lang was hired, it would have been unthinkable that a large consumer magazine group would be run by someone with plenty of experience buying ads for clients, but with no experience selling them. But Ms. Lang knows other things that could come in handy, including how to use multimedia and social media to increase reader engagement in a way magazines rarely achieve.  

"As the head of Digitas, a unit of the Publicis Groupe, she was at the vanguard of a movement to direct advertising dollars toward specific audiences and away from big advertising buys adjacent to articles — in other words, away from businesses like Time Inc.  

"As far back as five years ago she articulated the shift.  

“ 'We’re seeing clients shift dollars into channels that can get a direct engagement, that can get a direct, accountable experience' she said in an interview with Direct, a marketing industry publication.  

"That doesn’t sound like a two-page ad spread in Fortune to me. 

"Traditional media has historically done well by selling inefficiency. In order to reach those among People magazine’s 3.5 million readers who were interested in buying a car or a coffeepot, you had to buy an ad that everyone else flipped past. As a serious practitioner of the science of audience-and-data-driven buys, Ms. Lang helped clients erase those inefficiencies through targeted buys, allowing them to get the milk without having to buy the whole cow.  

"A good magazine will do many things for a brand, including bestowing luster and creating awareness by osmosis. What magazines have not been able to do is to provide reliable measures of effectiveness. Part of the reason that magazine companies have so eagerly hopped on the iPad and other tablets is that those products will finally be able to provide data showing a return on the investment of advertising dollars. It isn’t a reach to bet that Ms. Lang will help magazine publishers be a part of a media age built on metrics" (http://www.nytimes.com/2011/12/05/business/media/at-time-inc-a-leader-to-help-it-fit-the-new-digital-order.html?_r=1&src=dayp, accessed 12-11-2011).

View Map + Bookmark Entry

The First Widely Accepted Index of the Talmud December 2011

Immigration lawyer and Talmudic scholar Daniel Retter compiled the first widely accepted index of the Talmud, a text of more than 1.8 million words, roughly 1,500 years after the Talmud itself was compiled. The index took Retter seven years to complete. Under the title HaMafteach, Retter's work was published by Feldheim Publishers of Nanuet, New York, in both Hebrew and English editions. The two-volume index has 6,600 topical entries and 27,000 subtopical entries that point students to the treatises and pages of text they are seeking.

View Map + Bookmark Entry

Amazon.com Sold More Than 4 Million Kindles in December 2011 December 2011

"SEATTLE--(BUSINESS WIRE)--Dec. 29, 2011-- (NASDAQ: AMZN) - Amazon.com, Inc. today announced that 2011 was the best holiday ever for the Kindle family as customers purchased millions of Kindle Fires and millions of Kindle e-readers. Authors also continue to benefit from the success of Kindle — the #1 and #4 best-selling Kindle books released in 2011 were both published independently by their authors using Kindle Direct Publishing (KDP)."

View Map + Bookmark Entry

Statistics on European and U.S. eBook Sales December 1, 2011

". . . Electronic book sales are growing quickly. The European Federation of Publishers, an industry group based in Brussels, estimated that e-book sales would rise 20 percent or more this year from an estimated €350 million, or $462 million, in 2010.  

"Sales of printed books, which account for more than 98 percent of all book purchases, are stagnating. Sales of all books reached €23.5 billion last year, down 2 percent after adjusting for currency fluctuations, from their level in 2007." 

"In 2010, U.S. e-book sales rose to $878 million, or 6.4 percent of the trade book market, according to BookStats, an annual survey of the Association of American Publishers and the Book Industry Study Group. In adult fiction, e-books accounted for 13.6 percent of all revenue in 2010, the group said" (http://www.nytimes.com/2011/12/02/technology/eu-e-book-sales-hampered-by-tax-structure.html?_r=1&hpw, accessed 12-02-2011).

View Map + Bookmark Entry

More than 10 Billion Android Apps Downloaded December 6, 2011

According to the Official Google Blog, app downloads from the Android Market at the beginning of December 2011 exceeded 10 billion downloads, with a growth rate of one billion app downloads per month.

View Map + Bookmark Entry

100 Million Words Translated per Week by Google Translate December 8, 2011

According to an infographic released by Google, by December 2011 Google Translate was translating 100 million words in 200 different languages every week.

View Map + Bookmark Entry

IBM's Watson Question Answering System to Team with Cedars-Sinai Oschin Comprehensive Cancer Institute December 16, 2011

Health insurance provider WellPoint announced that the Cedars-Sinai Samuel Oschin Comprehensive Cancer Institute in Los Angeles would provide clinical expertise to help shape WellPoint's new health care solutions utilizing IBM's Watson question answering system.

"It is estimated that new clinical research and medical information doubles every five years, and nowhere is this knowledge advancing more quickly than in the complex area of cancer care.  

"WellPoint believes oncology is one of the medical fields that could greatly benefit from this technology, given IBM Watson's ability to respond to inquiries posed in natural language and to learn from the responses it generates. The WellPoint health care solutions will draw from vast libraries of information including medical evidence-based scientific and health care data, and clinical insights from institutions like Cedars-Sinai. The goal is to assist physicians in evaluating evidence-based treatment options that can be delivered to the physician in a matter of seconds for assessment. WellPoint and Cedars-Sinai envision that this valuable enhancement to the decision-making process could empower physician-patient discussions about the best and most effective courses of treatment and improve the overall quality of patient care.  

"Cedars-Sinai was selected as WellPoint's partner based on its reputation as one of the nation's premier cancer institutions and its proven results in the diagnosis and treatment of complex cancers. Cedars-Sinai has experience and demonstrated success in working with technology innovators and shares WellPoint's commitment to improving the quality, efficiency and effectiveness of health care through innovation and technology.  

"Cedars-Sinai's oncology experts will help develop recommendations on appropriate clinical content for the WellPoint health care solutions. They will also assist in the evaluation and testing of the specific tools that WellPoint plans to develop for the oncology field utilizing IBM's Watson technology. The Cedars-Sinai cancer experts will enter hypothetical patient scenarios, evaluate the proposed treatment options generated by IBM Watson, and provide guidance on how to improve the content and utility of the treatment options provided to the physicians.  

"Leading Cedars-Sinai's efforts is M. William Audeh, M.D., medical director of its Samuel Oschin Comprehensive Cancer Institute. Dr. Audeh will work closely with WellPoint's clinical experts to provide advice on how the solutions may be best utilized in clinical practice to support increased understanding of the evolving body of knowledge in cancer, including emerging therapies not widely known by community physicians. As the solutions are developed, Dr. Audeh will also provide guidance on how the make the WellPoint offering useful and practical for physicians and patients.

" 'As we design the WellPoint systems that leverage IBM Watson's capabilities, it is essential that we incorporate the highly-specialized knowledge and real-life practice experiences of the nation's premier clinical experts,' said Harlan Levine, MD, executive vice president of WellPoint's Comprehensive Health Solutions. 'The contributions from Dr. Audeh, coupled with the expertise throughout Cedars-Sinai's Samuel Oschin Comprehensive Cancer Institute, will be invaluable to implementing this WellPoint offering and could ultimately benefit millions of Americans across the country.'

"WellPoint anticipates deploying their first offering next year, working with select physician groups in clinical pilots" (http://ir.wellpoint.com/phoenix.zhtml?c=130104&p=irol-newsArticle&ID=1640553&highlight=, accessed 12-17-2011).

View Map + Bookmark Entry

Burning of the Library of l'Institut de l'Egypte December 17, 2011

On December 17, 2011 demonstrators set fire to the Institut de l'Egypte in Cairo. This research center and library, founded by Napoleon in 1798 to carry out research during his Egyptian Campaign, contained some of the most valuable rare books and original research material in Egypt. 

"State news agency MENA said that firemen eventually managed to control it, but state TV reported that the fire damaged the whole building and all of its collections" (http://www.dp-news.com/en/detail.aspx?articleid=106385, accessed 12-18-2011).

A news report published December 20, 2011 described the aftermath:

"CAIRO: Volunteers in white lab coats, surgical gloves and masks were standing on the back of a pickup truck along the banks of the Nile River in Cairo, rummaging through stacks of rare 200-year-old manuscripts that were little more than charcoal debris.  

"The volunteers, ranging from academic experts to appalled citizens, have spent the past two days trying to salvage what's left of some 192,000 books, journals and writings, casualties of Egypt's latest bout of violence.

"Institute d'Egypte, a research center set up by Napoleon Bonaparte during France's invasion in the late 18th century, caught fire during clashes between protesters and Egypt's military over the weekend. It was home to a treasure trove of writings, most notably the handwritten 24-volume Description de l'Egypte, which began during the 1798-1801 French occupation.

"The compilation, which includes 20 years of observations by more than 150 French scholars and scientists, was one of the most comprehensive descriptions of Egypt's monuments, its ancient civilization and contemporary life at the time.

"The Description of Egypt is likely burned beyond repair. Its home, the two-story historic institute near Tahrir Square, is now in danger of collapsing after the roof caved in. 

"The burning of such a rich building means a large part of Egyptian history has ended," the director of the institute, Mohammed Al-Sharbouni, told state television over the weekend. The building was managed by a local non-governmental organization.

"Al-Sharbouni said most of the contents were destroyed in the fire that raged for more than 12 hours on Saturday. Firefighters flooded the building with water, adding to the damage.

"During the clashes a day earlier, parts of the parliament and a transportation authority office caught fire, but those blazes were put out quickly.  

"The violence erupted in Cairo Friday, when military forces guarding the Cabinet building, near the institute, cracked down on a 3-week-old sit-in to demand the country's ruling generals hand power to a civilian authority. At least 14 people have been killed.

"Zein Abdel-Hady, who runs the country's main library, is leading the effort to try and save what's left of the charred manuscripts.

" 'This is equal to the burning of Galileo's books,' Abdel-Hady said, referring to the Italian scientist whose work proposing that the earth revolved around the sun was believed to have been burned in protest in the 17th century.

"Below Abdel-Hady's office, dozens of people sifted through the mounds of debris brought to the library. A man in a surgical coat carried a pile of burned paper with his arms carefully spread, as if cradling a baby.

"The rescuers used newspapers to cover some partially burned books. Bulky machines vacuum-packed delicate paper.

"At least 16 truckloads with around 50,000 manuscripts, some damaged beyond repair, have been moved from the sidewalks outside the US Embassy and the American University in Cairo, both near the burned institute, to the main library, Abdel-Hady said.

"He told The Associated Press that there is no way of knowing what has been lost for good at this stage, but the material was worth tens of millions of dollars - and in many ways simply priceless.

" 'I haven't slept for two days, and I cried a lot yesterday. I do not like to see a book burned,' he said. 'The whole of Egypt is crying.'

"He said that there are four other handwritten copies of the Description of Egypt. The French body of work has also been digitized and is available online.

"There may have been a map of Egypt and Ethiopia, dated in 1753, that was destroyed in the fire. However, another original copy of the map is in Egypt's national library, he said. The gutted institute also housed 16th century letters and manuscripts that were bound and shelved like books.

"The most accessible inventory at the moment for what was housed in the institute is in a 1920's book kept in the US Library of Congress, according to William Kopycki, a regional field director with the Washington D.C.-based library. He said the body of work that was destroyed was essential for researchers of Egyptian history, Arabic studies and Egyptology.

" 'It's a loss of a very important institute that many scholars have visited,' he said during a meeting with Abdel-Hady to evaluate the level of destruction.

"What remains inside the historic building near the site of the clashes are piles of burned furniture, twisted metal and crumbled walls. A double human chain of protesters surrounded the building Monday.

"At a news conference Monday, a general from the country's ruling military council said an investigation was under way to find who set the building on fire. State television aired images of men in plainclothes burning the building and dancing around the fire Saturday afternoon. Protesters also took advantage of the fire, using the institute's grounds to hurl firebombs and rocks at soldiers atop surrounding buildings.

"Volunteer Ahmed El-Bindari said the military shoulders the brunt of responsibility for using its roof as a position to attack protesters before the fire erupted." 

View Map + Bookmark Entry

More than One Trillion Videos Were Played Back on YouTube in 2011 December 20, 2011

"In total, there were more than 1,000,000,000,000 (one trillion) playbacks on YouTube this year (yep, count ‘em, 12 zeroes). That’s about 140 views for every person on the earth. More than twice as many stars as in the Milky Way. And if I had a penny for every … OK, you get my drift, it’s a big number" (http://googleblog.blogspot.com/2011/12/what-were-we-watching-this-year-lets.html, accessed 12-20-2011).

View Map + Bookmark Entry

Sheikh Dr. Sultan Al-Qasimi Pledges to Restore the Library of l'Institut de l'Egypte December 20, 2011

Sheikh Sultan bin Mohammed Al-Qasimi III (in Arabic: سلطان بن محمد القاسمي), governor of the UAE's emirate of Sharjah, and a widely published writer and scholar, pledged to restore the library of the Institut de l'Egypte damaged by fire, and to replace books destroyed or damaged beyond repair. 

" 'All the original documents in my private library I am giving as a gift to the Egyptian Scientific Complex,' Qassemi said in a phone interview from Paris with the independent Egyptian satellite Channel Dream TV. 'I have a rare collection that is not to be found anywhere else.'Qassemi added that he asked for a complete list of all the books that were damaged or lost during the fire and that he would do his best to look for other original copies and give them to the library, known for its collection of priceless books, maps, and manuscripts.  

“ 'What is happening in Egypt is happening to all of us and what I am doing is just a small token of gratitude that all of us, especially people from Sharjah, feel.  Egyptian [institutions] taught us  a lot and we were students in Egyptian universities and no matter what we do, it will not be enough to pay them back,' he said.  

"Qassemi added that he is overseeing the construction of a documents center in Cairo to house all the documents that are now kept in the Egyptian cabinet building, a place seen as unsafe at the moment because of clashes in Tahrir Square and surrounding areas.  

“ 'We will make sure that the documents are safely transferred before more acts of sabotage take place. We have been given the green light by the Egyptian government to do that.'

"He added that he would do his best to preserve Egypt’s heritage as Egypt had always preserved the Arab world.  

“ 'Egypt has always been offering sacrifices and we will never forget what Egyptians did to liberate Kuwait. This alone is invaluable,' Qassemi said.  

"The Egyptian minister of antiquities, Mohamed Ibrahim, said he appreciates Qassemi’s initiative.  

“ 'Sheikh Qassemi has always supported the library and Egypt.'

"Ibrahim added that the French government has also offered to salvage what it can from the Scientific Complex.  

"Among the documents in Qassemi’s possession is a copy of Description de l'Égypte, written at the time of the French expedition to Egypt (1798-1801) and published between 1809 and 1822. The book, which contains a detailed description of Egypt, was a main cause for the uproar that accompanied the fire at the Scientific Complex.  

"According to sources at the Egyptian Ministry of Culture, around 20,000 books and manuscripts were saved from the fire and are currently kept in the cabinet and parliament buildings"  (http://english.alarabiya.net/articles/2011/12/20/183601.html, accessed 12-20-2011).

View Map + Bookmark Entry