4406 entries. 94 themes. Last updated December 26, 2016.

2012 to 2016 Timeline


The Anatomy of an Internet Attack by "Anonymous" 2012

In 2012 the Internet security company Imperva published "Imperva's Hacker Intelligence Summary Report. The Anatomy of an Anonymous Attack."

"During 2011, Imperva witnessed an assault by the hacktivist group ‘Anonymous’ that lasted 25 days. Our observations give insightful information on Anonymous, including a detailed analysis of hacking methods, as well as an examination of how social media provides a communications platform for recruitment and attack coordination. Hacktivism has grown dramatically in the past year and has become a priority for security organizations worldwide. Understanding Anonymous’ attack methods will help organizations prepare if they are ever a target.

"Our observation of an Anonymous campaign reveals:

"› The process used by Anonymous to pick victims as well as recruit and use needed hacking talent.

"› How Anonymous leverages social networks to recruit members and promotes hack campaigns.

"› The specific cyber reconnaissance and attack methods used by Anonymous’ hackers. We detail and sequence the steps Anonymous hackers deploy that cause data breaches and bring down websites.

"Finally, we recommend key mitigation steps that organizations need to help protect against attacks."


Surprisingly Active 21st Century Trade in Medieval Manuscript Books of Hours 2012

"During the Middle Ages, books of hours were more popular than any other text, even the Bible. These intricately illustrated devotional texts began to appear around 1250 and contained a series of psalms meant to be read at eight specific hours of the day, hence their name. Hymns, lessons, and biblical readings were rendered with varying degrees of color and ornamentation. The books were widely owned in Europe until their use was prohibited by the church in the 16th century. Today, says dealer Sam Fogg, of London, books of hours are 'almost the only way you can acquire medieval painting that looks like it was when it was new, with the colors still glowing and the gold still shines.'

“ 'People aren’t aware that you can just buy these things,' says Timothy Bolton, deputy director of Western medieval manuscripts at Sotheby’s London. Experts estimate that approximately 100 books of hours change hands every year at auction or through a dozen or so private dealers. 'Of all the types of manuscripts extant from the Middle Ages, books of hours are easiest to acquire, because so many remain in private hands' notes Sandra Hindman, owner of Les Enluminures, a gallery in Chicago, Paris, and, as of this month, New York, that specializes in the books" (http://artinfo.com/news/story/804319/illuminating-the-surprisingly-accessible-market-for-medieval-books-of-hours, accessed 05-15-2012).


42,182,000 Copies Printed Semi-Monthly in 194 Languages January 2012

In January 2012 The Watchtower announced that it was printing "42,182,000 copies in 194 languages."  This represented a substantial increase over the number of printed copies produced and the number of languages represented in 2007.

In March 2012 www.watchtower.org was available in 438 languages.


Gelatin and Calcium in the Earliest Paper Was Responsible for its Longevity January 2012

Research on Paper Through Time by a University of Iowa team led by Timothy Barrett, director of papermaking facilities at the UI Center for the Book, showed that the earliest paper tended to be the most durable over time because of the high concentrations of gelatin and calcium used in its manufacture. Over three years the team analyzed 1,578 historical papers made between the 14th and the 19th centuries. Barrett and his colleagues devised methods to determine the papers' chemical composition without destroying a sample in the process, a requirement that had limited past research.

“This is news to many of us in the fields of papermaking history and rare book and art conservation,” says Barrett. “The research results will impact the manufacture of modern paper intended for archival applications, and the care and conservation of historical works on paper.”  

Barrett says one possible explanation for the higher quality of the paper in the older samples is that papermakers at the time were attempting to compete with parchment, a tough, enduring material normally made from animal skins. In doing so, they made their papers thick and white and dipped the finished sheets into a dilute warm gelatin solution to toughen them.  

“Calcium compounds were used in making parchment, and they were also used in making paper,” Barrett says. “Turns out they helped prevent the paper from becoming acidic, adding a lot to its longevity.”


NYPL Labs Introduces the Stereogranimator January 2012

In January 2012 NYPL Labs, the digital library development division of the New York Public Library, introduced the Stereogranimator, a website and collaborative program to turn digital copies of analog stereographic photograph pairs into shareable 3D web formats.

"Stereographs, produced by the millions between the 1850s and the 1930s, were a wildly popular form of entertainment, giving viewers a taste of the kind of richly rounded images now readily available on screens of all sizes. No motion was involved, however. Instead, viewers looked through a stereoscope at two slightly different photographs of the same scene, which the brain was tricked into perceiving as a single three-dimensional image.

"The Stereogranimator . . . uses GIF animation to create the illusion of three-dimensionality by flickering back and forth between the two images. Users can adjust the speed, as well as the spatial jump between the images. The tool also generates an old-fashioned anaglyph, one of those blurry, two-toned images that snap into rounded focus when viewed through a stereoscope or vintage blue-red 3-D glasses. . . ." (http://artsbeat.blogs.nytimes.com/2012/01/26/3-d-it-yourself-thanks-to-new-library-site/, accessed 11-02-2013).

The Stereogranimator grew out of a project originated by writer / photographer Joshua Heineman, who in 2008 observed that

"The parallax effect of minor changes between the two perspectives created a sustained sense of dimension that approximated the effect of stereo viewing. When I realized how the effect was working, I set about discovering if I could capture the same illusion by layering both sides of an old stereograph in Photoshop & displaying the result as an animated gif. The effect was more jarring than through a stereoscope but no less magic" (http://stereo.nypl.org/about, accessed 11-02-2013).
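Heineman's Photoshop recipe is straightforward to reproduce programmatically. A minimal sketch using the Pillow imaging library (the function name and file paths are illustrative assumptions, not part of the NYPL tool):

```python
from PIL import Image

def make_wiggle_gif(left_path, right_path, out_path, duration_ms=150):
    """Flicker between the two halves of a stereograph, producing the
    'wiggle 3D' illusion the Stereogranimator generates."""
    left = Image.open(left_path).convert("RGB")
    # Resize the right view so both frames match exactly.
    right = Image.open(right_path).convert("RGB").resize(left.size)
    # An animated GIF alternating the two views; the slight parallax
    # between them is what the brain reads as depth.
    left.save(out_path, save_all=True, append_images=[right],
              duration=duration_ms, loop=0)
```

Adjusting `duration_ms` corresponds to the speed control the Stereogranimator exposes to its users.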


Sales of eBook Readers in 2011 January 5, 2012

"In 2011, manufacturers shipped about 30 million e-book readers over all, up 108 percent from 2010. . . .  

"Then in 2015, the reader market will shrink to 38 million, presumably because consumers will be attracted to tablets.  

"Sales of touch-screen tablets have continued to be strong. Apple’s iPad shipped upward of 40 million units in 2011 alone, according to estimates by Forrester, a research firm.  

"Amazon and Barnes & Noble are blurring the lines between e-readers and tablets with the Kindle Fire and the Nook Tablet. Forrester Research estimates that in the fourth quarter of 2011, Amazon shipped about five million units of the Kindle Fire and Barnes & Noble shipped about two million Tablets" (http://www.nytimes.com/2012/01/06/technology/nook-from-barnes-noble-gains-more-e-book-readers.html?src=rechp, accessed 01-05-2012).


Transforming Google into a Search Engine that Understands Not Only Content but People and Relationships January 10, 2012

"We’re transforming Google into a search engine that understands not only content, but also people and relationships. We began this transformation with Social Search, and today we’re taking another big step in this direction by introducing three new features:  

"1. Personal Results, which enable you to find information just for you, such as Google+ photos and posts—both your own and those shared specifically with you, that only you will be able to see on your results page;  

"2. Profiles in Search, both in autocomplete and results, which enable you to immediately find people you’re close to or might be interested in following; and, 

"3. People and Pages, which help you find people profiles and Google+ pages related to a specific topic or area of interest, and enable you to follow them with just a few clicks. Because behind most every query is a community. 

"Together, these features combine to create Search plus Your World. Search is simply better with your world in it, and we’re just getting started" (http://googleblog.blogspot.com/2012/01/search-plus-your-world.html, accessed 01-11-2012).


The Cost of Sequencing a Human Genome Drops to $1000 January 10, 2012

On January 10, 2012 Jonathan M. Rothberg, CEO of the Guilford, Connecticut-based biotech company Ion Torrent, announced a new tabletop sequencer called the Ion Proton at the Consumer Electronics Show in Las Vegas. At $149,000, the new machine was about three times the price of the Personal Genome Machine, the sequencer the company had debuted about a year earlier. But the DNA-reading chip inside it was 1,000 times more powerful, according to Rothberg, allowing the device to sequence an entire human genome in a day for $1,000, a price the biotech industry had been working toward for years because it would bring the cost down to the level of a medical test.

" 'The technology got better faster than we ever imagined,' Rothberg says. 'We made a lot of progress on the chemistry and software, then developed a new series of chips from a new foundry.' The result is a technology progression that has moved faster than Moore's law, which predicts that microchips will double in power roughly every two years.

"Ion Torrent's semiconductor-based approach for sequencing DNA is unique. Currently, optics-based sequencers, primarily from Illumina, a San Diego-based company, dominate the human genomics field. But, while the optics-based sequencers are generally considered more accurate, these machines cost upwards of $500,000, putting them out of reach for most clinicians. Meanwhile, at Ion Torrent's price, "you can imagine one in every doctor's office," says Richard Gibbs, director of Baylor College of Medicine's human genome sequencing center in Houston, which will be among the first research centers to receive a Proton sequencer.  

"The new Ion Torrent sequencer will also allow researchers to buy a chip that sequences only exons, the regions of the genome that encode proteins. Exons only account for about 5 percent of the human genome, according to the National Human Genome Research Institute, but they are where most disease-causing mutations occur, making so-called exome sequencing a faster and potentially cheaper option for many researchers. Although it's the same price as the genome chip, the Ion Torrent exome chip can sequence two exomes at a time, bringing the per-sequence cost down to $500.  

" 'Some researchers want to sequence single genes, others want to do exomes, and others—for example, cancer researchers—will want to sequence whole genomes, so all three are going to coexist,' says Rothberg. 'It's about finding the right tool for the problem.'  

"Whether Ion Torrent's new technology will be enough to make it the dominant supplier of these tools remains to be seen. A day after the company debuted the Proton sequencer, Illumina also announced that it, too, had reached the $1,000 genome milestone" (http://www.technologyreview.com/biomedicine/39458/?nlid=nldly&nld=2012-01-13, accessed 01-13-2012).


The Smallest Magnetic Data Storage Unit Uses Just 12 Atoms per Bit January 13, 2012

Sebastian Loth, Susanne Baumann, Christopher P. Lutz, D. M. Eigler, and Andreas J. Heinrich, all of whom are affiliated with IBM Research - Almaden, San Jose, CA, and some of whom are affiliated with the Max Planck Research Group - Dynamics of Nanoelectronic Systems and the Department of Physics, University of Basel, published "Bistability in Atomic-Scale Antiferromagnets," Science, Vol. 335, no. 6065, 196-199. DOI: 10.1126/science.1214131

The authors built the world's smallest magnetic data storage unit, which uses just twelve atoms per bit, the basic unit of information, and squeezes a whole byte (8 bits) into as few as 96 atoms. For comparison, in 2012 a hard drive uses more than half a billion atoms per byte.

"The nanometre data storage unit was built atom by atom with the help of a scanning tunneling microscope (STM) at IBM's Almaden Research Center in San Jose, California. The researchers constructed regular patterns of iron atoms, aligning them in rows of six atoms each. Two rows are sufficient to store one bit. A byte correspondingly consists of eight pairs of atom rows. It uses only an area of 4 by 16 nanometres (a nanometre being a millionth of a millimetre). 'This corresponds to a storage density that is a hundred times higher compared to a modern hard drive,' explains Sebastian Loth of CFEL, lead author of the "Science" paper.  

"Data are written into and read out from the nano storage unit with the help of an STM. The pairs of atom rows have two possible magnetic states, representing the two values '0' and '1' of a classical bit. An electric pulse from the STM tip flips the magnetic configuration from one to the other. A weaker pulse allows to read out the configuration, although the nano magnets are currently only stable at a frosty temperature of minus 268 degrees Centigrade (5 Kelvin). 'Our work goes far beyond current data storage technology,' says Loth. The researchers expect arrays of some 200 atoms to be stable at room temperature. Still it will take some time before atomic magnets can be used in data storage.

First antiferromagnetic data storage  

"For the first time, the researchers have managed to employ a special form of magnetism for data storage purposes, called antiferromagnetism. Different from ferromagnetism, which is used in conventional hard drives, the spins of neighbouring atoms within antiferromagnetic material are oppositely aligned, rendering the material magnetically neutral on a bulk level. This means that antiferromagnetic atom rows can be spaced much more closely without magnetically interfering with each other. Thus, the scientist managed to pack bits only one nanometre apart" (http://www.desy.de/information__services/press/pressreleases/@@news-view?id=2141&lang=eng, accessed 01-12-2012)
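The geometry quoted above is enough to check the "hundred times higher" density claim. A back-of-the-envelope sketch, where the 2012 hard-drive figure (~0.75 terabit per square inch) is an assumed value, not from the article:

```python
# One byte = 8 pairs of atom rows occupying a 4 x 16 nanometre patch.
NM_PER_INCH = 2.54e7                      # nanometres per inch

byte_area_nm2 = 4 * 16                    # area of one byte in nm^2
bits_per_nm2 = 8 / byte_area_nm2          # 0.125 bit per square nanometre
terabits_per_in2 = bits_per_nm2 * NM_PER_INCH**2 / 1e12   # ~81 Tbit/in^2

hdd_terabits_per_in2 = 0.75               # assumed 2012 hard-drive density
ratio = terabits_per_in2 / hdd_terabits_per_in2
print(f"{terabits_per_in2:.0f} Tbit/in^2, about {ratio:.0f}x a 2012 drive")
```

Under that assumption the atomic array works out to roughly a hundredfold improvement, consistent with Loth's comparison.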


Slides of Fossils Collected by Darwin on the Beagle are Rediscovered January 17, 2012

British paleontologist Howard Falcon-Lang of the University of London rediscovered a "treasure trove" of microscopic slides of fossils, including some collected by Charles Darwin, in an old cabinet in the British Geological Survey. The fossils, which were lost, or perhaps more accurately, hidden and forgotten, for 165 years, were part of a slide collection assembled by British botanist and evolutionist Joseph Dalton Hooker, who was Darwin's best friend. The slides were photographed and made available through an online museum exhibit. This was among the most significant historical discoveries ever made of primary source material concerning Charles Darwin and the Voyage of the Beagle.

"Falcon-Lang's find was a collection of 314 slides of specimens collected by Darwin and other members of his inner circle, including John [sic] Hooker — a botanist and dear friend of Darwin — and the Rev. John Henslow, Darwin's mentor at Cambridge, whose daughter later married Hooker.  

"The first slide pulled out of the dusty corner at the British Geological Survey turned out to be one of the specimens collected by Darwin during his famous expedition on the HMS Beagle, which changed the young Cambridge graduate's career and laid the foundation for his subsequent work on evolution.  

"Falcon-Lang said the unearthed fossils — lost for 165 years — show there is more to learn from a period of history scientists thought they knew well.  

" 'To find a treasure trove of lost Darwin specimens from the Beagle voyage is just extraordinary,' Falcon-Lang added. 'We can see there's more to learn. There are a lot of very, very significant fossils in there that we didn't know existed.' He said one of the most 'bizarre' slides came from Hooker's collection — a specimen of prototaxites, a 400 million-year-old tree-sized fungus.  

"Hooker had assembled the collection of slides while briefly working for the British Geological Survey in 1846, according to Royal Holloway, University of London.  

"The slides — 'stunning works of art,' according to Falcon-Lang — contain bits of fossil wood and plants ground into thin sheets and affixed to glass in order to be studied under microscopes. Some of the slides are half a foot long (15 centimeters), 'great big chunks of glass,' Falcon-Lang said.  

" 'How these things got overlooked for so long is a bit of a mystery itself,' he mused, speculating that perhaps it was because Darwin was not widely known in 1846 so the collection might not have been given 'the proper curatorial care.'  

"Royal Holloway, University of London said the fossils were 'lost' because Hooker failed to number them in the formal 'specimen register' before setting out on an expedition to the Himalayas. In 1851, the 'unregistered' fossils were moved to the Museum of Practical Geology in Piccadilly before being transferred to the South Kensington's Geological Museum in 1935 and then to the British Geological Survey's headquarters near Nottingham 50 years later, the university said" (http://www.nytimes.com/aponline/2012/01/17/world/europe/AP-EU-Britain-Darwin-Fossils.html?scp=1&sq=darwin+slides&st=nyt, accessed 01-19-2012).


Major Websites Go Dark to Protest Web Censorship Legislation January 17, 2012

On January 17, 2012 Wikipedia and WordPress went dark to protest the potential passage of two bills under consideration by the U.S. Congress: the PROTECT IP Act (PIPA) in the Senate and the Stop Online Piracy Act (SOPA) in the House.

According to the Official Google Blog:

"♦ PIPA & SOPA will censor the web. These bills would grant new powers to law enforcement to filter the Internet and block access to tools to get around those filters. We know from experience that these powers are on the wish list of oppressive regimes throughout the world. SOPA and PIPA also eliminate due process. They provide incentives for American companies to shut down, block access to and stop servicing U.S. and foreign websites that copyright and trademark owners allege are illegal without any due process or ability of a wrongfully targeted website to seek restitution.

" ♦ PIPA & SOPA will risk our industry’s track record of innovation and job creation. These bills would make it easier to sue law-abiding U.S. companies. Law-abiding payment processors and Internet advertising services can be subject to these private rights of action. SOPA and PIPA would also create harmful (and uncertain) technology mandates on U.S. Internet companies, as federal judges second-guess technological measures used by these companies to stop bad actors, and potentially impose inconsistent injunctions on them.

" ♦ PIPA & SOPA will not stop piracy. These bills wouldn’t get rid of pirate sites. Pirate sites would just change their addresses in order to continue their criminal activities. There are better ways to address piracy than to ask U.S. companies to censor the Internet. The foreign rogue sites are in it for the money, and we believe the best way to shut them down is to cut off their sources of funding. As a result, Google supports alternative approaches like the OPEN Act" (http://googleblog.blogspot.com/2012/01/dont-censor-web.html, accessed 01-19-2012)


Apple Introduces iBooks 2, iBooks Author, and iTunes U January 19, 2012

Apple released iBooks 2, a free app to support digital textbooks that could display interactive diagrams, audio and video. At a news conference at the Guggenheim Museum in New York the company demonstrated a biology textbook featuring 3-D models, searchable text, photo galleries and flash cards for studying. Apple said high school textbooks from its initial publishing partners, including Pearson, McGraw-Hill and Houghton Mifflin Harcourt, would cost $15 or less.  

"Apple also announced a free tool called iBooks Author, a piece of Macintosh software that allows people to make these interactive textbooks. The tool includes templates designed by Apple, which publishers and authors can customize to suit their content. It requires no programming knowledge and will be available Thursday. 

"The company also unveiled the iTunes U app for the iPad, which allows teachers to build an interactive syllabus for their coursework. Students can load the syllabus in iTunes U and, for example, tap to open an electronic textbook and go directly to the assigned chapter. Teachers can use iTunes U to create full online courses with podcasts, video, documents and books" (http://bits.blogs.nytimes.com/2012/01/19/apple-unveils-tools-for-digital-textbooks/?nl=technology&emc=cta4, accessed 01-19-2012). 


Discovery of the Afghan Genizah January 23, 2012

On January 23, 2012 Reuters reported from Kabul that a cache of ancient Jewish scrolls had been discovered in Samangan Province of northern Afghanistan. Written in Hebrew, Aramaic, Judeo-Arabic and Judeo-Persian – the Persian and Afghan Jews' long-lost equivalent of Yiddish, which was likewise written in Hebrew letters – the manuscripts are the first physical evidence of a Jewish community in Afghanistan a millennium ago.

"The 150 or so documents, dated from the 11th century, were found in Afghanistan's Samangan province and most likely smuggled out -- a sorry but common fate for the impoverished and war-torn country's antiquities.  

"Israeli emeritus professor Shaul Shaked, who has examined some of the poems, commercial records and judicial agreements that make up the treasure, said while the existence of ancient Afghan Jewry is known, their culture was still a mystery.  'Here, for the first time, we see evidence and we can actually study the writings of this Jewish community. It's very exciting,' Shaked told Reuters by telephone from Israel, where he teaches at the Comparative Religion and Iranian Studies department at the Hebrew University of Jerusalem.  

"The hoard is currently being kept by private antique dealers in London, who have been producing a trickle of new documents over the past two years, which is when Shaked believes they were found and pirated out of Afghanistan in a clandestine operation.  

"It is likely they belonged to Jewish merchants on the Silk Road running across Central Asia, said T. Michael Law, a British Academy Postdoctoral Fellow at Oxford University's Center for Hebrew and Jewish Studies.  

" 'They might have been left there by merchants travelling along the way, but they could also come from another nearby area and deposited for a reason we do not yet understand,' Law said" (http://www.reuters.com/article/2012/01/23/us-afghanistan-jewish-scrolls-idUSTRE80M18W20120123, accessed 01-03-2013).

In March 2012 it was reported that approximately 200 documents had been found from the Afghan Genizah (Geniza), the most significant find of Hebrew manuscripts since the Cairo Genizah, discovered late in the nineteenth century.

On January 3, 2013 The National Library of Israel in Jerusalem announced that after long negotiations with antiquities dealers it had purchased 29 manuscripts from the Afghan Geniza "out of the hundreds that are said to be available."


Technological Unemployment: Are Robots Replacing Workers? January 23, 2012 – January 13, 2013

On January 23, 2012 Erik Brynjolfsson and Andrew McAfee of MIT issued Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy.

Drawing on research by their team at the Center for Digital Business at MIT, the authors show that digital technologies are rapidly encroaching on skills that used to belong to humans alone.

"This phenomenon is both broad and deep, and has profound economic implications. Many of these implications are positive; digital innovation increases productivity, reduces prices (sometimes to zero), and grows the overall economic pie.

"But digital innovation has also changed how the economic pie is distributed, and here the news is not good for the median worker. As technology races ahead, it can leave many people behind. Workers whose skills have been mastered by computers have less to offer the job market, and see their wages and prospects shrink. Entrepreneurial business models, new organizational structures and different institutions are needed to ensure that the average worker is not left behind by cutting-edge machines.

"In Race Against the Machine Brynjolfsson and McAfee bring together a range of statistics, examples, and arguments to show that technological progress is accelerating, and that this trend has deep consequences for skills, wages, and jobs. The book makes the case that employment prospects are grim for many today not because technology has stagnated, but instead because we humans and our organizations aren't keeping up."

About a year later, on January 13, 2013, CBS television's 60 Minutes broadcast a report entitled "Are robots hurting job growth?" that took the viewpoint expressed in Brynjolfsson and McAfee's book (accessed 01-27-2013).

Following up on the issue, on January 23, 2013 John Markoff published an article in The New York Times entitled "Robot Makers Spread Global Gospel of Automation." Markoff reported that Henrik I. Christensen, the Kuka Chair of Robotics at Georgia Institute of Technology's College of Computing, was highly critical of the 60 Minutes report.

"During his talk, Dr. Christensen said that the evidence indicated that the opposite was true. While automation may transform the work force and eliminate certain jobs, it also creates new kinds of jobs that are generally better paying and that require higher-skilled workers.

" 'We see today that the U.S. is still the biggest manufacturing country in terms of dollar value,' Dr. Christensen said. 'It’s also important to remember that manufacturing produces more jobs in associated areas than anything else.'

"An official of the International Federation of Robotics acknowledged that the automation debate had sprung back to life in the United States, but he said that America was alone in its anxiety over robots and automation.

" 'This is not happening in either Europe or Japan,' said Andreas Bauer, chairman of the federation’s industrial robot suppliers group and an executive at Kuka Robotics, a German robot maker.

"To buttress its claim that automation is not a job killer but instead a way for the United States to compete against increasingly advanced foreign competitors, the industry group reported findings on Tuesday that it said it would publish in February. The federation said the industry would directly and indirectly create from 1.9 million to 3.5 million jobs globally by 2020.

"The federation held a news media event at which two chief executives of small American manufacturers described how they had been able to both increase employment and compete against foreign companies by relying heavily on automation and robots.

“ 'Automation has allowed us to compete on a global basis. It has absolutely created jobs in southwest Michigan,' said Matt Tyler, chief executive of Vickers Engineering, an auto parts supplier. 'Had it not been for automation, we would not have beat our Japanese competitor; we would not have beat our Chinese competitor; we would not have beat our Mexican competitor. It’s a fact.'

"Also making the case was Drew Greenblatt, the widely quoted president and owner of Marlin Steel, a Baltimore manufacturer of steel products that has managed to expand and add jobs by deploying robots and other machines to increase worker productivity.

“ 'In December, we won a job from a Chicago company that for over a decade has bought from China,' he said. 'It’s a sheet-metal bracket; 160,000 sheet-metal brackets, year in, year out. They were made in China, now they’re made in Baltimore, using steel from a plant in Indiana and the robot was made in Connecticut.'

"A German robotics engineer argued that automation was essential to preserve jobs and also vital to make it possible for national economies to support social programs.

“ 'Countries that have high productivity can afford to have a good social system and a good health system,' said Alexander Verl, head of the Fraunhofer Institute for Manufacturing Engineering in Germany. “You see that to some extent in Germany or in Sweden. These are countries that are highly automated but at the same time they spend money on elderly care and the health system.'

"In the report presented Tuesday by the federation, the United States lags Germany, South Korea and Japan in the density of manufacturing robots employed (measured as the number of robots per 10,000 human workers). South Korea, in particular, sharply increased its robot-to-worker ratio in the last three years and Germany has twice the robot density as the United States, according to a presentation made by John Dulchinos, a board member of the Robot Industries Association and the chief executive of Adept Technology, a Pleasanton, Calif., maker of robots. 

"The report indicates that although China and Brazil are increasing the number of robots in their factories, they still trail the advanced manufacturing countries.  

"Mr. Dulchinos said that the United States had only itself to blame for the decline of its manufacturing sector in the last decade.

“ 'I can tell you that in the late 1990s my company’s biggest segment was the cellular phone market,' he said. 'Almost overnight that industry went away, in part because we didn’t do as good a job as was required to make that industry competitive.'

"He said that if American robots had been more advanced it would have been possible for those companies to maintain the lowest cost of production in the United States.  

“ 'They got all packed up and shipped to China,' Mr. Dulchinos said. 'And so you fast-forward to today and there are over a billion cellphones produced a year and not a single one is produced in the United States.'

"Yet, in the face of growing anxiety about the effects of automation on the economy, there were a number of bright spots. The industry is now generating $25 billion in annual revenue. The federation expects 1.6 million robots to be produced each year by 2015."


Creative Destruction of the Book Trade by Amazon? February 8, 2012

" 'If you want a picture of the future, imagine a boot stamping on a human face — forever,' George Orwell wrote in 'Nineteen Eighty-Four.' In 'Animal Farm,' he concluded that revolutions are inevitably betrayed by their leaders. His novel 'Burmese Days' ends with the hero killing himself because he is unfit to live in this sour world. He shoots his dog too.  

"As a rule, modern civilization disappointed Orwell when it did not actually sicken him. But in at least one respect he was way too optimistic. Bookselling, he wrote in Fortnightly in November 1936, 'is a humane trade which is not capable of being vulgarized beyond a certain point. The combines can never squeeze the small independent bookseller out of existence as they have squeezed the grocer and the milkman.'

"Jump forward three-quarters of a century, and a certain Seattle-based combine is being accused of exactly that. All sorts of merchants, but particularly booksellers, were infuriated by Amazon’s effort before the holidays to use shops on Main Street and in malls as showrooms for people to check out items before ordering them more cheaply online. The retailer’s refusal to collect sales tax is a persistent grievance. Independent booksellers have even been forced into the novel position of hoping that their one-time foe, Barnes & Noble, survives so that it can serve as a bulwark against Amazon. Publishers, if anything, are more fearful than booksellers.  

"Now take a look at the cover of Bloomberg Businessweek two weeks ago. It shows a book in flames with the headline, 'Amazon wants to burn the book business.' What was remarkable was not just the overt Nazi iconography but the fact that it did not cause any particular uproar. In the struggle over the future of intellectual commerce in the United States, apparently even evocations of Joseph Goebbels and the Brown Shirts are considered fair game.  

"From Amazon’s point of view, the cover is incorrect even if you disregard any Nazi connotations. What would be the use to Amazon of a charred hulk? It does not want to destroy the book business, but simply to reinvent it — or, as its opponents would have it, seize control of it. (Amazon declined to comment)" (http://bits.blogs.nytimes.com/2012/02/08/amazon-up-in-flames/?hp, accessed 02-08-2012).

View Map + Bookmark Entry

The ILAB Launches a Mobile App March 2012

The International League of Antiquarian Booksellers (ILAB) launched a mobile app for iPhone and Android.

View Map + Bookmark Entry

After Digitizing Over 20 Million Books Expansion of the Google Books Project Begins to Slow March 9, 2012

"Google has been quietly slowing down its book-scanning work with partner libraries, according to librarians involved with the vast Google Books digitization project. But what that means for the company's long-term investment in the work remains unclear.  Google was not willing to say much about its plans. 'We've digitized more than 20 million books to date and continue to scan books with our library partners,' a Google spokeswoman told The Chronicle in an e-mailed statement.  

"Librarians at several of Google's partner institutions, including the University of Michigan and the University of Wisconsin systems, confirmed that the pace has slowed. 'They're still scanning. They're scanning at a lower rate than the peak,' said Paul N. Courant, Michigan's dean of libraries.  

"At Wisconsin, the scanning pace is 'something less than half of what it was' in 2006, the year the work started there, said Edward V. Van Gemert, the university's interim director of libraries.  

"Wisconsin's agreement with Google stipulated that the scanning would continue for at least six years or until half a million works had been digitized. 'We anticipated this slowdown,' he said" (The Chronicle of Higher Education, March 9, 2012, accessed 01-27-2013).

View Map + Bookmark Entry

The Encyclopaedia Britannica Ends Print Publication March 14, 2012

Jorge Cauz, president of Encyclopaedia Britannica, Inc. in Chicago, announced that after 244 years of print publication the 2010 edition of the Encyclopaedia Britannica, in 32 printed volumes, containing 44 million words, and weighing 129 pounds, would be the last printed edition.

"The oldest continuously published encyclopedia in the English language, the Encyclopaedia Britannica has become a luxury item with a $1,395 price tag; it is typically purchased by embassies and well-educated, upscale consumers who feel an attachment to the set of bound volumes. Only 8,000 sets of the 2010 edition have been sold, and the remaining 4,000 have been stored in a warehouse until they can be purchased."

"Sales of Encyclopaedia Britannica peaked in 1990, when 120,000 sets were sold in the United States. But now print encyclopedias account for less than 1 percent of Encyclopaedia Britannica’s revenues. About 85 percent of revenues come from selling curriculum products in subjects like math, science and the English language; the remainder comes from subscriptions to the Web site, the company said.  

"About half a million households pay a $70 annual fee that includes access to the full database of articles, videos, original documents and access to mobile applications. A selection of articles is also available free on the Web site, said Peter Duckler, a spokesman for Britannica" (http://mediadecoder.blogs.nytimes.com/2012/03/13/after-244-years-encyclopaedia-britannica-stops-the-presses/?hp, accessed 03-13-2012).

View Map + Bookmark Entry

Nearly 50% of U.S. Mobile Subscribers Own Smartphones March 29, 2012

According to a Nielsen report accessed on March 29, 2012, 49.7 percent of U.S. mobile subscribers owned smartphones as of February 2012, up from 36 percent a year earlier. Two-thirds of those who acquired a new phone in the previous three months chose a smartphone over a feature phone. Android-based phones led the U.S. smartphone market with a 48 percent share, Apple's iPhone had 32 percent, and BlackBerry had 11.6 percent.


(http://www.technolog.msnbc.msn.com/technology/technolog/half-us-cellular-subscribers-own-smartphones-nielsen-586757, accessed 03-29-2012).

View Map + Bookmark Entry

U.S. Justice Department Sues Major Publishers Over the Pricing of eBooks; Amazon Wins April 12, 2012

“ 'Amazon must be unbelievably happy today,' said Michael Norris, a book publishing analyst with Simba Information. 'Had they been puppeteering this whole play, it could not have worked out better for them.'

"Amazon, which already controls about 60 percent of the e-book market, can take a loss on every book it sells to gain market share for its Kindle devices. When it has enough competitive advantage, it can dictate its own terms, something publishers say is beginning to happen.  

"The online retailer declined to comment Wednesday beyond its statement about lowering prices. Asked last month if Amazon had been talking to the Justice Department about the investigation — a matter of intense speculation in the publishing industry — a spokesman, Craig Berman, said, 'I can’t comment.'  

"Traditional bookstores, which have been under pressure from the Internet for years, fear that the price gap between the physical books they sell and e-books from Amazon will now grow so wide they will lose what is left of their market. Barnes & Noble stores, whose Nook is one of the few popular e-readers that is not built by Amazon, could suffer the same fate, analysts say.  

“ 'To stay healthy, this industry needs a lot of retailers that have a stake in the future of the product,' Mr. Norris said. 'The bookstore up the street from my office is not trying to gain market share. They’re trying to make money by selling one book at a time to one person at a time.'

"Electronic books have been around for more than a decade, but took off only when Amazon introduced the first Kindle e-reader in 2007. It immediately built a commanding lead. The antitrust case had its origins in the leading publishers’ struggle to control the power of Amazon, which at one point had 90 percent of the market.  

"Apple’s introduction of the iPad in early 2010 seemed to offer a way to combat Amazon" (http://www.nytimes.com/2012/04/12/business/media/amazon-to-cut-e-book-prices-shaking-rivals.html?_r=1&hp, accessed 04-12-2012).

View Map + Bookmark Entry

The First Pulitzer Prize Awarded to an Internet-Only Publication April 16, 2012

Columbia University announced that the 96th annual Pulitzer Prize "For a distinguished example of reporting on national affairs, using any available journalistic tool" was awarded to Huffington Post reporter David Wood "for his riveting exploration of the physical and emotional challenges facing American soldiers severely wounded in Iraq and Afghanistan during a decade of war."

Wood's series, Beyond the Battlefield, was characterized by the Huffington Post as "an exploration of the physical and emotional challenges, victories and setbacks that catastrophically wounded soldiers encounter after returning home."

"In recent years, the Pulitzer board has bestowed honors on newer outlets, such as ProPublica, a nonprofit newsroom that often teams up with established news organizations, and PolitiFact, a project of the Tampa Bay Times. Politico, a five-year-old newspaper and web site, took home its first Pulitzer prize Monday for Matt Wuerker's editorial cartoons. Still, a win in national reporting by an online-only news site is a departure from the typical list of legacy news outlets who clean up at the Pulitzers year after year" (http://www.huffingtonpost.com/2012/04/16/huffington-post-pulitzer-prize-2012_n_1429169.html, accessed 04-18-2012).


View Map + Bookmark Entry

Improving the Research Potential of ESTC April 17, 2012

Mr Brian Geiger

Center for Bibliographical Studies & Research
University of California, Riverside
Highlander Hall, Building B, Room 114
Riverside, California 92507

17 April 2012

Dear Mr Geiger

Improving the research potential of ESTC: consultation: A modest suggestion by William St Clair

I welcome the opportunity extended through SHARP L to offer suggestions for making ESTC more useful for researchers in the 21st century. Much of future research will, we can be confident, take the form of quantitative analysis, not necessarily just for checking of historical records, but for identifying trends, trying to recover the reading of the past, generating explanatory models and so on. The current suggestions for improving the interrogability of the present resource are to be welcomed.

However, I suggest, if we want to make the ESTC a research tool for the more ambitious questions, as is the stated aim, then in my view, the proposed changes will make only a limited contribution. Although in the English-speaking world, we have excellent catalogues and bibliographies, to my mind, the empirical factual basis on which those who attempt to address ambitious questions are reliant is seriously inadequate. Indeed, I suggest, the extent of the present inadequacy of data would not be tolerated by those familiar with the standards applicable and expected in the sciences and social sciences.

The biggest weakness for anyone attempting a history of books, or of the book industry, or of reading, is that 'titles' is a poor measure of book production. What we need, for a start, if we want to map the material extent of past production, are figures for print runs and sales, and also for price, as a good indicator of potential access, plus explanatory economic models for the various governing regimes [guild system, perpetual copyright, pirate and offshore, and so on]. We also need to develop formal ways of recognising and offsetting the inadequacies of the patchy archival record. Draft proposals for an ambitious project that would enable this kind of step change improvement in the data to be made have been prepared. If they are proceeded with, it will however take time before results become available and we see the benefits.

However, there is a notable weakness in the current situation that can be easily addressed and remedied, and that would be a helpful step in the right direction of making ESTC a more useful research tool. ESTC should, I suggest, consider the suggestion that I have made in print and in lectures on a number of occasions, most fully in my chapter in The Cambridge History of the Book in Britain, volume 6.

ESTC is, in a way, a victim of its own success. For the ready availability of lists of titles has fostered an illusion of completeness and that has led to users attempting to use it for purposes for which it is inadequate. Indeed the consultation document that has been circulated helps to prolong the illusion. I quote: 'The English Short Title Catalogue (ESTC) is a union catalog and bibliography of everything printed between 1473 and 1800 in England and its former colonies or in the English language elsewhere.' In fact, as I understand the situation, ESTC is a union catalogue of titles of which at least one copy is known to have survived somewhere in the world?

That is not a debating point. It has long been known that the survival rate of books and print from the early centuries is badly incomplete. And this is not just a general common sense understanding. We have good evidence of the large scale of the losses. D. F. McKenzie’s observation that the size of the English printing industry, as measured by the physical capital (presses) and personnel employed (apprentices and printers) scarcely changed between the mid sixteenth and late seventeenth century can only be squared with the sharp rise in surviving titles over the same period by postulating either that a high proportion of the industry was maintained in unemployment or that more output occurred than has survived, or some combination. [In CHBB, iv,17]. Since, until the early eighteenth century, the English state attempted to control the texts that were permitted to circulate within its jurisdiction not only by an array of direct textual controls but also by limiting the capacity of the industry, measured not by titles but by numbers of printed sheets, it is highly unlikely that a huge proportion of industrial capacity was kept in idleness or in reserve.

And we know the titles of many of these lost printed books. It has been known, at least since 1875, that the Stationers’ Company register included only a proportion of titles of which copies survive. [E Arber, A Transcript of the Registers of the Company of Stationers of London 1554-1640 volume 1]. The finding in my book The Reading Nation in the Romantic Period, 2006, 74-75, and 495-496, not challenged as far as I know, that large numbers of abridged ‘ballad versions’ of biblical stories were officially permitted until a sudden stop around 1600 depended upon my taking account of these lost, but registered, pieces of printed literature by scrutinising the registers myself. This result, and there are others, could not have emerged by interrogating ESTC nor would the current proposals to improve interrogability help. It is simply inadequate.

And it is not only in the early centuries of print that lists of titles known to survive are inadequate. For the eighteenth century, the survival rate of books known to have been produced, for example by being listed in advertisements, looks good for expensive books, patchy for some genres such as novels, and extremely poor for cheaper print. [Especially the two Dicey catalogues. Discussed in Reading Nation 340, and there appear to have been more cheap reprints of titles that entered the newly created public domain in the period after 1774 than are listed]. Soon after 1800, the cut-off date of ESTC, when we move to the age of stereotyping, 'titles' is such a poor indicator of production as to be of little value, and the survival rate for cheaper print known to have existed is even poorer. ESTC cannot be held responsible if others misuse the information. But already it is used to produce bogus statistics, even for titles, even for trends. Indeed, in some books and articles, the software that enables graphs, pie charts, bar charts and so on to be easily produced has given a spurious pseudo-scientific plausibility to results whose factual basis is simply not able to support them.

There are no conceptual or methodological problems in my modest proposal for including lost books in ESTC. Provided the results can be aggregated with ESTC, they could be kept separate. The resources needed are largely clerical, potentially realisable by crowd sourcing. The potential improvement in the quality of research would, I suggest, be highly cost effective.

Yours sincerely
William St Clair

(source, accessed 04-23-2012).

View Map + Bookmark Entry

"Companies that have existed for centuries could be gone in a generation unless they make a single radical change." April 18, 2012

"Publishers who want to stay in business are going to have to start selling books without digital rights management, says science fiction author Charlie Stross. DRM locks customers into individual ebookstores and devices, which is the primary way that Amazon perpetuates its stranglehold on this market.

"For AMZN, the big six insistence on DRM on ebooks was a windfall: it made the huge investment in the Kindle platform worthwhile, and by 2010 Amazon had come close to an 85% market share in the ebook sector (which was growing at a dizzying compound rate of 100-200% per annum, albeit from a small base). And now we get to 2012, and ebooks are likely to hit 40% of total publishing sales by the end of this year, and are on the way to 60% within five years (per Tim Hely Hutchinson, CEO of Hachette UK). In five years, we've gone from <1% to >40%. That's disruption for you!  

"As an author, it's theoretically in Stross's interest to maintain the DRM business as usual. But he argues, and I think recent history is on his side, that "the real driver for piracy is the lack of convenient access to desirable content at a competitive price."  

"As one of only two publishers who have decided not to settle with the DOJ in its case against them (the other is Penguin), Macmillan is now in a unique position: It's a large, profitable company that is willing to experiment, but also inherently conservative precisely because of its success.  

"I single out Macmillan because -- full disclosure -- I collaborated on new projects with the then-head of Macmillan US when I was an editor at Scientific American. It strikes me as exactly the sort of organization that is teetering on the edge of being the first to do the radical thing that's required to save itself -- namely, eliminate DRM from its ebooks and therefore destroy the monopsony that Amazon will otherwise cement.

"When I wrote last week that I didn't think it particularly mattered whether or not Amazon became an e-book monopoly, because straight text is the most platform-independent kind of content in existence, I forgot that we still live in a world in which books purchased from Amazon can only be read on Amazon's devices and apps.  

"It's abundantly clear that publishers that survive in an Amazon world will be those who disrupt Amazon itself. If Amazon's aim is to "cut out the middleman" then the next logical step is for publishers to cut out the middleman that is Amazon.

"Stross lays it out in stark terms: And so [publishers] will deep-six their existing commitment to DRM and use the terms of the DoJ-imposed settlement to wiggle out of the most-favoured-nation terms imposed by Amazon, in order to sell their wares as widely as possible.  

"If they don't, they're doomed.  

"There is one other outcome that is possible, and unfortunately for existing publishers, I think it's the most likely: New publishing companies will spring up that are willing to publish books sans DRM. This will lead to (some) piracy but will also return to these companies the power to price their wares as they see fit. These companies will also, incidentally, not be saddled by the legacy costs of existing publishers. And in this way companies that have existed for centuries will be radically transformed -- or else cease to exist" (http://www.technologyreview.com/blog/mimssbits/27769/?nlid=nldly&nld=2012-04-18, accessed 04-18-2012).

View Map + Bookmark Entry

Massive Thefts from the Girolamini Library in Naples; Auction Aborted April 19, 2012

On April 19, 2012 the Biblioteca dei Girolamini (Biblioteca statale oratoriana del monumento nazionale dei Girolamini), the oldest library in Naples, opened to the public in 1586, and an Italian national treasure, was impounded by the Italian police because of mismanagement and thefts.

"Girolamini Library’s Disappearing Books

"Two thousand intellectuals protest at director, a self-styled prince with no degree

"Would you entrust the contents of one of Italy’s – and the world’s – richest libraries to a self-styled “prince doctor” who is neither a prince nor a graduate? Yet that’s just what has happened. The “nobleman” in question is in charge, with ministerial approval, of Naples’ historic Girolamini library, where Giambattista Vico once ruminated. And when hundreds of academics raised the alarm in the press, said nobleman rushed to report the theft of a shedload of books.  

"It all started a couple of weeks ago. Florence-born Tomaso Montanari, who teaches history of modern art at Naples’ Federico II university and wrote a book called A che serve Michelangelo? [What’s the Point of Michelangelo?] advancing serious doubts on the attribution to the Renaissance genius of a crucifix purchased by the Berlusconi government for more than €3 million, wrote a piece for Il Fatto newspaper. Montanari said he had visited the Girolamini library, which holds over 150,000 ancient manuscripts and books, and found an appalling dust-layered mess with invaluable tomes lying on the floor and empty Coca-Cola cans on the ancient reading desks. Professor Montanari wrote: “The library is closed today because it has to be reorganised, says Fr Sandro Marsano, the enthusiastic, exquisitely polite Oratorian priest who welcomes visitors to the stupendous 17th-century complex. No, it’s closed because of the strange goings-on, say people who live nearby and mutter about heavily laden vehicles leaving the library courtyards late at night”.  

"The piece was a headline-grabber, not least because Montanari listed the question marks hanging over the new director, “Professor” Marino Massimo De Caro: “Whatever the case, it’s beyond belief that one of Italy’s great cultural shrines should be entrusted to a denizen of the ‘undergrowth’ described by Ferruccio Sansa and Claudio Gatti in their recently published book. De Caro is the middle man in the Venezuelan oil affair, ‘one of the most spectacular instances of convergence between Berlusconi supporters and D’Alema’s group’”. De Caro is also honorary consul for Congo, former assistant of Senator Carlo Corbinelli, former head of PR in north-eastern Italy for the public-sector pension fund INPDAP, executive vice-president from 2007 to 2010 of wind farm and solar energy firm Avelar Energia, owned by Russian oligarch Victor Vekselberg, former owner of an antiquarian bookshop in Verona, and former partner in the Buenos Aires antiquarian bookshop Imago Mundi owned by Daniel Guido Pastore, himself involved in Spain in inquiries into the theft of books from the national library in Madrid and the Zaragoza library.  

"De Caro entered ministry circles thanks to Giancarlo Galan, as a note from the ministry reveals: “Dr. Marino Massimo De Caro was invited to collaborate with the ministry by Minister Giancarlo Galan on 15 April 2011 as an expert consultant on issues concerning relations with the business system in the arts and publishing sectors, and on topics relating to the implementation of regulations concerning authorisation to build and operate facilities for the production of energy from renewable sources, and their appropriate insertion into the landscape. On 15 December 2011, Minister Lorenzo Ornaghi confirmed Dr. Marino Massimo De Caro’s appointment, along with those of other advisers to Minister Galan, as an expert consultant on issues concerning relations with the business system in the arts and publishing sectors”.  

"Here is a passage from Gatti and Sansa’s book Il sottobosco [The Undergrowth] referring to a phone tap: “On 27 December 2007, De Caro complained about a Carabinieri captain from the artistic heritage unit in Monza who was ‘bothering’ him about a book purchased at a public auction in Switzerland”. He is under investigation for handling stolen goods, he says, and this has hampered his appointment as honorary consul of Congo since the foreign ministry will not grant approval. (...) On 17 July 2009, De Caro was finally able to relax when Milan deputy public prosecutor Maria Letizia Mannella ‘established that the incunabulum has not been physically recovered, despite repeated searches’, and found there was no case to answer. In other words, since the allegedly stolen goods could not be traced and the three individuals involved were accusing each other, the prosecutor decided no further action need be taken”. No further action. But among all the candidates, were there none with an unblemished record to direct a library whose ancient books had already been ransacked in past decades?

"The day after Montanari’s protest, De Caro explained to the Corriere del Mezzogiorno that his CV was kosher: “I graduated from Siena and I taught history and technology of publishing on the master’s course at the University of Verona”. He added: “I consulted for Cardinal Mejia, the Vatican librarian, I published a book on Galileo and I was director of the library at Orvieto cathedral”. De Caro went on to explain to Il Mattino newspaper: “My grandfather’s godfather was Benedetto Croce. My family, which passed down the title of Princes of Lampedusa, merged with the famous Tomasis thus becoming di Lampedusa, something we are proud of”.  

“Goodness gracious me!” might have been the reaction of comedian Totò, who himself claimed the title His Imperial Highness Antonio Porfirogenito, descended from Costantinople’s Focas dynasty, Angelo Flavio Ducas Comneno of Byzantium, prince of Cilicia, Macedonia, Dardania, Thessaly, Pontus, Moldava, Illyria and the Peloponnese, Duke of Cyprus and Epirus, Count and Duke of Drivasto and Durazzo. “Not true” came the reply the next day, again in Il Mattino, from the real Prince Gioacchino Lanza Tomasi: “The librarian’s assertions about his descent from the princes of Lampedusa are fabrications. The title of prince of Lampedusa was granted by Charles II of Spain to Ferdinando Tomasi in 1667. The Caros therefore have no claim whatsoever to the title of prince of Lampedusa. ... Our egregious librarian should have all this at his fingertips. And I would advise the prior of the Girolamini to keep a close eye on an archivist who prefers a shared surname to supporting documentation”.  

"OK, then, but he’s still a professor. That’s what it says in a press release from Il Buongoverno, a national association established in Milan and “chaired by Senator Riccardo Villari, with Marcello Dell’Utri as honorary national chair. The secretary is Senator Salvatore Piscitelli. (...) National organising secretary is Professor Marino Masimo De Caro”. Goodness gracious me again! It’s a pity that even though official ministerial notes and statements repeatedly refer to him as “doctor”, De Caro never actually graduated from the University of Siena, where he enrolled as a law student in 1992-93 and remained a student until 2002. Nor does the computer at the University of Verona have the least record of our hero’s having taught there.

"But the funniest part of the story comes last. Even before all the tweaks were applied to his self-celebratory CV, hundreds of intellectuals were signing an appeal to the minister Lorenzo Ornaghi to ask him how a library as important as the Girolamini could be entrusted to “a man bereft of even the minimum academic qualifications or professional competence to honour the role”. By yesterday evening, this devastating denunciation had attracted just under two thousand signatures, including those of Marcello De Cecco, Ennio Di Nolfo, Dario Fo, Franca Rame, Carlo Ginzburg, Salvatore Settis, Tullio Gregory, Gustavo Zagrebelsky, Gioacchino Lanza Tomasi, Adriano La Regina, Gian Giacomo Migone, Alessandra Mottola Molfino (president of Italia Nostra), Lamberto Maffei (president of the Accademia dei Lincei), Dacia Maraini, Stefano Parise (president of the Italian library association), Stefano Rodotà and Rosario Villari among others.  

"Well, on the very morning when these intellectuals were making their reservations public, “Doctor” “Prince” “Professor” Marino Massimo De Caro turned up at the public prosecutor’s office to present formal notification of a crime. He had just realised that one thousand five hundred books were missing from the library" (http://www.corriere.it/International/english/articoli/2012/04/17/girolamini.shtml)

On May 9, 2012 the book auction house Zisska & Schauer in Munich, Germany, published the following statement on their website concerning their auction to be held that day: 

"Zisska & Schauer regrets to announce that the following lots registered under ownership numbers 4 and 132 of the present Auction Sale No. 59 have been withdrawn until recently expressed ownership concerns can be satisfactorily resolved: 79, 80, 81, 82, 83, 84, 85, 88, 89, 90, 93, 94, 95, 96, 97, 98, 99, 100, 101, 103, 105, 106, 107, 108, 110, 111, 112, 115, 116, 118, 119, 120, 121, 121, 122, 123, 124, 125, 126, 128, 129, 130, 131, 132, 133, 137, 138, 139, 140, 141, 143, 144, 145, 147, 149, 151, 156, 157, 164, 175, 176, 178, 179, 180, 184, 185, 189, 194, 195, 196, 198, 202, 207, 210, 212, 213, 216, 217, 218, 221, 222, 224, 225, 226, 227, 232, 235, 237, 243, 244, 245, 246, 247, 249, 250, 251, 253, 256, 258, 261, 264, 265, 266, 270, 271, 277, 282, 283, 289, 297, 307, 308, 309, 310, 311, 316, 317, 320, 322, 325, 328, 329, 333, 336, 340, 341, 342, 346, 348, 349, 350, 351, 352, 353, 354, 357, 355, 356, 358, 363, 364, 366, 367, 374, 380, 382, 383, 384, 388, 393, 400, 402, 404, 407, 409, 414, 415, 416, 419, 420, 421, 422, 423, 424, 425, 428, 429, 433, 442, 443, 444, 447, 448, 449, 450, 451, 452, 453, 456, 459, 460, 462, 466, 467, 470, 471, 473, 476, 477, 479, 480, 489, 506, 507, 508, 509, 512, 513, 514, 515, 518, 525, 529, 530, 532, 533, 534, 536, 537, 539, 541, 546, 547, 548, 549, 551a, 552, 553, 556, 558, 559, 560, 561, 563, 564, 565, 566, 567, 568, 571, 572, 573, 577, 579, 580, 582, 588, 589, 591, 598, 599, 600, 601, 605, 607, 608, 619, 620, 627, 630, 636, 643, 657, 659, 660, 661, 662, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 678, 679, 680, 681, 682, 684, 685, 686, 687, 688, 689, 693, 695, 696, 697, 699, 700, 701, 702, 703, 704, 707, 709, 710, 712, 713, 715, 716, 717, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 730, 735, 736, 738, 741, 754, 757, 763, 794, 795, 801, 815, 816, 840, 857, 858, 860, 864, 878, 891, 896, 906, 911, 918, 919, 920, 925, 926, 927, 950, 955, 957, 959, 960, 974, 975, 976, 977, 985, 988, 989, 994, 998, 999, 1040, 1753, 1973, 1980, 
2001, 2020, 2038, 2049, 2051, 2055, 2063, 2065, 2068, 2069, 2070, 2076, 2081, 2088, 2098, 2099, 2101, 2103, 2105, 2108, 2118, 2120, 2121, 2124, 2135, 2248, 2255, 2304, 2306, 2312, 2320, 2324, 2370, 2373, 2376, 2378, 2379, 2384, 2386, 2390, 2398, 2401, 2575, 2586, 2589, 2591, 2593, 2594, 2595, 2597, 2603, 2642, 2663, 2666, 2676, 2682, 2686, 2688, 2704, 2707, 2709, 2711, 2713, 2719, 2721, 2723, 2735, 2748, 2775, 2776, 2780, 2782, 2787, 2796, 2797, 2804, 2818, 2820, 2831, 2846, 2847, 2850, 2854, 2855, 2856, 2860, 2861, 2863, 2864, 2866, 2867, 2869, 2870, 2880, 2887, 2888, 2892, 2893, 2897, 2898, 2899, 2900, 2902, 2904, 2914, 2919, 2921, 2944, 2945, 2947, 2950, 2952, 2956, 2957, 2958, 2960, 2963, 2965, 2968, 2969, 2970, 2974, 2977, 2981, 2987, 2988, 2989, 2994, 2999, 3002, 3003, 3032 and 3053."

Provenance information had been removed from roughly 500 books in this auction, clumsily and in haste, to the point of defacing some of the volumes; it was believed that they had been stolen from the Girolamini Library in Naples.

View Map + Bookmark Entry

Using a Densitometer to Measure Usage of Medieval Books of Hours April 23, 2012

On April 23, 2012 the website of the University of St. Andrews in Scotland published an article entitled Dirty books reveal secret lives of people living in mediaeval times. This article described a technique invented by Kathryn Rudy, lecturer in the School of Art History at St. Andrews, of using a densitometer to measure the dirt levels on pages of medieval books of hours, showing which pages were read most often, as indicated by the dirty residue left behind.

"Dr Rudy’s new technique with the machine, used on mediaeval prayer books, has shown people were as self-interested and as afraid of illness as today.  

"The ground-breaking research has even managed to pinpoint the moment that people fell asleep reading the same book.  

"For example one of the dirtiest pages in a selection of European religious books was a prayer to St Sebastian who was often prayed to because his arrow-wounds (the cause of his martyrdom) looked like the bubonic plague.

"This shows us that the reader of the book was terrified of the plague and repeated the prayer to ward off the disease.  

"Similarly pages which contained the prayers for the salvation of others were less dirty than those asking for salvation for oneself.  

"As well as demonstrating mediaeval people prayed for their own assistance, the analysis showed the pages of a prayer to be said in the small hours of the morning were only dirty for the first few pages.  

"Dr Rudy extrapolates that it shows most readers fell asleep at the same point.  

"She said: 'Although it is often difficult to study the habits, private rituals and emotional states of people, this new technique can let us into the minds of people from the past.  

“ 'Religion was inseparable from physical health, time management, and interpersonal relationships in mediaeval times. In the century before printing, people ordered tens of thousands of prayer books—sometimes quite beautifully illuminated ones—even though they might cost as much as a house.  

“ 'As a result they were treasured, read several times a day at key prayer times, and through analysing how dirty the pages are we can identify the priorities and beliefs of their owners' " (http://www.st-andrews.ac.uk/news/archive/2012/Title,85210,en.html, accessed 06-23-2012).
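The core of the method, ranking pages by the relative darkness of the regions readers touched, can be illustrated with a toy sketch. The page data, function names, and the bottom-margin heuristic below are invented for illustration; the actual study used calibrated densitometer readings of the page corners and margins.

```python
# Toy sketch: rank the pages of a digitized book by how dark (dirty) their
# bottom margins are, a rough proxy for how often readers handled them.
# Grayscale values run 0 (black) to 255 (white); all data here is invented.

def margin_darkness(page, margin=2):
    """Mean darkness (0-255) of the page's bottom margin rows."""
    rows = page[-margin:]
    values = [v for row in rows for v in row]
    return 255 - sum(values) / len(values)  # higher = dirtier

def rank_pages_by_use(pages):
    """Return page indices ordered from most- to least-handled."""
    scores = [(margin_darkness(p), i) for i, p in enumerate(pages)]
    return [i for _, i in sorted(scores, reverse=True)]

# Three 4x4 "pages"; page 1 has a darkened bottom margin.
clean = [[250] * 4 for _ in range(4)]
dirty = [[250] * 4 for _ in range(2)] + [[120] * 4 for _ in range(2)]
book = [clean, dirty, clean]
print(rank_pages_by_use(book))  # page 1 ranks first
```

On real scans the same comparison would be made between, say, pages carrying prayers for oneself and pages carrying prayers for others, as in Rudy's findings above.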

View Map + Bookmark Entry

During Testimony over a Phone Hacking Scandal Rupert Murdoch Predicts the End of Print News Media April 26, 2012

"After a day of testimony at a British judicial inquiry over his ties, friendships and disputes with British politicians, Rupert Murdoch returned to the witness stand on Thursday, saying he apologized for failing to take measures to avert the hacking scandal that has convulsed his media outpost here [in Britain]."

"At times contrite and occasionally somewhat testy, Mr. Murdoch became more ruminative and discursive when he was allowed to dwell at some length on the future of the printed word, pondering not only the destiny of his own newspapers but, as if addressing a seminar rather than an inquiry, also ranging over the broader issue of the future of the press in the digital era.

"The day would come, he said, when the news business would be 'purely electronic' in five, 10 or 20 years" (http://www.nytimes.com/2012/04/27/world/europe/rupert-murdoch-testimony-leveson-inquiry-day-2.html?hp, accessed 04-26-2012).

View Map + Bookmark Entry

Microsoft Invests in Barnes & Noble's Nook eBook Reader Division April 30, 2012

On April 30, 2012 Microsoft announced that it would invest $300 million in Barnes & Noble’s Nook division for a 17.6 percent stake. The deal valued Barnes & Noble’s eBook reader business at $1.7 billion, nearly double the entire company’s market capitalization on Friday, April 27, and more than Barnes & Noble had been valued at any time since mid-2008.

Barnes & Noble, the largest bookselling chain in the United States, wagered heavily on the Nook, competing against Amazon’s Kindle and Apple's iPad.

"The Nook division’s growth has come at enormous financial cost, weighing down on Barnes & Noble’s bottom line and prompting the strategic review. The retailer added on Monday that it was still weighing other options for the business.

"Through the deal, the two companies will settle their patent disputes, and Barnes & Noble will produce a Nook e-reading application for the forthcoming Windows 8 operating system, which will run on traditional computers and tablets.  

"The new division, which has yet to be renamed, will also include Barnes & Noble’s college business. It is meant to help the business compete in what many expect to be a growth area for e-books: the education market, something that Apple has already set its sights on.  

"The new company and 'our relationship with Microsoft are important parts of our strategy to capitalize on the rapid growth of the Nook business, and to solidify our position as a leader in the exploding market for digital content in the consumer and education segments,' William J. Lynch Jr., Barnes & Noble’s chief executive, said in a statement" (http://dealbook.nytimes.com/2012/04/30/microsoft-to-take-stake-in-barnes-nobles-nook-unit/?hp, accessed 04-30-2012).

View Map + Bookmark Entry

What Makes Spoken Lines in Movies Memorable? April 30, 2012

On April 30, 2012 Cristian Danescu-Niculescu-Mizil, Justin Cheng, Jon Kleinberg, and Lillian Lee of the Department of Computer Science at Cornell University published "You had me at hello: How phrasing affects memorability," arXiv:1203.6360v2 [cs.CL] 30 Apr 2012 (accessed 01-27-2013). Treating sentences that endure in the public mind as evolutionary success stories, comparing "the fitness of language and the fitness of organisms," they took the "memorable quotes" selected from the Internet Movie Database (IMDb), together with the number of times each line appeared on the Internet, and compared the memorable lines to the complete scripts of the movies in which they appeared, about 1,000 movies in all.

"To train their statistical algorithms on common sentence structure, word order and most widely used words, they fed their computers a huge archive of articles from news wires. The memorable lines consisted of surprising words embedded in sentences of ordinary structure. 'We can think of memorable quotes as consisting of unusual word choices built on a scaffolding of common part-of-speech patterns,' their study said.  

"Consider the line 'You had me at hello,' from the movie 'Jerry Maguire.' It is, Mr. Kleinberg notes, basically the same sequence of parts of speech as the quotidian 'I met him in Boston.' Or consider this line from 'Apocalypse Now': 'I love the smell of napalm in the morning.' Only one word separates that utterance from this: 'I love the smell of coffee in the morning.'

"This kind of analysis can be used for all kinds of communications, including advertising. Indeed, Mr. Kleinberg’s group also looked at ad slogans. Statistically, the ones most similar to memorable movie quotes included 'Quality never goes out of style,' for Levi’s jeans, and 'Come to Marlboro Country,' for Marlboro cigarettes.  

"But the algorithmic methods aren’t a foolproof guide to real-world success. One ad slogan that didn’t fit well within the statistical parameters for memorable lines was the Energizer batteries catchphrase, 'It keeps going and going and going.'

"Quantitative tools in the humanities and the social sciences, as in other fields, are most powerful when they are controlled by an intelligent human. Experts with deep knowledge of a subject are needed to ask the right questions and to recognize the shortcomings of statistical models.  

“ 'You’ll always need both,' says Mr. [Matthew] Jockers, the literary quant. 'But we’re at a moment now when there is much greater acceptance of these methods than in the past. There will come a time when this kind of analysis is just part of the tool kit in the humanities, as in every other discipline' " (http://www.nytimes.com/2013/01/27/technology/literary-history-seen-through-big-datas-lens.html?pagewanted=2&_r=0&nl=todaysheadlines&emc=edit_th_20130127, accessed 01-27-2013).
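The study's central finding, that memorable lines pair unusual words with common syntactic scaffolding, suggests a simple lexical-surprisal baseline: score a line by how improbable its words are under a background language model. The sketch below is not the paper's model (which also scored part-of-speech patterns against a large newswire corpus); the miniature background corpus and the function name are invented for illustration.

```python
from collections import Counter
import math

# Stand-in "newswire" background corpus (the real study used a huge one).
background = ("i met him in boston i love the smell of coffee in the "
              "morning it keeps going and going").split()
counts = Counter(background)
total = len(background)

def surprisal(line, alpha=1.0):
    """Mean per-word negative log-probability under a smoothed unigram
    model; higher means more lexically surprising."""
    vocab = len(counts) + 1  # +1 leaves probability mass for unseen words
    words = line.lower().split()
    logp = sum(math.log((counts[w] + alpha) / (total + alpha * vocab))
               for w in words)
    return -logp / len(words)

# Same syntactic frame, one unusual word: the napalm line scores higher.
common = "i love the smell of coffee in the morning"
memorable = "i love the smell of napalm in the morning"
print(surprisal(memorable) > surprisal(common))  # True
```

Under this toy model the two lines differ only in the probability of "napalm" versus "coffee", which is exactly the kind of contrast the Cornell group measured at scale.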

View Map + Bookmark Entry

Digitizing the Oldest Monastic Library & The Oldest Continuously Operating Library May 2012 – March 31, 2016

"St. Catherine’s Monastery is going digital. The monastery that claims to be the oldest in the world — not destroyed, not abandoned in 17 centuries — has begun digitizing its ancient manuscripts for the use of scholars. A new library to facilitate the process is about five years away.  

"The librarian, Father Justin, says the monastery’s library will grow an internet database of first-millennium manuscripts, which up until now have been kept under lock and key. Should a scholar want a manuscript, they need only email Father Justin.  

“ 'And if I don’t have book but see a reference, I can email a friend in Oxford. They can scan and send it the next day,' he says.  

"Still, as natural and inevitable as it sounds, that’s quite the sea change. Just 10 years ago, bad phone lines made it hard to connect a call with the monastery. One hundred years ago, it took 10 days to travel from Suez with a caravan of camels. And when I arrive unheralded, having not even called ahead, a monk shades his eyes, shakes head and — at first — says he will not introduce me to Father Justin.  

“ 'What if we said 'yes' to every reporter and scholar that came here? Everyone wants our time. But what about our own work?' he asks.  

"Not many of the 25 monks cloistered at the Sacred and Imperial Monastery of the God-Trodden Mount of Sinai have email addresses, or operate Mac G5 computers, or know their megapixel from their leviathan. Father Justin Sinaites is a native of Texas. He wears a black habit and a beard to his chest, and ties his long hair back in a ponytail. He is over six feet tall. When he stands, he keeps his arms ramrod straight at his sides.

"Every morning he attends the 4:30 am service — which has not changed its liturgy since AD 550 — and then climbs six flights of stairs to his office in the east wing of the three-story administrative building forming the back wall of St. Catherine’s Monastery. He powers up the G5 and passes the morning making digital photographs of scripture written on papyrus, written on animal hide and written with ink made from oak tree galls.  

“ 'It’s amazing, the juxtaposition,' is how he puts it.  

"A page that may have taken a bent-backed monk weeks to illuminate is clamped under the bellows of the 48MP CCD camera. Snap. Next page. It takes three or four days to do a whole book. There are about 3,300 manuscripts. . . ." (http://www.egyptindependent.com/news/st-catherine-monastery-seeks-permanence-through-technology, accessed 05-29-2012).

By Kathy Brown on Mar 31, 2016

"The Ahmanson Foundation has awarded a major grant to the UCLA Library to fund key aspects of the Sinai Library Digitization Project.  This major project – initiated by the fathers of St. Catherine’s Monastery of the Sinai, Egypt, and made possible through the participation of the UCLA Library and the Early Manuscripts Electronic Library (EMEL) – will create digital copies of some 1,100 rare and unique Syriac and Arabic manuscripts dating from the fourth to the seventeenth centuries.

"A UNESCO World Heritage site located in a region of the Sinai Peninsula sacred to three world religions – Christianity, Islam, and Judaism - St. Catherine’s Monastery houses a collection of ancient and medieval manuscripts second only to that of the Vatican Library. Access to these remarkable materials has often been difficult, and now all the more so due to security concerns in the Sinai Peninsula.

“The manuscripts at St. Catherine’s are critical to our understanding of the history of the Middle East, and every effort must be made to digitally preserve them in this time of volatility. The Ahmanson Foundation’s visionary support honors the careful stewardship of St. Catherine’s Monastery over the centuries and ensures that these invaluable documents are not only accessible, but preserved in digital copies,” said UCLA University Librarian Ginny Steel.

“We are deeply grateful to The Ahmanson Foundation for its generous investment in this important project, and for its longstanding partnership with the UCLA Library,” Steel concluded.

“St. Catherine’s Monastery proposed a program to digitize its unparalleled manuscript collection, and an international team was assembled to help digitally preserve the ancient pages,” said Michael Phelps, EMEL director.  “EMEL is collaborating with the monastery to install world-class digitization systems, and the UCLA Library will host the images online on behalf of the monastery. The three-year project will digitize the monastery’s extensive collection of Syriac and Arabic manuscripts.” 

"Built in the sixth century, St. Catherine’s Monastery holds the oldest continually operating library in the world. The library’s manuscripts cover subjects ranging from history and philosophy to medicine and spirituality, making them of interest to scholars and learners across a wide range of disciplines. Among the monastery’s most important Syriac and Arabic manuscripts are a fifth century copy of the Gospels in Syriac; a Syriac copy of the Lives of Women Saints dated 779 CE; the Syriac version of the Apology of Aristides, of which the Greek original has been lost; and numerous Arabic manuscripts from the ninth and tenth centuries, when Middle Eastern Christians first began to use Arabic as a literary language"

View Map + Bookmark Entry

Harvard & M.I.T. to Offer Free Online Courses May 2, 2012

On May 2, 2012 Harvard and the Massachusetts Institute of Technology announced a new nonprofit partnership, known as edX, to offer free online courses from both universities.

"Harvard’s involvement follows M.I.T.’s announcement in December that it was starting an open online learning project, MITx. Its first course, Circuits and Electronics, began in March, enrolling about 120,000 students, some 10,000 of whom made it through the recent midterm exam. Those who complete the course will get a certificate of mastery and a grade, but no official credit. Similarly, edX courses will offer a certificate but not credit.

"But Harvard and M.I.T. have a rival — they are not the only elite universities planning to offer free massively open online courses, or MOOCs, as they are known. This month, Stanford, Princeton, the University of Pennsylvania and the University of Michigan announced their partnership with a new commercial company, Coursera, with $16 million in venture capital.

"Meanwhile, Sebastian Thrun, the Stanford professor who made headlines last fall when 160,000 students signed up for his Artificial Intelligence course, has attracted more than 200,000 students to the six courses offered at his new company, Udacity.

"The technology for online education, with video lesson segments, embedded quizzes, immediate feedback and student-paced learning, is evolving so quickly that those in the new ventures say the offerings are still experimental.

“ 'My guess is that what we end up doing five years from now will look very different from what we do now,' said Provost Alan M. Garber of Harvard, who will be in charge of the university’s involvement" (http://www.nytimes.com/2012/05/03/education/harvard-and-mit-team-up-to-offer-free-online-courses.html?_r=1, accessed 05-04-2012).

View Map + Bookmark Entry

The First Annual Report Issued by a Museum in an eBook Format May 7, 2012

On May 7, 2012 the San Francisco Museum of Modern Art (SFMOMA) issued its report for its 2011 fiscal year as an iPad app.  Story of a Year was the first annual report issued by a museum in an eBook format.

"Covering the period from July 1, 2010, through June 30, 2011, Story of a Year lets users see the big picture or explore in depth with the touch of a finger complete details on all of the museum's exhibitions, programs, and acquisitions. Unlike a traditional paper or PDF annual report, the app takes full advantage of the iPad's intuitive interface to deliver an array of content that takes viewers behind the scenes at the museum—all without leaving the platform. Throughout, links within the app and to SFMOMA's website provide easy access to even more content and context" (http://www.sfmoma.org/about/press/press_news/releases/923, accessed 05-09-2012).

View Map + Bookmark Entry

How eBooks Are Changing Fiction Writing and Publishing May 12, 2012

"For years, it was a schedule as predictable as a calendar: novelists who specialized in mysteries, thrillers and romance would write one book a year, output that was considered not only sufficient, but productive.

"But the e-book age has accelerated the metabolism of book publishing. Authors are now pulling the literary equivalent of a double shift, churning out short stories, novellas or even an extra full-length book each year.  

"They are trying to satisfy impatient readers who have become used to downloading any e-book they want at the touch of a button, and the publishers who are nudging them toward greater productivity in the belief that the more their authors’ names are out in public, the bigger stars they will become. . . .

"The push for more material comes as publishers and booksellers are desperately looking for ways to hold onto readers being lured by other forms of entertainment, much of it available nonstop and almost instantaneously. Television shows are rushed online only hours after they are originally broadcast, and some movies are offered on demand at home before they have left theaters. In this environment, publishers say, producing one book a year, and nothing else, is just not enough.  

"At the same time, the Internet has allowed readers to enjoy a more intimate relationship with their favorite authors, whom they now expect to be accessible online via blogs, Q. and A.’s on Twitter and updates on Facebook. . . .

"Publishers say that a carefully released short story, timed six to eight weeks before a big hardcover comes out, can entice new readers who might be willing to pay 99 cents for a story but reluctant to spend $14 for a new e-book or $26 for a hardcover.  

"That can translate into higher preorder sales for the novel and even a lift in sales of older books by the author, which are easily accessible as e-book impulse purchases for consumers with Nooks or Kindles" (http://www.nytimes.com/2012/05/13/business/in-e-reader-age-of-writers-cramp-a-book-a-year-is-slacking.html, accessed 05-14-2012).

View Map + Bookmark Entry

The First Functioning Brain-Computer Interface for Quadriplegics May 16, 2012

On May 16, 2012 Leigh R. Hochberg, Daniel Bacher and team published "Reach and grasp by people with tetraplegia using a neurally controlled robotic arm," Nature 485 (17 May 2012) 372-75. This was the first published demonstration that humans with severe paralysis could effectively control a prosthetic arm, using tiny brain implants that transmitted neural signals to a computer.

"Paralysis following spinal cord injury, brainstem stroke, amyotrophic lateral sclerosis and other disorders can disconnect the brain from the body, eliminating the ability to perform volitional movements. A neural interface system could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices. Able-bodied monkeys have used a neural interface system to control a robotic arm, but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here we demonstrate the ability of two people with long-standing tetraplegia to use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm and hand over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor 5 years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals" (http://www.nature.com/nature/journal/v485/n7398/full/nature11076.html#/ref).

"The researchers still have many hurdles to clear before this technology becomes practical in the real world, experts said. The equipment used in the study is bulky, and the movements made with the robot are still crude. And the silicon implants generally break down over time (though the woman in the study has had hers for more than five years, and it is still effective).  

"No one has yet demonstrated an effective wireless system, nor perfected one that could bypass the robotics altogether — transmitting brain signals directly to muscles — in a way that allows for complex movements. 

"In an editorial accompanying the study, Andrew Jackson of the Institute of Neuroscience at Newcastle University wrote that economics might be the largest obstacle: 'It remains to be seen whether a neural-interface system that will be of practical use to patients with diverse clinical needs can become a commercially viable proposition' " (http://www.nytimes.com/2012/05/17/science/bodies-inert-they-moved-a-robot-with-their-minds.html?hpw, accessed 05-17-2012).
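The decoding step described above, turning the firing of a small neuronal ensemble into a movement command, can be illustrated with the classic population-vector idea: each recorded neuron is assigned a preferred direction, and intended movement is estimated as the rate-weighted sum of those directions. This is only a sketch with invented neurons and firing rates; the study's actual decoders were considerably more sophisticated.

```python
import math

neurons = [  # (preferred direction in radians, firing rate in Hz)
    (0.0, 40.0),          # tuned to rightward movement, firing hard
    (math.pi / 2, 10.0),  # tuned to upward movement
    (math.pi, 5.0),       # tuned to leftward movement, at baseline
]

def population_vector(cells, baseline=5.0):
    """Estimate intended 2-D movement direction from ensemble rates:
    each cell votes for its preferred direction, weighted by how far
    its firing rate sits above a baseline rate."""
    vx = sum((rate - baseline) * math.cos(pd) for pd, rate in cells)
    vy = sum((rate - baseline) * math.sin(pd) for pd, rate in cells)
    return vx, vy

vx, vy = population_vector(neurons)
print(vx > 0 and vx > vy)  # True: the ensemble mostly signals rightward intent
```

In a real neural interface system an estimate like this would be recomputed many times per second from the 96-channel array and streamed to the robotic arm as a velocity command.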

View Map + Bookmark Entry

Google Introduces the Knowledge Graph May 16, 2012

"The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, buildings, geographical features, movies, celestial objects, works of art and more—and instantly get information that’s relevant to your query. This is a critical first step towards building the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.

"Google’s Knowledge Graph isn’t just rooted in public sources such as Freebase, Wikipedia and the CIA World Factbook. It’s also augmented at a much larger scale—because we’re focused on comprehensive breadth and depth. It currently contains more than 500 million objects, as well as more than 3.5 billion facts about and relationships between these different objects. And it’s tuned based on what people search for, and what we find out on the web.

"The Knowledge Graph enhances Google Search in three main ways to start:  

"1. Find the right thing Language can be ambiguous—do you mean Taj Mahal the monument, or Taj Mahal the musician? Now Google understands the difference, and can narrow your search results just to the one you mean—just click on one of the links to see that particular slice of results:

"2. Get the best summary With the Knowledge Graph, Google can better understand your query, so we can summarize relevant content around that topic, including key facts you’re likely to need for that particular thing. For example, if you’re looking for Marie Curie, you’ll see when she was born and died, but you’ll also get details on her education and scientific discoveries:

"3. Go deeper and broader Finally, the part that’s the most fun of all—the Knowledge Graph can help you make some unexpected discoveries. You might learn a new fact or new connection that prompts a whole new line of inquiry. Do you know where Matt Groening, the creator of the Simpsons (one of my all-time favorite shows), got the idea for Homer, Marge and Lisa’s names? It’s a bit of a surprise:

"We’ve always believed that the perfect search engine should understand exactly what you mean and give you back exactly what you want. And we can now sometimes help answer your next question before you’ve asked it, because the facts we show are informed by what other people have searched for. For example, the information we show for Tom Cruise answers 37 percent of next queries that people ask about him. In fact, some of the most serendipitous discoveries I’ve made using the Knowledge Graph are through the magical “People also search for” feature. One of my favorite books is The White Tiger, the debut novel by Aravind Adiga, which won the prestigious Man Booker Prize. Using the Knowledge Graph, I discovered three other books that had won the same prize and one that won the Pulitzer. I can tell you, this suggestion was spot on!"
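At its core, the Knowledge Graph described above is an entity graph: named nodes carrying typed facts, plus a name index that maps an ambiguous query string to candidate entities ("find the right thing") whose facts can then be summarized. A minimal toy model follows; the entities, facts, and function names are invented for illustration and are not Google's schema.

```python
# Toy entity graph: each entity is a node with typed facts, and an
# ambiguous surface name can resolve to several candidate entities.

graph = {
    "Taj Mahal (monument)": {"type": "landmark", "location": "Agra"},
    "Taj Mahal (musician)": {"type": "person", "genre": "blues"},
    "Marie Curie": {"type": "person", "born": "1867", "died": "1934",
                    "field": "physics and chemistry"},
}
name_index = {
    "taj mahal": ["Taj Mahal (monument)", "Taj Mahal (musician)"],
    "marie curie": ["Marie Curie"],
}

def resolve(query):
    """Return candidate entities for a possibly ambiguous name."""
    return name_index.get(query.lower(), [])

def summary(entity):
    """Key facts shown alongside search results for a resolved entity."""
    return graph[entity]

print(resolve("Taj Mahal"))          # monument and musician candidates
print(summary("Marie Curie")["born"])  # 1867
```

The real graph adds the relationship edges between entities that power features like "People also search for", on top of the 500 million objects and 3.5 billion facts cited above.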

View Map + Bookmark Entry

Flame: A Virus that Collects Information May 28, 2012

On May 28, 2012 the MAHER Center of the Iranian National Computer Emergency Response Team (CERT), Kaspersky Lab, headquartered in Moscow, and the CrySyS Lab (Laboratory of Cryptography and System Security) of the Budapest University of Technology and Economics announced the discovery of Flame, malware that attacked computers running the Microsoft Windows operating system. A virus that collected information, it was arguably the most complex malware ever found.

"According to estimates by Kaspersky, Flame has infected approximately 1,000 machines, with victims including governmental organizations, educational institutions and private individuals. As of May 2012, the countries most affected are Iran, Israel, Sudan, Syria, Lebanon, Saudi Arabia, and Egypt. . . .

"According to Kaspersky, Flame has been operating in the wild since at least February 2010. CrySyS reports that the file name of the main component had been observed as early as December 2007. However, its creation date cannot be determined directly, as the creation dates for the malware's modules are falsely set to dates as early as 1994. Computer experts consider it the cause of an attack in April 2012 that caused Iranian officials to disconnect their oil terminals from the Internet. At the time the Iranian Students News Agency referred to the malware that caused the attack as "Wiper", a name given to it by the malware's creator. However, Kaspersky Lab believes that Flame may be 'a separate infection entirely' from the Wiper malware. Due to the size and complexity of the program—described as "twenty times" more complicated than Stuxnet—the Lab stated that a full analysis could require as long as ten years. On 28 May, Iran's CERT announced that it had developed a detection program and a removal tool for Flame, and had been distributing these to 'select organizations' for several weeks " (Wikipedia article on Flame (malware) accessed 05-30-2012).

View Map + Bookmark Entry

Growing Adoption of the eBook Format in the U. S. May 29, 2012

"One thing, however, is certain, and about it publishers agree: e-book sales as a percentage of overall revenue are skyrocketing. Initially such sales were a tiny proportion of overall revenue; in 2008, for instance, they were under 1 percent. No more. The head of one major publisher told me that in 2010 e-book sales accounted for 11 percent of his house’s revenue. By the end of 2011 it had more than tripled to 36 percent for the year. As John Thompson reports in the revised 2012 edition of his authoritative Merchants of Culture, in 2011 e-book sales for most publishers were “between 18 and 22 percent (possibly even higher for some houses).” Hardcover sales, the foundation of the business, continue to decline, plunging 13 percent in 2008 and suffering similar declines in the years since. According to the Pew Research Center’s most recent e-reading survey, 21 percent of American adults report reading an e-book in the past year. Soon one out of every three sales of adult trade titles will be in the form of an e-book. Readers of e-books are especially drawn to escapist and overtly commercial genres (romance, mysteries and thrillers, science fiction), and in these categories e-book sales have bulked up to as large as 60 percent. E-book sales are making inroads even with so-called literary fiction. Thompson cites Jonathan Franzen’s Freedom, published in 2010 by Farrar, Straus & Giroux, one of America’s most distinguished houses and one of several American imprints now owned by the German conglomerate Holtzbrinck. Franzen’s novel sold three-quarters of a million hardcover copies and a quarter-million e-books in the first twelve months of publication. (Franzen, by the way, detests electronic books, and is also the guy who dissed Oprah when she had the gumption to pick his earlier novel, The Corrections, for her popular book club.) Did Franzen’s e-book sales depress his hardcover sales, or did the e-book iteration introduce new readers to his work? It’s hard to know, but it’s likely a bit of both" (http://www.thenation.com/article/168125/amazon-effect, accessed 06-03-2012).

View Map + Bookmark Entry

A Large Scale Neural Network Appears to Emulate Activity in the Visual Cortex June 26, 2012

At the International Conference on Machine Learning, held in Edinburgh, Scotland from June 26 to July 1, 2012, researchers at Google and Stanford University reported that they had developed software, modeled on the way biological neurons interact with each other, that taught itself to distinguish objects in YouTube videos. Although it was most effective at recognizing cats and human faces, the system obtained 15.8% accuracy in recognizing 22,000 object categories from ImageNet, or 3,200 items in all, a 70 percent improvement over the previous best-performing software. To do so the scientists connected 16,000 computer processors to create a neural network for machine learning with more than one billion connections. Then they turned the neural network loose on the Internet to learn on its own.

Having been presented with the experimental results before the meeting, on June 25, 2012 John Markoff published an article entitled "How Many Computers to Identify a Cat? 16,000," from which I quote selections:

"Presented with 10 million digital images selected from YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats....

"The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

"Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech....

"The [YouTube] videos were selected randomly and that in itself is an interesting comment on what interests humans in the Internet age. However, the research is also striking. That is because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists that suggest individual neurons are trained inside the brain to detect significant objects.

"Currently much commercial machine vision technology is done by having humans 'supervise' the learning process by labeling specific features. In the Google research, the machine was given no help in identifying features.

“ 'The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,' Dr. Ng said.

“ 'We never told it during the training, ‘This is a cat,’ ' said Dr. Dean, who originally helped Google design the software that lets it easily break programs into many tasks that can be computed simultaneously. 'It basically invented the concept of a cat. We probably have other ones that are side views of cats.'

"The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.

"Neuroscientists have discussed the possibility of what they call the 'grandmother neuron,' specialized cells in the brain that fire when they are exposed repeatedly or “trained” to recognize a particular face of an individual.

“ 'You learn to identify a friend through repetition,' said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.

"While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.

“ 'A loose and frankly awful analogy is that our numerical parameters correspond to synapses,' said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.

“ 'It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,' the researchers wrote.

"Despite being dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.

“ 'The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,' said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”

"Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company’s search business and related services. Potential applications include improvements to image search, speech recognition and machine language translation.

"Despite their success, the Google researchers remained cautious about whether they had hit upon the holy grail of machines that can teach themselves.

“ 'It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,' said Dr. Ng.

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng, "Building High-level Features Using Large Scale Unsupervised Learning," arXiv:1112.6209 [cs.LG], 12 July 2012.  
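The unsupervised strategy Dr. Ng describes, throwing unlabeled data at an algorithm and letting it discover features on its own, can be illustrated with a toy autoencoder. This is a minimal NumPy sketch of the general principle only, not the paper's method: the actual system was a far larger, deeper sparse autoencoder trained on millions of video frames across thousands of machines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: 200 examples with 8 features and no labels at all.
X = rng.normal(size=(200, 8))

# A one-hidden-layer autoencoder: compress each example to 4 hidden
# units, then reconstruct it. Minimizing reconstruction error forces
# the hidden units to become learned features of the data, with no
# human ever labeling anything.
W_enc = rng.normal(scale=0.1, size=(8, 4))
W_dec = rng.normal(scale=0.1, size=(4, 8))

def reconstruction_loss():
    return float(np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2))

before = reconstruction_loss()
lr = 0.05
for _ in range(1000):
    H = np.tanh(X @ W_enc)                # encode: hidden "features"
    err = H @ W_dec - X                   # decode, compare to input
    dH = (err @ W_dec.T) * (1 - H ** 2)   # backpropagate the error
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ dH / len(X)

after = reconstruction_loss()
assert after < before   # reconstruction improved without any labels
```

The hidden activations here play the role of the "grandmother neuron" analogy in the article: units that respond to recurring structure in the data purely because responding to it helps reconstruction.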

View Map + Bookmark Entry

The First Book Stored in DNA and then Read August 16, 2012

American molecular geneticist George M. Church, director of the U.S. Department of Energy Center on Bioenergy at Harvard & MIT and director of the National Human Genome Research Institute (NHGRI) Center of Excellence in Genomic Science at Harvard, together with Sriram Kosuri of the Wyss Institute for Biologically Inspired Engineering and Yuan Gao of the Department of Biomedical Engineering, Johns Hopkins University, encoded an entire book into the genetic molecules of DNA, the basic building blocks of life, and then accurately read back the text. Church's book, Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves, stored in a laboratory tube, contained 53,426 words, 11 illustrations and a JavaScript program, all of which totaled 5.27 megabits of data. Written with Ed Regis, it was scheduled to be published in printed and electronic editions in October 2012. Church's book was 600 times larger than the largest data set previously encoded in DNA.

"Digital data is traditionally stored as binary code: ones and zeros. Although DNA offers the ability to use four "numbers": A, C, G and T, to minimise errors Church's team decided to stick with binary encoding, with A and C both indicating zero, and G and T representing one.  

"The sequence of the artificial DNA was built up letter by letter using existing methods with the string of As, Cs, Ts and Gs coding for the letters of the book.  

"The team developed a system in which an inkjet printer embeds short fragments of that artificially synthesised DNA onto a glass chip. Each DNA fragment also contains a digital address code that denotes its location within the original file.  

"The fragments on the chip can later be "read" using standard techniques of the sort used to decipher the sequence of ancient DNA found in archeological material. A computer can then reassemble the original file in the right order using the address codes.  

"The book – an HTML draft of a volume co-authored by the team leader – was written to the DNA with images embedded to demonstrate the storage medium's versatility.  

"DNA is such a dense storage system because it is three-dimensional. Other advanced storage media, including experimental ones such as positioning individual atoms on a surface, are essentially confined to two dimensions" (http://www.guardian.co.uk/science/2012/aug/16/book-written-dna-code?INTCMP=SRCH, accessed 09-09-2012).

Church, Gao, Kosuri, "Next-Generation Digital Information Storage in DNA," Science, August 16, 2012: DOI: 10.1126/science.1226355
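The scheme described in the quotation, one bit per base with A or C standing for 0 and G or T for 1, plus an address code on each fragment, can be sketched in Python. This is an illustrative toy, not the team's actual pipeline; the fragment and address sizes below are arbitrary choices, not the values used in the paper.

```python
import random

# One bit per base, as described above: 0 -> A or C, 1 -> G or T.
# Choosing between the two bases for each bit helps avoid long runs
# of a single base, which are error-prone to synthesize and sequence.
ZERO, ONE = "AC", "GT"

def bits_to_dna(bits, rng):
    return "".join(rng.choice(ZERO if b == "0" else ONE) for b in bits)

def dna_to_bits(dna):
    return "".join("0" if base in ZERO else "1" for base in dna)

def encode(text, rng):
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return bits_to_dna(bits, rng)

def decode(dna):
    bits = dna_to_bits(dna)
    return bytes(int(bits[i:i + 8], 2)
                 for i in range(0, len(bits), 8)).decode("utf-8")

def fragment(dna, size, rng):
    # Prefix each fragment with a 16-bit address, itself stored as DNA,
    # so an unordered pool of short strands can be put back in order.
    return [bits_to_dna(f"{addr:016b}", rng) + dna[i:i + size]
            for addr, i in enumerate(range(0, len(dna), size))]

def reassemble(fragments):
    ordered = sorted(fragments, key=lambda f: int(dna_to_bits(f[:16]), 2))
    return "".join(f[16:] for f in ordered)

rng = random.Random(0)
strand = encode("Regenesis", rng)
pool = fragment(strand, 24, rng)
random.shuffle(pool)   # strands float in the tube in no particular order
assert decode(reassemble(pool)) == "Regenesis"
```

The address prefix is the key to recovery: sequencing reads the fragments in arbitrary order, and a computer sorts them back into the original file, just as the Guardian account describes.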

♦ When the physical book edition of the Church and Regis book was published by Basic Books in October 2012 I acquired a copy. On pp. 269-272 the printed book contained an unusual "afterword", apparently written by Church, entitled "Notes: On Encoding This Book into DNA."  This discussed "some of the legal, policy, biosafety, and other issues and opportunities" pertaining to the process.  The ideas discussed were so distinctive and original that I would have liked to quote it in its entirety, but that would have been an infringement of copyright. The section ended with the following statement:

"For more information, and to explore the possibility of getting your own DNA copy of this book, please visit http://periodicplayground.com."  

When I visited the site on October 20, 2012 I viewed a message from networksolutions.com that the site was "under construction."

View Map + Bookmark Entry

The Book History Online Database, Previously a Free Service, Becomes an Expensive Private Research Source September 3, 2012

"As of 3 September [2012], the Koninklijke Bibliotheek has discontinued access to the data of Book History Online (BHO). At the end of 2012, Brill Publishers will make the data accessible (see http://www.brill.com/publications/online-resources/book-history-online). Brill is developing a new website through which the data of BHO combined with those of the Annual Bibliography for the History of the Printed Book and Libraries (ABHB) will be available. This will enable researchers to search for book historical data in one file of c. 80,000 titles. Moreover, Brill will take care of an update for BHO. If you are interested in participating in this service, please contact Matthew McLean. As soon as the new website is online, it will be posted on this page."

Fees for the privatized online database posted in November 2012 were:

Outright purchase: 5,200 Euros

Installment price: 280 Euros

Annual subscription price: 750 Euros

View Map + Bookmark Entry

The Human Genome is Packed with At Least 4,000,000 Gene Switches September 6, 2012

On September 6, 2012 ENCODE, the Encyclopedia Of DNA Elements, a project of the National Human Genome Research Institute (NHGRI) of the National Institutes of Health involving 442 scientists from 32 laboratories around the world, published six papers in the journal Nature and 24 papers in Genome Research and Genome Biology.

Among the overall results of the project to date was the monumental conclusion that:

"The human genome is packed with at least four million gene switches that reside in bits of DNA that once were dismissed as “junk” but that turn out to play critical roles in controlling how cells, organs and other tissues behave. The discovery, considered a major medical and scientific breakthrough, has enormous implications for human health because many complex diseases appear to be caused by tiny changes in hundreds of gene switches" (http://www.nytimes.com/2012/09/06/science/far-from-junk-dna-dark-matter-proves-crucial-to-health.html?pagewanted=all, accessed 09-09-2012).

View Map + Bookmark Entry

The World's Smallest Book Requires a Scanning Electron Microscope to be Seen September 25, 2012

On September 25, 2012 designboom.com reported that Vancouver-based artist Robert Chaplin, using a focused ion beam (FIB) and a scanning electron microscope (SEM) from the nano-processing facility at Simon Fraser University, had broken the Guinness record for the world's smallest book by burning the nano-typographic text from his illustrated story Teeny Ted from Turnip Town onto a microchip thinner than a strand of hair. Chaplin traced the story and type onto a single-crystalline silicon surface where the line weight resolution equated to 42 nanometers (42 millionths of a millimeter). Measuring 70 micrometers x 100 micrometers, the microchip version of the book cannot be seen with the naked eye or with a regular microscope, requiring a scanning electron microscope to be viewed.

♦ To make a fine distinction between miniatures, Robert Chaplin's creation should be considered the smallest reproduction of a printed book, as it is not technically a codex printed on paper. In 2013 the smallest printed codex was Shiki no Kusabana (Flowers of Seasons), printed by Toppan Printing of Tokyo.

View Map + Bookmark Entry

A 3D Virtual Reality Reader for eBooks October 2012

In October 2012 the Münchener Digitalisierungs Zentrum of the Bayerische Staatsbibliothek, München (Munich Digitization Center of the Bavarian State Library in Munich) introduced the 3D-BSB Explorer, a gesture-controlled 3D Interactive Book Reader developed jointly by the center and the Fraunhofer Heinrich Hertz Institute.

"For the first time ever, magnificent over one thousand year old books are also on view in a digital 3D format at the "Magnificent Manuscripts – Treasures of Book Illumination" exhibition at the Kunsthalle of the Hypo Cultural Foundation in Munich. The Interactive 3D BookReader forms part of the exhibition which opens on Friday, 19 October 2012 at the Kunsthalle of the Hypo Cultural Foundation in Munich.  

"Allowing visitors to leaf through volumes illuminated in gold and encrusted with precious stones is something that most museums simply cannot permit. Secure in their glass cases, these exhibits seem remote and untouchable. Yet with the Interactive 3D BookReader, developed by the Fraunhofer Heinrich Hertz Institute in partnership with the Bavarian State Library, visitors can now not only view digitalized books in 3D without any need for special glasses, but browse through them, enlarge them and rotate them as well. The Interactive 3D BookReader opens up virtual access to these magnificent treasures of the art of illumination. Visitors don’t even need to touch the screen as an infrared camera captures the movements of one or more of their fingers while image processing software identifies their position in space in real-time. This is how they can move, browse, rotate and scale the exhibits shown on the screen. Even the slightest of finger movements can be translated into movements of the cursor. The monitor screen of the Interactive 3D BookReader shows the user's right and left eye two slightly offset images which combine to give an in-depth impression. The two stereo views are adapted to correspond to the viewer's actual position. This means that visitors don't need special 3D glasses to view the books in three dimensions" (http://www.hhi.fraunhofer.de/media/press/experience-magnificent-books-in-digital-3d.html, accessed 02-23-2013).

In February 2013 a video demonstration of the 3D-BSB Explorer was available on YouTube at this link: http://www.youtube.com/watch?v=LpSP2ojWtIs&feature=youtu.be

View Map + Bookmark Entry

Online Advertising is Expected to Surpass Print Advertising, But TV Advertising Dwarfs Both October 2012

According to the October 2012 IAB Internet advertising revenue report by the Internet Advertising Bureau, a New York based international organization founded in 1996:

"In the first half of the year, U.S. Internet sites collected $17 billion in ad revenue, a 14 percent increase over the same period of 2011. . . . In the second half of last year, websites had $16.8 billion in ad revenue. So even if growth were to slow in the second half, digital media this year could exceed the $35.8 billion that U.S. print magazines and newspapers garnered in ad revenue in 2011.

"In fact, the digital marketing research firm eMarketer projects 2012 Internet ad spending in excess of $37 billion, while print advertising spending is projected to fall to $34.3 billion.

"Meanwhile, television ad spending—which Nielsen reports was nearly $75 billion in 2011—continues to dwarf both" (http://www.technologyreview.com/news/429638/online-advertising-poised-to-finally-surpass/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20121017, accessed 10-22-2012).

View Map + Bookmark Entry

The First 3D Printshow Takes Place in London October 2012

In October 2012 the first 3D Printshow London occurred. 

"Blending technology, art, design and medical applications with a live show that featured music and fashion, it gave visitors a glimpse of the future, where 3D printing will be used across almost every industry. The show sold out completely, meaning that we welcomed more than 4,000 visitors across three days, from industry leaders and prominent technologists to designers, artists and families. With a packed exhibition floor, three sold out live shows, the world's largest gallery of 3D printed art and a series of seminars and workshops hosted by the biggest names in 3D printing, the show was buzzing!" (http://3dprintshow.com/london2012/, accessed 07-08-2013).

View Map + Bookmark Entry

Book Mountain + Library Quarter in Spijkenisse, The Netherlands October 4, 2012

"rotterdam-based MVRDV has just completed the 'book mountain + library quarter' centrally located in the market square of spijkenisse, the netherlands. a mountain of bookshelves is contained by a glass-enclosed structure and pyramidal roof with an impressive total surface area of 9,300 square meters. corridors and platforms bordering the form are accessed by a network of stairs to allow visitors to browse the tiers of shelves. a continuous route of 480 meters culminates at the peak's reading room and cafe with panoramic views through the transparent roof. any possible damage caused to the books by direct sunlight is offset by the expected 4 year lifespan of borrowed materials.  

"additional functions including an environmental education center, meeting rooms, auditorium, offices and retail take place on site. taking the form of a traditional dutch farm to reference the agricultural roots of the village. the encompassing district integrates 42 social housing units, parking and public spaces to form a neighborhood. the masonry exterior of adjacent structures is introduced into the interior with brick pavers for the circulation spaces

"project info:

"total budget incl. parking: 30 million EUR

"start project: 2003

"start construction: may 2009

"opening: october 2012  

"public part library: 3500 m2

"environmental education centre: 112 m2

"chess club: 140 m2 "back office library: 370 m2  

"retail: 839 m2

"commercial offices: 510 m2

"length book shelves: 3205 m total (1565 m for lending, 1640 m archive)

"amount of books: currently 70.000 and space for another 80.000

"the cover is 26 m tall and spans 33,5 m x 47 m  

"parking: garage with grey water basin and 350 spaces //client: gemeente spijkenisse

"user: openbare bibliotheek spijkenisse, milieuhuis spijkenisse, schaaksportvereniging spijkenisse

"architect: MVRDV, rotterdam, nl" (http://www.designboom.com/architecture/mvrdv-book-mountain-library-quarter-spijkenisse/, accessed 01-14-2013).

View Map + Bookmark Entry

2.5 Quintillion Bytes of Data Each Day October 23, 2012

"Today the data we have available to make predictions has grown almost unimaginably large: it represents 2.5 quintillion bytes of data each day, Mr. Silver tells us, enough zeros and ones to fill a billion books of 10 million pages each. Our ability to tease the signal from the noise has not grown nearly as fast. As a result, we have plenty of data but lack the ability to extract truth from it and to build models that accurately predict the future that data portends" ("Mining Truth From Data Babel. Nate Silver’s ‘Signal and the Noise’ Examines Predictions"  By Leonard Mlodinow, NYTimes.com 10-23-2012).

View Map + Bookmark Entry

Jason Pontin Argues that "New Technologies of Composition, Not New Media, Inspire Innovations in Literary Styles and Forms" October 24, 2012

On October 24, 2012, Jason Pontin, editor in chief and publisher of MIT Technology Review, published an essay in that journal entitled "How Authors Write: The technologies of composition, not new media, inspire innovations in literary styles and forms."

From this I quote a section:

"At a time when new media are proliferating, it is tempting to imagine that authors, thinking about how their writing will appear on devices such as electronic readers, tablet computers, or smartphones, consciously or unconsciously adapt their prose to the exigencies of publishing platforms. But that’s not what actually happens. One looks in vain for many examples of stories whose style or form has been cleverly adapted to their digital destinations. Stories on e-readers look pretty much as stories have always looked. Even The Atavist, a startup in Brooklyn founded to publish multimedia long-format journalism for tablet computers, does little more than add elements like interactive maps, videos, or photographs to conventional stories. But such elements are editors’ accretions; The Atavist’s authors have not been moved, as Baker was, by the creative possibilities of a new technology. Writers are excited to experimentation not by the media in which their works are published but, rather, by the technologies they use to compose the works.

"There have been odd exceptions, of course. In Tristram Shandy, published from 1759 to 1767, Laurence Sterne employed all the techniques of contemporary printing to remind readers they held a book: there is a black page that mourns the death of a character, a squiggly line drawn by another character as he flourishes his walking stick, and on page 169, volume 3, of the first edition a leaf of paper marbling, a type of decoration that 18th-century bookbinders offered their wealthier clients. With this technology, pigments are suspended upon water or a viscous medium called “size,” creating colorful swirls, and are then transferred to a paper laid by hand upon the liquid. Each copy of the first edition of Tristram Shandy thus included unique marbling; the process was so expensive and time-consuming that later editions printed a monochrome reproduction instead. Sterne, an eccentric and tubercular Anglican priest, badgered his publisher, Dodsley, to include the page (which he called “the motley emblem of my work”) in order to suggest something about the opacity of literary meaning.

"Yet such responses to publishing technologies are rare. Besides the way Baker and Wallace used the “Insert Footnote” function in word processing software, writers have more often found inspiration in typewriting, photocopying, blogging, and, most recently, presentation software such as Microsoft PowerPoint and social media like Twitter." 

View Map + Bookmark Entry

Windows 8, With Touch Screen Features, is Released October 26, 2012

On October 26, 2012 Microsoft released the Windows 8 operating system to the general public. Development of Windows 8 started in 2009, before the release of its predecessor, Windows 7, the last iteration of Windows designed primarily for desktop computers. Windows 8 introduced very significant changes, focused primarily on mobile devices such as tablets and cell phones with touch screens, and was designed:

"to rival other mobile operating systems like Android and iOS, taking advantage of new or emerging technologies like USB 3.0, UEFI firmware, near field communications, cloud computing and the low-power ARM architecture, new security features such as malware filtering, built-in antivirus capabilities, a new installation process optimized for digital distribution, and support for secure boot (a UEFI feature which allows operating systems to be digitally signed to prevent malware from altering the boot process), the ability to synchronize certain apps and settings between multiple devices, along with other changes and performance improvements. Windows 8 also introduces a new shell and user interface based on Microsoft's "Metro" design language, featuring a new Start screen with a grid of dynamically updating tiles to represent applications, a new app platform with an emphasis on touchscreen input, and the new Windows Store to obtain and/or purchase applications to run on the operating system" (Wikipedia article on Windows 8, accessed 12-14-2012).

On December 13, 2012 MIT's technologyreview.com published an interview with Julie Larson-Green, head of product development at Microsoft, in which Larson-Green explained why Microsoft decided it was necessary to radically rethink and redesign the operating system used by 1.2 billion people:

Why was it necessary to make such broad changes in Windows 8?

"When Windows was first created 25 years ago, the assumptions about the world and what computing could do and how people were going to use it were completely different. It was at a desk, with a monitor. Before Windows 8 the goal was to launch into a window, and then you put that window away and you got another one. But with Windows 8, all the different things that you might want to do are there at a glance with the Live Tiles. Instead of having to find many little rocks to look underneath, you see a kind of dashboard of everything that’s going on and everything you care about all at once. It puts you closer to what you’re trying to get done. 

Windows 8 is clearly designed with touch in mind, and many new Windows 8 PCs have touch screens. Why is touch so important? 

"It’s a very natural way to interact. If you get a laptop with a touch screen, your brain clicks in and you just start touching what makes it faster for you. You’ll use the mouse and keyboard, but even on the regular desktop you’ll find yourself reaching up doing the things that are faster than moving the mouse and moving the mouse around. It’s not like using the mouse, which is more like puppeteering than direct manipulation. 

In the future, are all PCs going to have touch screens? 

"For cost considerations there might always be some computers without touch, but I believe that the vast majority will. We’re seeing that the computers with touch are the fastest-selling right now. I can’t imagine a computer without touch anymore. Once you’ve experienced it, it’s really hard to go back.

Did you take that approach in Windows 8 as a response to the popularity of mobile devices running iOS and Android? 

"We started planning Windows 8 in June of 2009, before we shipped Windows 7, and the iPad was only a rumor at that point. I only saw the iPad after we had this design ready to go. We were excited. A lot of things they were doing about mobile and touch were similar to what we’d been thinking. We [also] had differences. We wanted not just static icons on the desktop but Live Tiles to be a dashboard for your life; we wanted you to be able to do things in context and share across apps; we believed that multitasking is important and that people can do two things at one time. 

Can touch coexist with a keyboard and mouse interface? Some people have said it doesn’t feel right to have both the newer, touch-centric elements and the old-style desktop in Windows 8.

"It was a very definite choice to have both environments. A finger’s never going to replace the precision of a mouse. It’s always going to be easier to type on a keyboard than it is on glass. We didn’t want you to have to make a choice. Some people have said that it’s jarring, but over time we don’t hear that. It’s just getting used to something that’s different. Nothing was homogenous to start with, when you were in the browser it looked different than when you were in Excel."

View Map + Bookmark Entry

Penguin to Merge with Random House October 29, 2012

On October 29, 2012 Bertelsmann, based in Gütersloh, Germany, and Pearson, based in London, announced that they planned to combine their book publishing divisions, Random House and Penguin. The merger, which could put as much as 25% of American new book production in the hands of one company, was seen as a response to the growing power in the eBook market of dominant technology companies, including Amazon, Apple and Google, which pressured publishers to adjust their eBook strategies at a time when traditional brick-and-mortar bookstores were disappearing.

"Under the agreement, Bertelsmann, which owns Random House, would control 53 percent of the merged publishers. Bertelsmann and Pearson would share executive oversight, with Markus Dohle of Random House serving as chief executive and John Makinson of Penguin becoming the chairman.  

"The deal would consolidate Random House’s position as the largest consumer book publisher in the English-language world, giving the combined companies greater scale to deal with the challenges arising from the growth of e-books and the rise of Internet retailers like Amazon.  

“ 'Together, the two publishers will be able to share a large part of their costs, to invest more for their author and reader constituencies and to be more adventurous in trying new models in this exciting, fast-moving world of digital books and digital readers,' said Marjorie Scardino, chief executive of Pearson, which is based in London.  

"By taking control of the company, Bertelsmann . . . hopes to avoid the problems that plagued a 50-50 partnership with Sony of Japan, in which the two companies combined their music recording divisions. The venture, Sony BMG, was riven by management turmoil and differences over strategy, prompting Bertelsmann to sell its share to Sony eventually" (http://www.nytimes.com/2012/10/30/business/global/random-house-and-penguin-to-be-combined.html, accessed 10-29-2012).

View Map + Bookmark Entry

$2.6 Billion Spent on Ads on Phones and Tablets in 2012 October 29, 2012

In a New York Times article published on October 29, 2012 Claire Cain Miller estimated that advertisers would spend $2.6 billion on ads on phones and tablets in 2012, less than 2 percent of the amount they would spend over all, but more than triple what they spent in 2010.

"Google earns 56 percent of all mobile ad dollars and 96 percent of mobile search ad dollars, according to eMarketer. The company said it is on track to earn $8 billion in the coming year from mobile sales, which includes ads as well as apps, music and movies it sells in its Google Play store. But the vast majority of that money comes from ads, it said."

View Map + Bookmark Entry

A Max Planck Institute Program for Historicizing Big Data November 2012

Max Planck Institute for the History of Science, Berlin

"Working Group: Historicizing Big Data  

"Elena Aronova, Christine von Oertzen, David Sepkoski  

"Since the late 20th century, huge databases have become a ubiquitous feature of science, and Big Data has become a buzzword for describing an ostensibly new and distinctive mode of knowledge production. Some observers have even suggested that Big Data has introduced a new epistemology of science: one in which data-gathering and knowledge production phases are more explicitly separate than they have been in the past. It is vitally important not only to reconstruct a history of “data” in the longue durée (extending from the early modern period to the present), but also to critically examine historical claims about the distinctiveness of modern data practices and epistemologies.  

"The central themes of this working group—the epistemology, practice, material culture, and political economy of data—are understood as overlapping, interrelated categories. Together they form the basic, necessary components for historicizing the emergence of modern data-driven science, but they are not meant to be explored in isolation. We take for granted, for example, that a history of data depends on an understanding of the material culture—the tools and technologies used to collect, store, and analyze data—that makes data-driven science possible. More than that, data is immanent to the practices and technologies that support it: not only are epistemologies of data embodied in tools and machines, but in a concrete sense data itself cannot exist apart from them. This precise relationship between technologies, practices, and epistemologies is complex. Big Data is often, for example, associated with the era of computer databases, but this association potentially overlooks important continuities with data practices stretching back to the 18th century and earlier. The very notion of size—of 'bigness'—is also contingent on historical factors that need to be contextualized and problematized. We are therefore interested in exploring the material cultures and practices of data in a broad historical context, including the development of information processing technologies (whether paper-based or mechanical), and also in historicizing the relationships between collections of physical objects and collections of data. Additionally, attention must be paid to visualizations and representations of data (graphs, images, printouts, etc.), both as working tools and also as means of communication.  

"In the era following the Second World War, new technologies have emerged that allow new kinds of data analysis and ever larger data production. In addition, a new cultural and political context has shaped and defined the meaning, significance, and politics of data-driven science in the Cold War and beyond. The term “Big Data” invokes the consequences of increasing economies of scale on many different levels. It ostensibly refers to the enormous amount of information collected, stored, and processed in fields as varied as genomics, climate science, paleontology, anthropology, and economics. But it also implicates a Cold War political economy, given that many of the precursors to 21st century data sciences began as national security or military projects in the Big Science era of the 1950s and 1960s. These political and cultural ramifications of data cannot be separated from the broader historical consideration of data-driven science.  

"Historicizing Big Data provides comparative breadth and historical depth to the on-going discussion of the revolutionary potential of data-intensive modes of knowledge production and the challenges the current “data deluge” poses to society." (Accessed 11-26-2012).

View Map + Bookmark Entry

A Natural History of Data November 2012

Max Planck Institute for the History of Science, Berlin 

"A Natural History of Data

"David Sepkoski

"A Natural History of Data examines the history of practices and rationalities surrounding data in the natural sciences between 1800 and the present. One feature of this transformation is the emergence of the modern digital database as the locus of scientific inquiry and practice, and the consensus that we are now living in an era of “data-driven” science. However, a major component of the project involves critically examining this development in order to historicize our modern fascination with data and databases. I do not take it for granted, for example, that digital databases are discontinuous with more traditional archival practices and technologies, nor do I assume that earlier eras of science were less “data driven” than the present. This project does seek, though, to develop a more nuanced appreciation for how data and databases have come to have such a central place in the modern scientific imagination.

"The central motivation behind this project is to historicize the development of data and database practices in the natural sciences, but it is also defined by a further set of questions, including: What is the relationship between data and the physical objects, phenomena, or experiences that they represent? How have tools and available technologies changed the epistemology and practice of data over the past 200 years? What are the consequences of the increasing economies of scale as ever more massive data collections are assembled? Have new technologies of data changed the very meaning and ontology of data itself? How have changes in scientific representations occurred in conjunction with the evolution of data practices (e.g. diagrams, graphs, photographs, atlases, compendia, etc.)? And, ultimately, is there something fundamentally new about the modern era of science in its relationship to and reliance on data and databases?" (Accessed 11-26-2012).

View Map + Bookmark Entry

Google Has 67% of the U.S. Search Market and Collects 75% of U.S. Search Ad Dollars November 4, 2012

"Regulators in the United States and Europe are conducting sweeping inquiries of Google, the dominant Internet search and advertising company. Google rose by technological innovation and business acumen; in the United States, it has 67 percent of the search market and collects 75 percent of search ad dollars. Being big is no crime, but if a powerful company uses market muscle to stifle competition, that is an antitrust violation.  

"So the government is focusing on life in Google’s world for the sprawling economic ecosystem of Web sites that depend on their ranking in search results. What is it like to live this way, in a giant’s shadow? The experience of its inhabitants is nuanced and complex, a blend of admiration and fear.  

"The relationship between Google and Web sites, publishers and advertisers often seems lopsided, if not unfair. Yet Google has also provided and nurtured a landscape of opportunity. Its ecosystem generates $80 billion a year in revenue for 1.8 million businesses, Web sites and nonprofit organizations in the United States alone, it estimates.  

"The government’s scrutiny of Google is the most exhaustive investigation of a major corporation since the pursuit of Microsoft in the late 1990s" (http://www.nytimes.com/2012/11/04/technology/google-casts-a-big-shadow-on-smaller-web-sites.html?hpw, accessed 11-04-2012).

View Map + Bookmark Entry

eBooks Accounted for 22% of All Book Spending in Second Quarter of 2012 November 5, 2012

"E-books accounted for 22% of all book spending in the second quarter of 2012, only a one percentage point gain from the first quarter of the year, but up from 14% in the comparable period in 2011, according to new figures from Bowker Market Research. In the year-to-year comparison, the hardcover and trade paperback segments both lost two percentage points each to e-books, while mass market paperbacks’ share fell from 15% in the second quarter of 2011 to 12% in this year’s second period. 

"With the fall of Borders and the growth of e-books, Amazon increased its market share of consumer book spending between the second quarter of 2011 and 2012, although its growth slowed between the first quarter of 2012 and the second period. Still, the e-tailer was easily the largest single channel for book purchases in the second quarter, with an 11 percentage-point lead over Barnes & Noble. B&N’s share of unit purchases fell by two percentage points between June 2011 and June 2012, most likely due to sluggish sales of print content through BN.com. Independent booksellers managed to hold their own in the period, maintaining a 6% share of units" (http://www.publishersweekly.com/pw/by-topic/digital/retailing/article/54609-e-books-market-share-at-22-amazon-has-27.html, accessed 11-05-2012).

View Map + Bookmark Entry

The First Teleportation from One Macroscopic Object to Another November 8, 2012

Xiao-Hui Bao and colleagues at the University of Science and Technology of China in Hefei teleported quantum information from one ensemble of atoms to another 150 meters away, a demonstration seen as a significant milestone towards quantum routers and a quantum Internet.

"Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a “quantum channel,” quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895–1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼108 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing" (Xiao-Hui Bao, Xiao-Fan Xu, Che-Ming Li, Zhen-Sheng Yuan, Chao-Yang Lu, and Jian-Wei Pan, "Quantum teleportation between remote atomic-ensemble quantum memories," Proceedings of the National Academy of Sciences, 10.1073/pnas.1207329109).
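The protocol described in the abstract is the textbook teleportation scheme of Bennett et al. (1993): share an entangled pair, Bell-measure the unknown qubit together with one half of the pair, and apply a heralded Pauli correction to the other half. As an illustration only, here is a toy state-vector simulation of that ideal protocol (the function names are my own, and this models perfect qubits, not the atomic-ensemble physics of the experiment):

```python
import numpy as np

# Toy simulation of teleportation: entangle qubits 1 and 2, Bell-measure
# qubits 0 and 1, then apply the heralded Pauli correction to qubit 2.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def apply_gate(gate, qubit, state, n=3):
    """Apply a single-qubit gate; qubit 0 is the most significant bit."""
    ops = [I2] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def cnot(control, target, state, n=3):
    """Controlled-NOT implemented by permuting basis amplitudes."""
    new = np.zeros_like(state)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        new[sum(b << (n - 1 - q) for q, b in enumerate(bits))] = state[idx]
    return new

def teleport(alpha, beta):
    """Return qubit 2's state for each of the four measurement outcomes."""
    # Qubit 0 carries the unknown state; qubits 1 and 2 start in |0>.
    state = np.kron(np.array([alpha, beta], dtype=complex),
                    np.kron([1, 0], [1, 0]))
    state = apply_gate(H, 1, state)   # make a Bell pair on qubits 1, 2 ...
    state = cnot(1, 2, state)
    state = cnot(0, 1, state)         # ... then rotate 0, 1 into Bell basis
    state = apply_gate(H, 0, state)
    outcomes = []
    for m0 in (0, 1):                 # enumerate results of measuring 0, 1
        for m1 in (0, 1):
            branch = np.array([state[i]
                               if ((i >> 2) & 1, (i >> 1) & 1) == (m0, m1)
                               else 0.0 for i in range(8)])
            branch /= np.linalg.norm(branch)
            if m1:                    # heralded corrections on qubit 2
                branch = apply_gate(X, 2, branch)
            if m0:
                branch = apply_gate(Z, 2, branch)
            base = (m0 << 2) | (m1 << 1)
            outcomes.append(branch[base:base + 2])
    return outcomes                   # every branch recovers (alpha, beta)
```

For any input amplitudes, all four measurement branches yield qubit 2 in the original state after correction, even though no physical carrier of that state travels between the nodes.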

View Map + Bookmark Entry

The First 3D Photo Booth Prints Personal Miniature Figures November 12, 2012 – August 9, 2013

On November 12, 2012 designboom.com reported on a limited-edition pop-up installation, developed by the Japanese firm omote3D.com, that produces detailed miniature figures of individual customers.

"ranging from 10 to 20 centimetres in height, the system utilizes a three-dimensional camera and printer to process and scan users, creating custom scale reproductions. The three-step procedure requires the user to keep still for 15 minutes while the scanners capture the data" (http://www.designboom.com/art/personal-action-figures-printed-at-a-japanese-photo-booth/, accessed 08-11-2013).

On August 9, 2013 designboom.com reported on an expansion of the concept developed and commercialized by Twinkind.com in Hamburg, Germany.

"ever imagined a true-to-life miniature version of yourself? well - now it's possible. these 3D printed portrait figurines by twinkind are made using state-of-the art 3D scanning and color printing technology. the miniatures are available to anyone who can make it to twinkind's studio in hamburg, with a 15cm tall figure costing €225 and a 35cm model coming in at €1290. several other size options are also available" (http://www.designboom.com/technology/3d-printed-portrait-figurines-by-twinkind/?utm_campaign=daily&utm_medium=e-mail&utm_source=subscribers, accessed 08-11-2013).

View Map + Bookmark Entry

Penguin Books Introduces a New eBook Lending Program November 19, 2012

On November 19, 2012 Penguin Books, one of the world's largest publishers, announced a new ebook lending program. Because of concerns about the security of digital rights management (DRM), technospeak for worries about copyright infringement through illegal copying, Penguin had pulled all of its ebooks in October 2011 from a larger library lending program powered by OverDrive. Those ebooks were also available on Amazon Kindles, and the withdrawal of Penguin's titles from the OverDrive program may also have reflected growing friction between the publisher and Amazon.com, which was competing with publishers not only in distribution but also in the production of new titles.  

"Under the new lending program, Penguin will work with Baker & Taylor [book wholesalers and distributors to libraries] to provide its ebooks to libraries in Los Angeles and Cleveland, Ohio. The program allows library members to check out an ebook, for a limited time, six months after a book becomes available in retail stores. Libraries can only check out one ebook per person (unless they buy multiple copies), and the library also has to purchase a new license for each ebook every year" (http://venturebeat.com/2012/11/19/penguin-rolling-out-new-ebook-library-lending-program/, accessed 11-19-2012).

View Map + Bookmark Entry

Memcomputing Outlined November 19, 2012

On November 19, 2012 physicists Massimiliano Di Ventra at the University of California, San Diego and Yuriy Pershin at the University of South Carolina, Columbia, outlined an emerging form of computation called memcomputing based on the discovery of nanoscale electronic components that simultaneously store and process information, much like the human brain.

At the heart of this new form of computing are nanodevices called the memristor, memcapacitor and meminductor, fundamental electronic components that store information while respectively operating as resistors, capacitors and inductors. These devices were predicted theoretically in the 1970s but first manufactured in 2008. Because these devices consume very little energy, computers using them could approach the energy efficiency of natural systems such as the human brain for the first time.  

"In present day technology, storing and processing of information occur on physically distinct regions of space. Not only does this result in space limitations; it also translates into unwanted delays in retrieving and processing of relevant information. There is, however, a class of two-terminal passive circuit elements with memory, memristive, memcapacitive and meminductive systems – collectively called memelements – that perform both information processing and storing of the initial, intermediate and final computational data on the same physical platform. Importantly, the states of these memelements adjust to input signals and provide analog capabilities unavailable in standard circuit elements, resulting in adaptive circuitry, and providing analog massively-parallel computation. All these features are tantalizingly similar to those encountered in the biological realm, thus offering new opportunities for biologically-inspired computation. Of particular importance is the fact that these memelements emerge naturally in nanoscale systems, and are therefore a consequence and a natural by-product of the continued miniaturization of electronic devices. . . ." (Di Ventra & Pershin, "Memcomputing: a computing paradigm to store and process information on the same physical platform," http://arxiv.org/pdf/1211.4487v1.pdf, accessed 11-22-2012). 
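The defining property of a memelement, storage and processing on the same physical platform, can be illustrated with the simplest published memristor model, the linear dopant-drift model of the 2008 HP device. The sketch below uses assumed, order-of-magnitude parameter values and is only an illustration of the principle, not of the authors' memcomputing architecture: the device's resistance depends on the history of the current through it, and that state persists when the current stops.

```python
# Linear-drift memristor model (after the 2008 HP device). All parameter
# values below are illustrative assumptions, not measured device data.
R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped / undoped resistance
D = 10e-9                      # m: thickness of the device
MU = 1e-14                     # m^2 s^-1 V^-1: dopant mobility
DT = 1e-6                      # s: integration time step

def simulate(currents, w=0.1 * D):
    """Euler-integrate the doped-region boundary w under a current
    waveform and return the final resistance. The device 'remembers'
    the total charge that has flowed through it."""
    for i in currents:
        w += MU * R_ON / D * i * DT        # dw/dt = mu * R_on / D * i(t)
        w = min(max(w, 0.0), D)            # boundary stays inside device
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)    # series mix of doped/undoped

r0 = simulate([0.0] * 1000)                 # no current: state unchanged
r1 = simulate([1e-3] * 1000)                # current pulse lowers resistance
r2 = simulate([1e-3] * 1000 + [0.0] * 1000)  # state persists after the pulse
```

Reading out `r1` is both a computation (it depends on the whole input waveform) and a memory read (the result is stored in the same element), which is the sense in which memelements collapse the storage/processing distinction the authors describe.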

View Map + Bookmark Entry

Coursera Enrolls Nearly Two Million Students from 196 Countries in Online Courses within its First Year November 20, 2012

On November 20, 2012 the online educational technology company Coursera, founded in Mountain View, California, by computer science professors Andrew Ng and Daphne Koller of Stanford University in April 2012, had enrolled about 1,900,000 students from at least 196 countries in at least one course. At this time Coursera was partnering with 33 universities in the United States and around the world to distribute courses over the Internet.

View Map + Bookmark Entry

The CEO of Barnes & Noble No Longer Reads Physical Books November 20, 2012

Because in 2012 Barnes & Noble was by far the largest chain of brick-and-mortar bookstores in the U.S., and probably also in the world, the statement by its CEO that he no longer read physical books—the mainstay of Barnes & Noble—was particularly significant in confirming the direction in which the format of the book was evolving: 

"It's not everyday that the CEO of a major American company say he doesn't use his company's most important product, at least in terms of how much money that product rakes in. Which is probably why it's worth two minutes of your time to listen to Barnes & Noble CEO William J. Lynch, Jr. tell Bloomberg News' Nicole Lapin precisely that in an interview on Friday.  

" 'I don't really read physical books anymore,' Lynch told Lapin while apparently standing in a Barnes & Noble lined with paper books. 'I like to read digitally. My wife is reading a lot of physical books.' 

"We get it: Barnes & Noble, the legacy book seller that introduced its line of NOOK ereaders in 2009, is and has been for some time a digital-first company. It knows that every year a growing portion of the public is reading books on tablets in lieu of physical copies. And for its survival, it wants those tablets to be NOOKs.  

"But it's still a curious thing for Lynch to say, since the majority of B&N's business is still selling pulp and ink. Last quarter, the company made $1.1 billion in revenue from selling books in stores and at BN.com. NOOKs didn't fare as well. The tablet, along with all accessories and content sold on it, only made B&N $192 million, comparable to the same quarter the previous year ($191 million), around the time Amazon's Kindle started winning the hearts and minds of small-tablet buyers.  

"Whether it likes it or not, brick and mortars are still Barnes & Noble's bread and butter. In August, the company attributed that success to 'the liquidation of Borders’ bookstores in fiscal 2012 and strong sales of the Fifty Shades of Grey series,' paidContent's Laura Hazard Owen noted" (http://www.huffingtonpost.com/2012/11/20/william-j-lynch-jr-barnes-noble-ceo_n_2167662.html, accessed 11-22-2012).

View Map + Bookmark Entry

"Anonymous" Plans to Shut Down Syrian Government Websites in Response to Countrywide Internet Blackout November 29 – December 1, 2012

"(Reuters) - Global hacking network Anonymous said it will shut down Syrian government websites around the world in response to a countrywide Internet blackout believed to be aimed at silencing the opposition to President Bashar al-Assad.  

"Syria was plunged into communication darkness on Thursday [November 29] when Internet connectivity stopped at midday. Land lines and mobile phones networks were also seriously disrupted.  

"The Syrian government said 'terrorists' had attacked Internet lines but the opposition and human rights groups suspect it to be the work of the authorities.  

"Opposition activists have used the Internet extensively to further their cause by publishing footage of aerial strikes and graphic images of civilian casualties. In the absence of a free press, they have used social media to disseminate information during the uprising and communicate with journalists abroad.  

"Anonymous, a loose affiliation of hacking groups that opposes Internet censorship, said it will remove from the Internet all web assets belonging to Assad's government that are outside Syria, starting with embassies.  

"By 1000 GMT on Friday, the website for Syria's embassy in Belgium was down but the embassy in China - which Anonymous said it would target first - was operating. Most government ministry websites were down although this could be due to the blackout.  

"Several networking experts said that it was highly unlikely that the lines had been sabotaged by anti-Assad forces.  

"CloudFlare, a firm that helps accelerate Internet traffic, said on its blog that saboteurs would have had to simultaneously sever three undersea cables into the port city of Tartous and also an overland cable through Turkey in order to cut off the entire country's Internet access.  

" 'That is unlikely to have happened,' CloudFlare said.  

" The government has been accused of cutting communications in previous assaults on rebel-held areas. Anonymous said Assad's government had physically 'pulled the plug out of the wall'.  

" 'As we discovered in Egypt, where the dictator (Hosni) Mubarak did something similar - this is not damage that can be easily or quickly repaired,' it added, referring to an Internet outage during the early days of the 2011 uprising in Egypt.  

" French foreign ministry spokesman Philippe Lalliot said the communications cut was of a matter of 'extreme concern'.  

" 'It is another demonstration of what the Damascus regime is doing to hold its people hostage. We call on the Damascus regime to reestablish communications without delay,' he said.  

"Rebels have seized a series of army bases across Syria this month, exposing Assad's loss of control in northern and eastern regions and on Thursday fighting on the outskirts of the capital blocked access to the international airport.  

"More than 40,000 people have been killed since the uprising began in March 2011, according to opposition groups.  

"Human rights organizations, including Amnesty International, said the Internet cut could signal that Assad is seeking to hide the truth of what is happening in the country from the outside world.  

"Syrian authorities have severely restricted non-state media from working in the country.  

"The hacker collective has staged cyber attacks on the U.S. Central Intelligence Agency and Britain's Serious Organised Crime Agency. Earlier this month, The Israeli government said it logged more than 44 million hacking attempts in just a few days during its military assault on Gaza after Anonymous waged a similar campaign" (http://www.reuters.com/article/2012/11/30/us-syria-crisis-internet-idUSBRE8AT0PN20121130, accessed 11-30-2012).

♦ After two days of complete Internet blackout in Syria, CloudFlare reported in its blog that Internet service partially resumed in Syria on December 1. Whether the service resumption was in response to political pressure from abroad, or to threats from Anonymous, or was caused by some other factor or factors was unclear.

View Map + Bookmark Entry

U.S. Bill to Strengthen Privacy Protection for Emails November 29, 2012

"WASHINGTON. The Senate Judiciary Committee on Thursday approved a bill that would strengthen privacy protection for e-mails by requiring law enforcement officials to obtain a warrant from a judge in most cases before gaining access to messages in individual accounts stored electronically.

"The bill is not expected to make it through Congress this year and will be the subject of negotiations next year with the Republican-led House. But the Senate panel’s approval was a first step toward an overhaul of a 1986 law that governs e-mail access and that is widely seen as outdated.  

"Senator Patrick Leahy, the Vermont Democrat who is chairman of the committee, was an architect of the 1986 law and is leading the effort to remake it. He said at the meeting on Thursday that e-mails stored by third parties should receive the same protection as papers stored in a filing cabinet in an individual's house.  

" 'Like many Americans, I am concerned about the growing and unwelcome intrusions into our private lives in cyberspace,' Mr. Leahy said. 'I also understand that we must update our digital privacy laws to keep pace with the rapid advances in technology' " (http://www.nytimes.com/2012/11/30/technology/senate-committee-approves-stricter-privacy-for-e-mail.html).

View Map + Bookmark Entry

100% of U.S. Public Libraries Now Offer Public Access to the Internet December 2012

According to the Information Policy & Access Center at the University of Maryland College Park, 100% of U.S. public libraries now offer public access to the Internet.

View Map + Bookmark Entry

Silvia Hartmann's Novel, "Dragon Lords," Cloud-Sourced with 13,000 Collaborators December 2012

In December 2012 German-born researcher, systems designer and author Silvia Hartmann issued The Dragon Lords, perhaps the first cloud-sourced novel. From her website, accessed 01-14-2015:

"In the Summer of 2012, Silvia Hartmann accepted the challenge to write a 'kick ass' fantasy-fiction novel live online on Google Drive as 'The Naked Writer,' trusting in her personal daemon to deliver not just the story, but the order and sequence of the words. Thousands of readers from around the world watched each letter appear on the screen in real time. The Dragon Lords was started on September 12th 2012 and, on November 11th 2012, the last chapter was written live in front of an audience of 150 delegates at the AMT Conference, Gatwick, England."

On December 17, 2012 Robert McCrum reviewed the work in theguardian.com, from which I quote:

"It worked like this. Hartmann's daily 90-minute composition sessions were overseen by hundreds of followers, who could put forward their ideas and influence the plot. Comments were added to the manuscript in real time, with Hartmann responding to them.

"Participants from the UK, US, Brazil, Malaysia, Russia, Australia and New Zealand took part in the project. Their input ranged from critiquing plotlines to actually naming the book. That bit of the process is probably a gimmick. At the end of the day, it's still Hartmann's novel. Indeed, one suspects that the "cloud-sourcing" element is really a new kind of global publicity under another name. I'm not sure that a serious writer, committed to self-expression, would want anything to do with this kind of collaboration. But I digress.

"Dragon Lords was completed between September and November, 2012, which is quick work, but not unseemly. Many famous novels have been written as fast as that. Faulkner famously wrote As I Lay Dying in just over a month. Georges Simenon routinely used to write a police "procedural" in a week.

"However, the making of Dragon Lords is unlike almost any previous English-language novel. More than thirteen thousand people are said to have "interacted" with the title. This is a step-change. (Many books would be grateful to have 1300 readers, let alone 13,000.)

"What's more, in this new world of creativity, all of them were hosted on Google Docs, a word processing tool that promotes and celebrates this kind of collaboration. No surprise, then, that Google is now actively puffing Dragon Lords, mostly as a new-book phenomenon. Everyone involved is being rather coy about its actual literary merit. And indeed, Alison Flood was not convinced by the work in progress. In truth, Dragon Lords is more significant as a technological, rather than a creative, feat." 

The novel was issued in both electronic and print formats.

View Map + Bookmark Entry

@Pontifex Sends First Tweet December 12, 2012

On December 12, 2012, using the Twitter handle @Pontifex, Pope Benedict XVI sent his first tweet from the Vatican. By this date he already had over 800,000 followers. The pope's first tweet read:

“Dear Friends, I am pleased to get in touch with you through Twitter. Thank you for your generous response. I bless all of you from my heart.” Soon thereafter he added “How can we celebrate the Year of Faith better in our daily lives? By speaking with Jesus in prayer, listening to what he tells you in the Gospel and looking for him in those in need.”

Use of Twitter continued the church's long-standing tradition of implementing the latest technology in communication and education, beginning with the church's sponsorship of printing at the monastery of Subiaco in 1465.

View Map + Bookmark Entry

The First YouTube Video to Reach a Billion Views December 12, 2012

Screen shot from first video to hit one billion views on youtube.com

Released in July 2012, "Gangnam Style" (Korean: 강남스타일, IPA: [kaŋnam sɯtʰail]), the 18th K-pop single by the South Korean musician Psy, became the first YouTube video to reach a billion views by December 2012. When I wrote this database entry on April 22, 2013 the music video had been viewed over 1.548 billion times on YouTube. As a measure of its social significance and commercial value, the Wikipedia article on the video contained nearly 500 footnotes, and the video's playback on YouTube was preceded by three minutes of video advertisements (which could be skipped).

View Map + Bookmark Entry

eBook Reading Jumps; Print Book Reading Declines December 17, 2012

"The population of e-book readers is growing. In the past year, the number of those who read e-books increased from 16% of all Americans ages 16 and older to 23%. At the same time, the number of those who read printed books in the previous 12 months fell from 72% of the population ages 16 and older to 67%.  

"Overall, the number of book readers in late 2012 was 75% of the population ages 16 and older, a small and statistically insignificant decline from 78% in late 2011.  

"The move toward e-book reading coincides with an increase in ownership of electronic book reading devices. In all, the number of owners of either a tablet computer or e-book reading device such as a Kindle or Nook grew from 18% in late 2011 to 33% in late 2012. As of November 2012, some 25% of Americans ages 16 and older own tablet computers such as iPads or Kindle Fires, up from 10% who owned tablets in late 2011. And in late 2012 19% of Americans ages 16 and older own e-book reading devices such as Kindles and Nooks, compared with 10% who owned such devices at the same time last year" (Pew Internet and American Life Project, 12-27-2012).

View Map + Bookmark Entry

"How the antiquarian book market has evolved for life on the web" December 19, 2012

From Wired.co.uk December 19, 2012:

By Chris Owen.

"Digital marketplaces such as Amazon have disrupted -- some might say ruined -- the traditional publishing industry. And following a flurry of launches in the last year, e-readers look set to appear in Christmas stockings everywhere. But what does all of this mean for the trade in antiquarian books?  

"Decades ago, the antiquarian book market was dominated by specialist sellers sitting in dusty shops stacked to the ceiling with first editions, signed copies, manuscripts, and rare folios. Then came the internet. With the advent of global access to information, (and stock), came the opportunity to reach out to a broader audience, and stores were battling with the new boys in the form of Abebooks (one of the very first sites on the web), and of course Amazon, which, lest we forget, started as an online book store.  

"Looking back, in 1997, there were around a million books available on the web -- at the time a seemingly huge number, but a fraction of the 140 million estimated books available online today. Books still form a massive volume of online retail trade; research has suggested that 41 percent of people who shop online have bought a book through the web.  

"Sam Missingham, co-founder of Future Book, agrees, "Amazon's second hand market has revolutionised the way people buy second hand books. There's almost no book I can't buy now if I want a copy -- 10 years ago people could have taken a year scouting through second hand shops and still not have found what I want. Now one five-second search on Amazon and you can have it delivered to your door."

"However, this marketplace brought with it opportunity and also threats -- according to Julian Wilson, Books Specialist at Christie's in London, 'There's never been a better time for people to buy such a wide range of rare books at low prices. A culture of price under cutting is causing prices to fall dramatically in the low to mid end market. For instance, 17th-century county maps of England are selling at about 30-40 percent of their value 20 years ago.'

"Missingham agrees to an extent, but suggests the mid-market is perhaps just 'shrinking slightly'. She adds, 'the mid-market for ebooks on Kindle store is being overloaded with self-published books of varying quality -- indeed oft described as a tsunami of shit.'

"At the top end however, the market is booming. Antiquarian literature has seen consistent growth, and the likes of EEBO (Early English Books Online) is allowing collectors to compare rare items and verify their credentials, while Abebooks and the records kept by resellers and auction houses have allowed them to price items effectively. Indeed, the likes of EEBO and other collectables sites are proving invaluable in the battle against forgeries, and in clearing up subjective opinion on veracity of rare books.  

"Wilson cites a recent example where he was unconvinced that a Harry Potter first edition hardback, potentially worth £10,000, was bona fide. On examining the title page closely, he discovered it was taken from a paperback and had been almost perfectly inserted into a second edition hardback, itself worth only £200.  

"Similarly he remembers a faked frontispiece in a copy of Shakespeare's First Folio, an almost legendarily rare item in the antiquarian book market. Convinced there was something amiss, he and his colleagues spent hours at the British Library comparing it to verified copies of the First Folio, as well as other online resources, and ultimately were able to put their finger on the problem: the chain lines in the paper (a distinct book "fingerprint" as it were), were different to confirmed editions.  

"Ironically, the market was also affected by the dotcom boom itself and the boom of the modern digital age -- not through the surge of online retail, but by the entrepreneurs behind the multi-million dollar sites which emerged, who drove a spike in the antiquarian book market, one previously driven by 45-65-year-old collectors which have (and still do) dominate the scene.  

"These new collectors wanted the flagship books; the likes of Darwin's The Origin of the Species. What this meant for the market was that individual books shot up in price (and have remained high) -- a first edition of Darwin's 'Origin' was worth perhaps £20,000 in 1994, but by 1999 had shot up to around £80,000 and remains around there today, nudging toward £100,000 for very fine copies.  

"However, first editions of the rest of Darwin's leading work, such as The Descent of Man remain static at around £5,000, while his other lesser known works can be found still for prices in the hundreds of pounds. This skew is true for 'Origin…' as it is for many other seminal works -- Adam Smith's The Wealth of Nations being a prime example, which Christie's sold for a record-breaking £157,250 in 2010.

"Interestingly, it is around this time that the collectables market started to witness a change in marketing strategy. Previously auction house brochures detailed the items' condition and quality; from the mid-nineties, explanations of the importance started to emerge and dominate in order that potential buyers (who had no collectables history, nor academic insight into the literary world), could understand what they were buying.  

"It's natural to think that such a drive in the top end might push the collectable books market the same way as philately, where rare stamps are now holding their value better than anything aside from gold, and indeed are proving to be a highly lucrative investment strategy. However, a statute binding all members of the Antiquarian Book Sellers Association strictly forbids this: sellers cannot position rare books as investment opportunities. It's an intriguing polarity, and one that could be affected by an additional factor, the proliferation of the e-reader.

"The democracy that cheap, easily-downloadable books brings could be seen as a threat to the sector, and indeed there has been concern that it too will drive a race to the bottom and devalue the printed page. However, Wilson thinks this could in fact bring with it a great opportunity to put value back into the publishing sector, through initial discovery of books, and the subsequent creation of ultra-limited, beautifully-made original products.  

"While this may drive a collectables market, it may not drive the sales figures (and revenue) that investors will demand. However, it does open up the debate about whether publishers should also take a longer term market responsibility as well as the shorter term financial one. 

"Missingham suggests that there are other issues to heed, namely the community aspect of any web-based marketplace, 'the main issue is how to find and discover the quality books online. Talk in the industry is of reliable gatekeepers reducing in number, the surge of dubious, untrustworthy online reviews being the main issue'.

"There's much talk of similarities between the second hand book industry and that of the humble independent record shop. While the likes of Rough Trade, itself a British icon, are prospering, the number of small private stores across the country has plummeted in the last decade -- predominantly as a result of the digital music boom, and the devaluing of music amid the clamour for "free" content. Let's hope the books trade can learn from music's mistakes."

View Map + Bookmark Entry

After Cell Phones With Cameras, Android Cameras Without Cellphones Are Introduced December 19, 2012

Once cell phone cameras, with their very limited lenses and image processors, became the most popular means of taking photographs, mainly because cell phone images could immediately be emailed or posted to websites and social media, it was probably inevitable that camera companies would introduce more full-featured cameras incorporating computers that could connect to the Internet through Internet "hot spots" or cellular connections. The first models, offered at the end of 2012, were full-featured but overpriced; still, the concept appeared to have great potential: 

"New models from Nikon and Samsung are obvious graduates of the 'if you can’t beat ’em, join ’em' school. The Nikon Coolpix S800C ($300) and Samsung’s Galaxy Camera ($500 from AT&T, $550 from Verizon) are fascinating hybrids. They merge elements of the cellphone and the camera into something entirely new and — if these flawed 1.0 versions are any indication — very promising.  

"From the back, you could mistake both of these cameras for Android phones. The big black multitouch screen is filled with app icons. Yes, app icons. These cameras can run Angry Birds, Flipboard, Instapaper, Pandora, Firefox, GPS navigation programs and so on. You download and run them exactly the same way. (That’s right, a GPS function. “What’s the address, honey? I’ll plug it into my camera.”) But the real reason you’d want an Android camera is wirelessness. Now you can take a real photo with a real camera — and post it or send it online instantly. You eliminate the whole 'get home and transfer it to the computer' step.  

"And as long as your camera can get online, why stop there? These cameras also do a fine job of handling Web surfing, e-mail, YouTube videos, Facebook feeds and other online tasks. Well, as fine a job as a phone could do, anyway.  

"You can even make Skype video calls, although you won’t be able to see your conversation partner; the lens has to be pointing toward you. Both cameras get online using Wi-Fi hot spots. The Samsung model can also get online over the cellular networks, just like a phone, so you can upload almost anywhere" (Pogue's Posts, NYTimes.com, 12-19-2012, accessed 12-21-2012).  

View Map + Bookmark Entry

"The Life of Pi": Computer Graphic Animation Indistinguishable from Nature December 22, 2012

On December 22, 2012 my wife Trish and I went to see Life of Pi, an American adventure drama film directed by Ang Lee, based on Yann Martel's 2001 novel, and distributed by 20th Century Fox, in Los Angeles. There has, of course, been much written about this imaginative novel and the film. With respect to this database the story line and the inspirational aspects of the novel and film are not strictly relevant. What was of particular interest to me was the revelation, only after I had seen the film, that most of the scenes with the tiger were done entirely by computer graphic animation. During the film every scene in which the tiger appeared looked 100% real. No computer graphic animation in any previous film that I had seen had achieved this level of realism.

"Visual effects supervisor Bill Westenhofer was no stranger to animal-oriented projects when he came aboard Ang Lee's 'Life of Pi' to realize a digital photorealistic tiger. However, the film presented challenges beyond merely creating the beast. In 'Pi,' the tiger, oddly named Richard Parker, is one of the two main characters. He and Pi Patel, played by Suraj Sharma, are castaways who survive 227 days on a lifeboat in the Pacific Ocean. 

"Westenhofer began his work by bringing four real tigers to Taiwan, where the film was partly shot, in order to obtain very precise animation references with the goal of making the animal as real-looking as possible.  

"According to Westenhofer, even the most skilled animators in the world need visual references. 'A tiger is a solid mass of muscle with a loose bag of skin surrounding it, like a cloth that is draped over it,' he says. 'We really studied the tiny nuances such as the shoulder ripple that occurs when he shifts his weight. By having the reference clips, we kept true to how the animal would react.'

"After training and rehearsing with the tigers for five weeks, the production completed 23 shots of a real tiger around the lifeboat where most of the story takes place. The film's remaining 148 tiger shots would be realized with advanced computer graphics technology. In the film, the real tigers are indistinguishable from the digital ones.  

"The lead vfx shop on the tiger shots, Rhythm and Hues, spent a full year on research and development, building upon its already vast knowledge of CG animation as it created the fearsome Richard Parker. 'Forty percent of our efforts were (born of) new technology,' which was used to create 'the hair, the way it lights, the muscle and skin system,' Westenhofer says" (http://www.variety.com/article/VR1118063581/?refcatid=13, accessed 12-23-2012).  

View Map + Bookmark Entry

With the Decline of Brick & Mortar Bookstores Public Libraries are Becoming More Commercial December 27, 2012

“ 'A library has limited shelf space, so you almost have to think of it as a store, and stock it with the things that people want,' said Jason Kuhl, the executive director of the Arlington Heights Memorial Library. Renovations going on there now will turn a swath of the library’s first floor into an area resembling a bookshop, where patrons will be pampered with cozy seating, a vending cafe and, above all, an abundance of best sellers.  

"As librarians across the nation struggle with the task of redefining their roles and responsibilities in a digital age, many public libraries are seeing an opportunity to fill the void created by the loss of traditional bookstores. Indeed, today’s libraries are increasingly adapting their collections and services based on the demands of library patrons, whom they now call customers. Today’s libraries are reinventing themselves as vibrant town squares, showcasing the latest best sellers, lending Kindles loaded with e-books, and offering grassroots technology training centers. Faced with the need to compete for shrinking municipal finances, libraries are determined to prove they can respond as quickly to the needs of the taxpayers as the police and fire department can.

“ 'I think public libraries used to seem intimidating to many people, but today, they are becoming much more user-friendly, and are no longer these big, impersonal mausoleums,' said Jeannette Woodward, a former librarian and author of 'Creating the Customer-Driven Library: Building on the Bookstore Model.'

" 'Public libraries tread a fine line,' Ms. Woodward said. 'They want to make people happy, and get them in the habit of coming into the library for popular best sellers, even if some of it might be considered junk. But libraries also understand the need for providing good information, which often can only be found at the library.'

"Cheryl Hurley, the president of the Library of America, a nonprofit publisher in New York 'dedicated to preserving America’s best and most significant writing,' said the trend of libraries catering to the public’s demand for best sellers is not surprising, especially given the ravages of the recession on public budgets.  

"Still, Ms. Hurley remains confident that libraries will never relinquish their responsibility to also provide patrons with the opportunity to discover literary works of merit, be it the classics, or more recent fiction from novelists like Philip Roth, whose work is both critically acclaimed and immensely popular.  

" 'The political ramifications for libraries today can result in driving the collection more and more from what the people want, rather than libraries shaping the tastes of the readers,' Ms. Hurley said. 'But one of the joys of visiting the public library is the serendipity of discovering another book, even though you were actually looking for that best seller that you thought you wanted.'  

“ 'It’s all about balancing the library’s mission and its marketing, and that is always a tricky dance,' she added.  

"While print books, both fiction and nonfiction, still make up the bulk of most library collections – e-books remain limited to less than 2 percent of many collections in part because some publishers limit their availability at libraries — building renovation plans these days rarely include expanding shelf space for print products. Instead, many libraries are culling their collections and adapting floor plans to accommodate technology training programs, as well as mini-conference rooms that offer private, quiet spaces frequently requested by self-employed consultants meeting with clients, as well as teenagers needing space to huddle over group projects" (http://www.nytimes.com/2012/12/28/us/libraries-try-to-update-the-bookstore-model.html?ref=global-home&_r=0, accessed 12-28-2012).

View Map + Bookmark Entry

The Secret Race to Save Manuscripts in Timbuktu and Djenne December 27, 2012

By GEOFFREY YORK, The Globe and Mail, Dec. 27 2012

"As rebels searched the bags of the truck passengers at a checkpoint near Timbuktu, one man was trying to hide his nervousness.

"Mohamed Diagayete, an owlish scholar with an eager smile, was silently praying that the rebels would not discover his laptop computer. Buried in his laptop bag was an external hard drive with a cache of thousands of valuable images and documents from Timbuktu’s greatest cultural treasure: its ancient scholarly manuscripts.  

"Radical Islamist rebels in northern Mali have repeatedly attacked the fabled city’s heritage, taking pickaxes to the tombs of local saints and smashing down a door in a 15th century mosque. They demolished several more mausoleums this week and vowed to destroy the rest, despite strong protests from UNESCO, the United Nations cultural agency.  

"With the tombs demolished, Timbuktu’s most priceless remaining legacy is its vast libraries of crumbling Arabic and African manuscripts, written in ornate calligraphy over the past eight centuries, proof of a historic African intellectual tradition. Some experts consider them as significant as the Dead Sea Scrolls – and an implicit rebuke to the harsh narrow views of the Islamist radicals.  

"But now the manuscripts, too, could be under threat. And so a covert operation is under way to save them.  

"That’s why Mr. Diagayete was so anxious to smuggle his hard drive out of Timbuktu. For years, he’s been helping preserve the manuscripts by digitizing them. But the project was halted when the Islamists seized Timbuktu in April. A few months later, Mr. Diagayete made an undercover visit to Timbuktu and brought back as many of the digital images as he could.  

"The quest to save the documents rarely leaves his thoughts. 'What will happen to the manuscripts?' he asks from the safety of Mali’s capital, Bamako, where he fled after the fall of Timbuktu.  

“ 'I’m always asking myself thousands of questions about the manuscripts,' he says. 'When we lose them, we have no other copy. It’s forever.'

"Mr. Diagayete is a researcher at the Ahmed Baba Institute, which has been digitizing the manuscripts for nearly a decade with support from foreign governments. But because of technical delays, and the huge number of manuscripts in the city (up to 700,000 by some estimates), only a tiny fraction has been copied so far.  

"The manuscripts, dating back to the 13th century, are evidence of ancient African and Islamic written scholarship, contradicting the myth of a purely oral tradition on the continent.  

"Many of the manuscripts are religious documents, but others are intellectual treatises on medicine, astronomy, literature, mathematics, chemistry, judicial law and philosophy. Many were brought to Timbuktu in camel caravans by scholars from Cairo, Baghdad and Persia who trekked to the city when it was one of the world’s greatest centres of Islamic learning. In the Middle Ages, when Europe was stagnating, the African city had 180 religious schools and a university with 20,000 students.  

"Timbuktu fell into decline after Moroccan invasions and French colonization, but its ancient gold-lettered manuscripts were preserved by dozens of owners, mostly private citizens, who kept them in wooden trunks or in their own libraries.  

"Today, under the occupation of the radical jihadists, the manuscripts face a range of threats. Conservation experts have fled the city, so the documents could be damaged by insects, mice, sand, dust or extreme temperatures. Or the Islamist militants could decide to raise money by looting and selling the documents.  

"There’s also a risk that the militants could simply destroy the manuscripts, since some are written by African mystics or moderate Sufis, regarded by the Islamist rebels as ideological enemies. Another threat is the planned Western-backed military campaign against the rebels, which could lead to house-to-house fighting in Timbuktu, further endangering the manuscripts.  

"The government-run Ahmed Baba Institute holds nearly 40,000 manuscripts in two main buildings, including a headquarters built with South African assistance in 2009. But the Islamist rebels have seized the institute, looting its computers and using its new building as a sleeping quarters.  

“ 'It’s a big setback for the institute,' said Susana Molins Lliteras, a researcher at a South African-based project to protect the Timbuktu manuscripts.  

“ 'It’s very possible that things have been lost,' she said. 'We haven’t even had a chance to research the manuscripts – we haven’t scratched the surface. So if they are lost, we won’t even know what is lost.'

"Since the rebel takeover, the private owners have scrambled to protect the manuscripts. Nobody knows exactly what they have done, but it is believed that some owners have hidden the manuscripts, buried them in the sand, or smuggled them to villages.  

"This, too, is dangerous, since the ancient texts can easily be damaged when they are moved. 'They are very fragile,' Mr. Diagayete said. 'The choice is difficult: Either we lose them all or we lose part of them. Everyone is trying to find a way to protect their manuscripts.'

"Adama Diarra, a Malian journalist, saw three owners piling their manuscripts into 50-kilogram rice bags in April, shortly after the Islamists seized Timbuktu, apparently in an effort to move them to safer places. 'The pages were falling out,' he said.  

"Mohamed Galla Dicko, director of the Ahmed Baba Institute for 17 years before leaving the institute this year, says the threat to the manuscripts is serious. 'The old pages can be damaged just by touching them,' he said. 'And the people who are moving them are not specialists in handling them.'

"While the Timbuktu manuscripts are in trouble, there is better news from another ancient Malian town, Djenne, south of the rebel-controlled territory. With help from the British Library, researchers are digitizing thousands of Djenne’s historic manuscripts – some nearly 500 years old.  

"Even when fuel and electricity were rationed after the rebel advances, dedicated workers kept toiling on the project at Djenne’s manuscript library. 'We’ve saved a large number of the manuscripts,' said Sophie Sarin, a Swedish hotel owner in Djenne.  

"The project aims to collect 200,000 images by next July. After the rebels captured northern Mali this year, Ms. Sarin travelled to London with a hard drive containing 80,000 digital images of the Djenne manuscripts. She brought them to specialists at the British Library, who were very relieved to see them, she said."

View Map + Bookmark Entry

"Libraries Have Shifted from Warehouses of Books & Materials to Become Participatory Sites of Culture and Learning" December 28, 2012

"Contemporary libraries have shifted from warehouses of books and materials to become participatory sites of culture and learning that invite, ignite and sustain conversations.

"The media scholar Henry Jenkins has identified that such participatory sites of culture share five traits:  

"· Creating learning spaces through multiple participatory media;

"· Providing opportunities for creating and sharing original works and ideas;  

"· Crafting an environment in which novices’ and experts’ roles are fluid as people learn together;  

"· Positing the library as a place where members feel a sense of belonging, value and connectedness; and  

"· Helping people believe their contributions matter by incorporating their ideas and feedback.  

"Modern libraries of all kinds – public, school, academic and special – are using this lens of participatory culture to help their communities rethink the idea of a “library.” By putting relationships with people first, libraries can recast and expand the possibilities of what we can do for communities by embodying what Guy Kawasaki calls enchantment: trustworthiness, likability, and exceptional services and products.

"Libraries in various communities provide enchantment through traditional services, like story time, bookmobiles, classes and rich collections of books. However, libraries are also incorporating innovative new roles: librarians as instructional partners, libraries as “makerspaces,” libraries as centers of community publishing and digital learning labs.  

"While libraries face many challenges – budget cuts, an ever-shifting information landscape, stereotypes that sometimes hamper how people see libraries, and rapidly evolving technologies – our greatest resource is community participation. Relationships with the community build an organic library, that is of the people, by the people and for the people" (Buffy J. Hamilton, http://www.nytimes.com/roomfordebate/2012/12/27/do-we-still-need-libraries/its-not-just-story-time-and-bookmobiles, accessed 12-29-2012). 

View Map + Bookmark Entry

The Year In Graphics and Interactives from The New York Times December 30, 2012

On December 30, 2012 The New York Times published "2012: The Year in Graphics," a review of graphics and interactives from a year that included an election, the Olympics and a devastating hurricane. For a selection of the graphics the Times included information about how they were created.

The review covered a wide range of subjects and approaches.  One of the most unusual from my perspective was Connecting Music and Gesture, originally published on April 6, 2012:

"We wanted to visualize and explain the nuances of conducting. The N.Y.U. Movement Lab recorded Alan Gilbert, the music director of the New York Philharmonic, using motion-capture technology. With the motion-capture data, we created one visualization that tracked the lines that Mr. Gilbert’s fingers drew in the air in a way that looked similar to Picasso’s 'light drawings.' ”

View Map + Bookmark Entry

Google Converts Language Translation into a Problem of Vector Space Mathematics 2013

In 2013 Tomas Mikolov, Quoc V. Le, and Ilya Sutskever at Google in Mountain View developed a technique that automatically generated dictionaries and phrase tables that convert one language into another. Instead of relying on versions of the same document in different languages, the new method used data mining techniques to model the structure of a single language, and then compared it to the structure of another language.

"The new approach is relatively straightforward. It relies on the notion that every language must describe a similar set of ideas, so the words that do this must also be similar. For example, most languages will have words for common animals such as cat, dog, cow and so on. And these words are probably used in the same way in sentences such as “a cat is an animal that is smaller than a dog.”

"The same is true of numbers. The image above shows the vector representations of the numbers one to five in English and Spanish and demonstrates how similar they are.

"This is an important clue. The new trick is to represent an entire language using the relationship between its words. The set of all the relationships, the so-called “language space”, can be thought of as a set of vectors that each point from one word to another. And in recent years, linguists have discovered that it is possible to handle these vectors mathematically. For example, the operation ‘king’ – ‘man’ + ‘woman’ results in a vector that is similar to ‘queen’.

"It turns out that different languages share many similarities in this vector space. That means the process of converting one language into another is equivalent to finding the transformation that converts one vector space into the other.

"This turns the problem of translation from one of linguistics into one of mathematics. So the problem for the Google team is to find a way of accurately mapping one vector space onto the other. For this they use a small bilingual dictionary compiled by human experts–comparing the same corpus of words in two different languages gives them a ready-made linear transformation that does the trick.

"Having identified this mapping, it is then a simple matter to apply it to the bigger language spaces. Mikolov and co say it works remarkably well. 'Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish,' they say" (http://www.technologyreview.com/view/519581/how-google-converted-language-translation-into-a-problem-of-vector-space-mathematics/, accessed 01-14-2015).

Mikolov et al., "Exploiting Similarities among Languages for Machine Translation" (2013), http://arxiv.org/pdf/1309.4168v1.pdf, accessed 01-14-2015.
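The core of the method described above can be sketched in a few lines of Python. The tiny two-dimensional "embeddings" and the word pairs below are invented for illustration (real word2vec vectors are learned from large corpora and have hundreds of dimensions); the code shows only the idea: fit a linear transformation from a small seed dictionary, then translate unseen words by nearest neighbor in the target space.

```python
import numpy as np

# Toy 2-D "embeddings", invented for illustration only -- real word2vec
# vectors are learned from large corpora and have hundreds of dimensions.
en = {"one": [1.0, 0.2], "two": [0.5, 1.5], "three": [2.0, 1.0],
      "four": [1.2, 0.8], "five": [0.3, 2.2]}
# Here the Spanish space is, by construction, the English space with its
# axes swapped, so a perfect linear mapping between the two spaces exists.
es = {"uno": [0.2, 1.0], "dos": [1.5, 0.5], "tres": [1.0, 2.0],
      "cuatro": [0.8, 1.2], "cinco": [2.2, 0.3]}

# The small bilingual seed dictionary "compiled by human experts".
pairs = [("one", "uno"), ("two", "dos"), ("three", "tres")]
X = np.array([en[a] for a, _ in pairs])   # source-space vectors
Z = np.array([es[b] for _, b in pairs])   # target-space vectors

# Fit the linear transformation W minimizing ||X @ W - Z||^2.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

def translate(word: str) -> str:
    """Map an English word's vector into Spanish space and return the
    nearest Spanish word by Euclidean distance."""
    v = np.array(en[word]) @ W
    return min(es, key=lambda w: np.linalg.norm(np.array(es[w]) - v))

# Words *not* in the seed dictionary are translated via the learned map.
print(translate("four"))   # -> cuatro
print(translate("five"))   # -> cinco
```

The least-squares fit plays the role of the paper's learned mapping between language spaces; in practice the transformation is estimated from a few thousand word pairs and applied to vocabularies of hundreds of thousands of words.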

View Map + Bookmark Entry

An Infographic About Tattooing in the Form of an Elaborate Tattoo 2013

As a 2013 school project for the Academy of Fine Arts in Łódź, Polish graphic designer Paul Marcinkowski, aka Kaplon, of Warsaw created an infographic about tattooing which appears to have been inked on his own body. The graphic detailed several facts about tattoos, such as popular reasons why people regretted having them, and the most common number of tattoos a person might have.

View Map + Bookmark Entry

The 2013 Contender for the World's Smallest Printed Book 2013

Possibly the world's smallest book: Shiki no Kusabana (Flowers of Seasons).

With pages measuring 0.75 millimeters (0.03 inches), the 22-page micro-book, entitled Shiki no Kusabana (Flowers of Seasons), contains names and monochrome illustrations of Japanese flowers such as the cherry and the plum.

Toppan Printing, which has been producing micro books since 1964, used the same micro-engraving technology employed to prevent forgery in the production of bank notes to print the letters of Shiki no Kusabana just 0.01 mm wide.

"The book is on display at Toppan's Printing Museum in Tokyo, and is on sale, together with a magnifying glass and a larger copy, for 29,400 yen (£205). Toppan said it would be applying to Guinness World Records to claim the title of world's smallest book, presently held by a 0.9 mm, 30-page Russian volume called Chameleon, created by Siberian craftsman Anatoliy Konenko in 1996." (http://www.telegraph.co.uk/culture/books/booknews/9927200/Is-this-the-worlds-smallest-book.html, accessed 05-01-2013).

View Map + Bookmark Entry

Criticism of Julian Assange and Wikileaks as Reflected in the Film, "The Fifth Estate" 2013

On January 4, 2015 I viewed the Blu-ray disc of the 2013 film The Fifth Estate directed by Bill Condon, about the news-leaking website WikiLeaks. The film stars Benedict Cumberbatch as its editor-in-chief and founder Julian Assange, and Daniel Brühl as its former and now disaffected spokesperson Daniel Domscheit-Berg. The screenplay was written by Josh Singer based in part on Domscheit-Berg's book Inside WikiLeaks: My Time with Julian Assange and the World's Most Dangerous Website (2011), as well as WikiLeaks: Inside Julian Assange's War on Secrecy (2011) by British journalists David Leigh and Luke Harding. The title of the film, The Fifth Estate, is a term used to describe the people who operate in the manner of journalists outside the normal editorial and judgmental constraints imposed on the mainstream media. The film performed poorly at the box office and garnered mixed critical reaction, drawing criticism for its screenplay and direction; however, the acting, particularly Cumberbatch's performance, was praised. My primary reason for viewing the film was to see Cumberbatch play Assange; I became interested in his acting after viewing the first three seasons of his TV series Sherlock, and his portrayal of Alan Turing in the 2014 film The Imitation Game. Like most viewers, I considered Cumberbatch's portrayals in all three roles to be extremely successful artistically, and very entertaining. Watching the film, of course, reminded me of the controversial political role of Julian Assange, who remained, as of January 2015, for all intents and purposes under house arrest in the London Embassy of Ecuador, where he was granted asylum in order to avoid extradition to Sweden, where he was wanted for questioning over allegations of rape and molestation.

Following the screenplay, Cumberbatch portrayed Assange as an egotistic, conceited, perhaps justifiably paranoid individualist who, remarkably, created Wikileaks essentially by himself, while claiming initially that he had many assistants. Daniel Domscheit-Berg, identified in the film as Daniel Berg, appears to have been Assange's first genuine, devoted helper, who became disaffected because of Assange's refusal to use normal editorial restraint and judgment in protecting the identities of people mentioned in the leaks whose lives might be compromised by publication of the information. Berg's view reflects the highly complicated role of Wikileaks as a new journalistic medium, called the "Fifth Estate" most strongly associated with bloggers, journalists, and media outlets that operate outside of the mainstream media, and often in opposition to mainstream media. This followed the concept of the "Fourth Estate", which emerged in reference to forces outside the established power structure, and is now most commonly used to refer to the independent press or media.

The reception of Wikileaks has been complex and controversial. On the one hand it has been strongly supported by advocates of freedom of communication for its exposure of criminality and injustice; on the other hand, its lack of editorial control and judgment in publishing vast quantities of unedited documents and communications, which sometimes places the lives of innocent people mentioned in the leaks in danger, has been strongly condemned. For a summary of its reception I quote the Wikipedia article on Wikileaks as it appeared on 01-05-2015. (As usual I do not include the footnotes, which can be accessed on Wikipedia's website):

"WikiLeaks has received praise as well as criticism. The organisation has won a number of awards, including The Economist's New Media Award in 2008 at the Index on Censorship Awards and Amnesty International's UK Media Award in 2009. In 2010, the New York Daily News listed WikiLeaks first among websites "that could totally change the news," and Julian Assange received the Sam Adams Award and was named the Readers' Choice for TIME's Person of the Year in 2010. The UK Information Commissioner has stated that "WikiLeaks is part of the phenomenon of the online, empowered citizen." During its first days, an Internet petition calling for the cessation of extra-judicial intimidation of WikiLeaks attracted more than six hundred thousand signatures. Sympathizers of WikiLeaks in the media and academia have commended it for exposing state and corporate secrets, increasing transparency, assisting freedom of the press, and enhancing democratic discourse while challenging powerful institutions.

"At the same time, several U.S. government officials have criticized WikiLeaks for exposing classified information and claimed that the leaks harm national security and compromise international diplomacy. Several human rights organisations requested with respect to earlier document releases that WikiLeaks adequately redact the names of civilians working with international forces, in order to prevent repercussions. Some journalists have likewise criticised a perceived lack of editorial discretion when releasing thousands of documents at once and without sufficient analysis. In response to some of the negative reaction, the UN High Commissioner for Human Rights has expressed concern over the "cyber war" against WikiLeaks, and in a joint statement with the Organization of American States the UN Special Rapporteur has called on states and other actors to keep international legal principles in mind. According to journalist Catherine A. Fitzpatrick, WikiLeaks is motivated by "a theory of anarchy," not a theory of journalism or social activism."


View Map + Bookmark Entry

"Born Digital: Guidance for Donors, Dealers, and Archival Repositories" January 2013

In January 2013 archivists and curators at six institutions, including Michael Forstrom at the Beinecke Library, Yale; Susan Thomas at the Bodleian Library, Oxford; Jeremy Leighton John at the British Library; Megan Barnard at the Harry Ransom Center, The University of Texas at Austin; Kate Donovan at the Manuscript, Archives and Rare Book Library (MARBL), Emory University; and Will Hansen and Seth Shaw at the Rubenstein Library, Duke University, published through Media Commons Press Born Digital: Guidance for Donors, Dealers, and Archival Repositories.

View Map + Bookmark Entry

The Library of Congress Has Archived 170 Billion Tweets January 4, 2013

On January 4, 2013 Gayle Osterberg, Director of Communications at the Library of Congress, reported in the Library of Congress Blog:

"An element of our mission at the Library of Congress is to collect the story of America and to acquire collections that will have research value. So when the Library had the opportunity to acquire an archive from the popular social media service Twitter, we decided this was a collection that should be here.  

"In April 2010, the Library and Twitter [based in San Francisco] signed an agreement providing the Library the public tweets from the company’s inception through the date of the agreement, an archive of tweets from 2006 through April 2010. Additionally, the Library and Twitter agreed that Twitter would provide all public tweets on an ongoing basis under the same terms.

"The Library’s first objectives were to acquire and preserve the 2006-10 archive; to establish a secure, sustainable process for receiving and preserving a daily, ongoing stream of tweets through the present day; and to create a structure for organizing the entire archive by date.

"This month, all those objectives will be completed. We now have an archive of approximately 170 billion tweets and growing. The volume of tweets the Library receives each day has grown from 140 million beginning in February 2011 to nearly half a billion tweets each day as of October 2012.  

"The Library’s focus now is on addressing the significant technology challenges to making the archive accessible to researchers in a comprehensive, useful way. These efforts are ongoing and a priority for the Library.  

"Twitter is a new kind of collection for the Library of Congress but an important one to its mission. As society turns to social media as a primary method of communication and creative expression, social media is supplementing, and in some cases supplanting, letters, journals, serial publications and other sources routinely collected by research libraries.  [Bold face is my addition, JN.]

"Although the Library has been building and stabilizing the archive and has not yet offered researchers access, we have nevertheless received approximately 400 inquiries from researchers all over the world. Some broad topics of interest expressed by researchers run from patterns in the rise of citizen journalism and elected officials’ communications to tracking vaccination rates and predicting stock market activity.

"Attached is a white paper [PDF] that summarizes the Library’s work to date and outlines present-day progress and challenges."


♦♦ To which James Gleick, author of The Information, responded in the New York Review of Books on January 16, 2013 in a blog entry titled Librarians of the Twitterverse, from which I quote this selection:

"For a brief time in the 1850s the telegraph companies of England and the United States thought that they could (and should) preserve every message that passed through their wires. Millions of telegrams—in fireproof safes. Imagine the possibilities for history!  

“ 'Fancy some future Macaulay rummaging among such a store, and painting therefrom the salient features of the social and commercial life of England in the nineteenth century,' wrote Andrew Wynter in 1854. (Wynter was what we would now call a popular-science writer; in his day job he practiced medicine, specializing in 'lunatics.') 'What might not be gathered some day in the twenty-first century from a record of the correspondence of an entire people?'

"Remind you of anything?  

"Here in the twenty-first century, the Library of Congress is now stockpiling the entire Twitterverse, or Tweetosphere, or whatever we’ll end up calling it—anyway, the corpus of all public tweets. There are a lot. The library embarked on this project in April 2010, when Jack Dorsey’s microblogging service was four years old, and four years of tweeting had produced 21 billion messages. Since then Twitter has grown, as these things do, and 21 billion tweets represents not much more than a month’s worth. As of December, the library had received 170 billion—each one a 140-character capsule garbed in metadata with the who-when-where. . . . "

View Map + Bookmark Entry

Titian's Portrait of Girolamo Fracastoro is Rediscovered January 7, 2013

On January 7, 2013 The Guardian newspaper reported that a portrait of the Renaissance physician Girolamo Fracastoro, stored in London's National Gallery since 1924, had been newly attributed to Titian, adding to the National Gallery's great collection of works by this painter:

"How was this painting misrecognised for so long? When a painting is regarded as not by anyone famous and put in a museum's dark corners, Penny suggests, a self-fulfilling process starts: curators are less likely to examine it, or clean it, or even properly frame it. But in this case fresh eyes, including those of the art historian Paul Joannides, were cast on a forgotten painting and it was taken to the lab to be restored. Discoveries there about the canvas and technique blaze the name Titian.  

"Fracastoro's portrait has been damaged over the centuries, although the new cleaning by the National Gallery has revealed a very characterful face. The background is more problematic and Penny admits its clumsy architecture remains a puzzle.  

"But Titian's genius flares in one fantastic detail that makes this painting – warts and all – truly captivating. "It's not the head that is so amazing in this picture", as Penny puts it, "but the fur."  

"We are feasting our eyes on a flecked mist of white, gold, brown and black, a virtuoso, nearly abstract performance that has all the magic of Titian. With joyous freedom and a casual command of fluffy gossamer colours, the master sensualist has recreated the richness of a lynx fur hung over Fracastoro's shoulders. "The great thing about the lynx is that it has got this brown smudge as well as black and white," enthuses Penny about the animal whose fur Titian so convincingly copied. /He shows me how lynx fur also features in Titian's nearby group portrait of the men of the Vendramin family – lynx was a favourite for rich Venetians.  "Fracostoro worked in Verona, in the empire of the Venetian republic. As well as naming syphilis, he came up with a modern theory of contagion, saying diseases were transmitted by tiny "spores". This was a big advance on the orthodoxy of the time that sicknesses such as plague were caused by bad air.  

"The lynx is an appropriate animal for such a man to sport on his shoulders, for this cat was famous for its eyesight. Italian scientific pioneers including Galileo belonged to the Academy of Lynxes, which associated the creature's eyesight with the pursuit of empirical truth" (http://www.guardian.co.uk/artanddesign/2013/jan/07/titian-painting-rediscovered-national-gallery, accessed 01-09-2013).

A scholarly article on the rediscovery by Jill Dunkerton, Jennifer Fletcher and Paul Joannides entitled "A portrait of ‘Girolamo Fracastoro’ by Titian in the National Gallery" was published in the January 2013 issue of The Burlington Magazine.

View Map + Bookmark Entry

"Information Technology Dividends Outpace All Others" January 11, 2013

"For what appears to be the first time ever, information technology companies in the Standard & Poor’s index of 500 stocks are paying more in dividends than companies in any other sector, S.&P. reported this week. Multimedia

"Off the Charts: High Tech, High Dividends S.&P. Dow Jones Indices reported that in 2012 the technology sector accounted for 14.7 percent of all dividends paid to investors in the 500 companies, up from 10.3 percent in 2011 and from a little over 5 percent back in 2004. It replaced the consumer staples sector, which had been the largest payer of dividends for the previous three years.  

"The change was largely because of the decision by Apple, now the most valuable company in the world, to begin paying dividends last year. The company had been public for more than three decades before it announced plans in March to begin making payouts. Four other technology companies in the index — all but one of which had been public for more than two decades without paying a dividend — later joined in making payments to shareholders.  

"With those changes, 60 percent — 42 — of the 70 technology stocks in the index are now dividend payers. The dividends from many technology companies are relatively small, however, and of the other sectors, only health care comes close to having as large a share of companies that do not pay dividends" (http://www.nytimes.com/2013/01/12/business/information-technology-dividends-surge-past-consumer-staples-sector.html, accessed 01-12-2013).

View Map + Bookmark Entry

The First Use of Quantum Dots in a Mass-Produced Consumer Electronics Product--Sony TVs January 14, 2013

"Sony is using nanoscale particles called quantum dots to significantly improve the color of some of its high-end Bravia televisions. It showed off the technology, which increases the range of colors that an LCD television can display by about 50 percent, at the Consumer Electronics Show in Las Vegas this week. This marks the first time that quantum dots—which for a long time have fascinated researchers because of their unusual electronic and optical properties—have been used in a mass-produced consumer electronics product."  

"The product that’s finally coming to market is far different. Sony’s new television is a modified LCD TV. In LCD televisions, each pixel is illuminated from behind by a white backlight, and different colors are created by changing the amount of light allowed to pass through three different filters—one red, one green, and one blue. LCDs originally used fluorescent bulbs as the backlight, but now most use LEDs (marketers call these products LED LCDs). QD Vision uses quantum dots to enhance the LED backlight."

"The new technology is a hit with some industry watchers (one publication named the new Sony KD-65X9000A, one of the TVs to feature the quantum dots, “Best in Show” at CES). Sony is pairing the quantum dot backlighting with other innovations, such as 3-D and and ultra-high 4K resolution, which it hopes will boost sales. Sales of TVs have been flagging.  

"Other quantum dot displays are in the works. For example, last year Nanosys announced it would have a quantum dot backlight product in a notebook in 2013, but it hasn’t disclosed the specific product (see “Quantum Dots Give Notebooks a New Glow”) (MIT TechnologyReview.com, accessed 01-14-2013).  

View Map + Bookmark Entry

The Bexar County, Texas BiblioTech: a Library Devoid of Physical Books January 14 – September 14, 2013

On January 14, 2013 Judge Nelson Wolff, inspired after reading Walter Isaacson's biography of Steve Jobs, announced that Bexar County, Texas would open in autumn 2013 a public library entirely devoid of physical books, to be called the BiblioTech:

"Not all classic library features will be lost among the modern décor: Bexar County promised study spaces, meeting rooms, and a designated interactive children's area inside of the 4,989-square-foot space.

" 'Students who live in this area of Precinct 1 […] have limited resources to complete research, use a computer or simply read a book outside of their school facilities,' Commissioner Sergio Rodriguez said in a statement. 'Once we open BiblioTech this summer, they will have a world of learning available to them all the time.'

"Located inside the Precinct 1 Satellite Offices on Pleasanton Road, the center will be open seven days a week; the county expects a summer launch. The commissioners' proposal includes 100 e-readers for circulation, 50 pre-loaded enhanced e-readers for children, 50 computer stations, 25 laptops, and 25 tablets to use on-site. The collection will include 10,000 current titles to start" (Pcmag.com, accessed 01-14-2013).

Right on schedule, the BiblioTech opened on September 14, 2013.

"Staffers at San Antonio's BiblioTech say it's the first 'bookless library.' And in addition to its catalog of 10,000 e-books, this techy library also provides a digital lifeline to a low-income neighborhood that sorely needs it.

"BiblioTech opened its doors Sept. 14 on the south side of San Antonio, a mostly Hispanic neighborhood where 40% of households don't have a computer and half lack broadband Internet service.

"Although the library houses no printed books -- and members can even skip the visit by checking out its e-books online -- BiblioTech's staff says the library's physical presence is still key to its success.

" 'We're finding that you really have to get your head around a paradigm shift,' said Laura Cole, BiblioTech's special projects coordinator. 'Our digital library is stored in the cloud, so you don't have to come in to get a book. But we're a traditional library in that the building itself is an important community space.' " (http://money.cnn.com/2013/10/08/technology/innovation/bibliotech-ebook-library/, accessed 11-08-2013).

View Map + Bookmark Entry

The Youngest Person to Create a Mobile Game App January 17, 2013

On January 17, 2013 the Philadelphia Tribune announced that Zora Ball, a seven-year-old first grader at the Harambee Institute of Science and Technology Charter School in Philadelphia, was the youngest person ever to create a full version of a mobile game app. Zora created the app using the Bootstrap programming language, and unveiled it at the University of Pennsylvania’s “Bootstrap Expo.”

View Map + Bookmark Entry

Online Reviews Used as Attack Weapons to Kill Sales of a Book January 20, 2013

"Reviews on Amazon are becoming attack weapons, intended to sink new books as soon as they are published.

"In the biggest, most overt and most successful of these campaigns, a group of Michael Jackson fans used Facebook and Twitter to solicit negative reviews of a new biography of the singer. They bombarded Amazon with dozens of one-star takedowns, succeeded in getting several favorable notices erased and even took credit for Amazon’s briefly removing the book from sale.  

" 'Books used to die by being ignored, but now they can be killed — and perhaps unjustly killed,' said Trevor Pinch, a Cornell sociologist who has studied Amazon reviews. 'In theory, a very good book could be killed by a group of people for malicious reasons.'

"In 'Untouchable: The Strange Life and Tragic Death of Michael Jackson,' Randall Sullivan writes that Jackson’s overuse of plastic surgery reduced his nose to little more than a pair of nostrils and that he died a virgin despite being married twice. These points in particular seem to infuriate the fans.  

"Outside Amazon, the book had a mixed reception; in The New York Times, Michiko Kakutani called it 'thoroughly dispensable.' So it is difficult to pinpoint how effective the campaign was. Still, the book has been a resounding failure in the marketplace.  

"The fans, who call themselves Michael Jackson’s Rapid Response Team to Media Attacks, say they are exercising their free speech rights to protest a book they feel is exploitative and inaccurate. 'Sullivan does everything he can to dehumanize, dismantle and destroy, against all objective fact,' a spokesman for the group said.  

"But the book’s publisher, Grove Press, said the Amazon review system was being abused in an organized campaign. 'We’re very reluctant to interfere with the free flow of discourse, but there should be transparency about people’s motivations,' said Morgan Entrekin, president of Grove/Atlantic, Grove’s parent company" (http://www.nytimes.com/2013/01/21/business/a-casualty-on-the-battlefield-of-amazons-partisan-book-reviews.html?hpw&_r=0, accessed 01-21-2013).

View Map + Bookmark Entry

An Innovative Interactive Museum Gallery Space with the Largest Multi-Touch Screen in the United States January 21, 2013

On January 21, 2013 The Cleveland Museum of Art opened Gallery One, an interactive gallery "that blends art, technology and interpretation to inspire visitors to explore the museum’s renowned collections. This revolutionary space features the largest multi-touch screen in the United States, which displays images of over 3,500 objects from the museum’s world-renowned permanent collection. This 40-foot Collection Wall allows visitors to shape their own tours of the museum and to discover the full breadth of the collections on view throughout the museum’s galleries. Throughout the space, original works of art and digital interactives engage visitors in new ways, putting curiosity, imagination and creativity at the heart of their museum experience. Innovative user-interface design and cutting-edge hardware developed exclusively for Gallery One break new ground in art museum interpretation, design and technology."

View Map + Bookmark Entry

The Pew Internet Report on Library Services in the Digital Age January 22, 2013

"Released: Janaury 22, 2013

"Patrons embrace new technologies – and would welcome more. But many still want printed books to hold their central place

"Summary of findings

"The internet has already had a major impact on how people find and access information, and now the rising popularity of e-books is helping transform Americans’ reading habits. In this changing landscape, public libraries are trying to adjust their services to these new realities while still serving the needs of patrons who rely on more traditional resources. In a new survey of Americans’ attitudes and expectations for public libraries, the Pew Research Center’s Internet & American Life Project finds that many library patrons are eager to see libraries’ digital services expand, yet also feel that print books remain important in the digital age.  

"The availability of free computers and internet access now rivals book lending and reference expertise as a vital service of libraries. In a national survey of Americans ages 16 and older:  

" • 80% of Americans say borrowing books is a “very important” service libraries provide.

" • 80% say reference librarians are a “very important” service of libraries.

" • 77% say free access to computers and the internet is a “very important” service of libraries.

"Moreover, a notable share of Americans say they would embrace even wider uses of technology at libraries such as:  

" • Online research services allowing patrons to pose questions and get answers from librarians: 37% of Americans ages 16 and older would “very likely” use an “ask a librarian” type of service, and another 36% say they would be “somewhat likely” to do so.

"• Apps-based access to library materials and programs: 35% of Americans ages 16 and older would “very likely” use that service and another 28% say they would be “somewhat likely” to do so.

" • Access to technology “petting zoos” to try out new devices: 35% of Americans ages 16 and older would “very likely” use that service and another 34% say they would be “somewhat likely” to do so.

" • GPS-navigation apps to help patrons locate material inside library buildings: 34% of Americans ages 16 and older would “very likely” use that service and another 28% say they would be “somewhat likely” to do so.

" • “Redbox”-style lending machines or kiosks located throughout the community where people can check out books, movies or music without having to go to the library itself: 33% of Americans ages 16 and older would “very likely” use that service and another 30% say they would be “somewhat likely” to do so.

" • “Amazon”-style customized book/audio/video recommendation schemes that are based on patrons’ prior library behavior: 29% of Americans ages 16 and older would “very likely” use that service and another 35% say they would be “somewhat likely” to do so." (http://libraries.pewinternet.org/2013/01/22/library-services/, accessed 03-04-2013).

View Map + Bookmark Entry

Jane Austen and Walter Scott Were the Two Most Influential Novelists of the 19th Century: A Discovery Made Through Digital Humanities Research January 26, 2013

"ANY list of the leading novelists of the 19th century, writing in English, would almost surely include Charles Dickens, Thomas Hardy, Herman Melville, Nathaniel Hawthorne and Mark Twain.

"But they do not appear at the top of a list of the most influential writers of their time. Instead, a recent study has found, Jane Austen, author of 'Pride and Prejudice,' and Sir Walter Scott, the creator of 'Ivanhoe,' had the greatest effect on other authors, in terms of writing style and themes.

"These two were 'the literary equivalent of Homo erectus, or, if you prefer, Adam and Eve,' Matthew L. Jockers [of the University of Nebraska-Lincoln] wrote in research published last year. He based his conclusion on an analysis of 3,592 works published from 1780 to 1900. It was a lot of digging, and a computer did it.  

"The study, which involved statistical parsing and aggregation of thousands of novels, made other striking observations. For example, Austen’s works cluster tightly together in style and theme, while those of George Eliot (a k a Mary Ann Evans) range more broadly, and more closely resemble the patterns of male writers. Using similar criteria, Harriet Beecher Stowe was 20 years ahead of her time, said Mr. Jockers, whose research will soon be published in a book, 'Macroanalysis: Digital Methods and Literary History' (University of Illinois Press)" (http://www.nytimes.com/2013/01/27/technology/literary-history-seen-through-big-datas-lens.html?nl=todaysheadlines&emc=edit_th_20130127&_r=0, accessed 01-27-2013).

View Map + Bookmark Entry

The FDA Approves the First Medical Robot for Hospital Use January 26, 2013

"A robot that allows patients to communicate with doctors via a telemedicine system that can move around on its own has just received 510(k) clearance by the FDA (Food and Drug Administration).  

"The robot, called RP-VITA, was created by InTouch Health [Santa Barbara, California] and iRobot [Bedford, Massachusetts] and allows doctors from anywhere in the world to communicate with patients at their hospital bedside via a telemedicine solution through an iPad interface.  

"According to iRobot and InTouch Health, RP-VITA combines the latest from iRobot in autonomous navigation and mobility technology with state-of-the-art telemedicine, and InTouch Health developed telemedicine and electronic health record integration.  

"RP-VITA makes it possible for doctors to have "doctor-to-patient consults, ensuring that the physician is in the right place at the right time and has access to the necessary clinical information to take immediate action."  

"The robot is used in ways that scientists have never before seen. In order to not get in the way of other people or objects, it outlines its own environment and utilizes a range of advanced sensors to autonomously move about a crowded space.  

"Irrespective of a doctor's location, using an intuitive iPad® interface allows them to visit patients and communicate with their co-workers with a single click.  

"A clearance from the FDA means that RP-VITA can be used for active patient monitoring in pre-operative, peri-operative, and post-surgical settings, such as prenatal, neurological, psychological, and critical care evaluations and examinations.  

"InTouch Health is selling RP-VITA into the healthcare market as its new top-of-the-line remote presence device." (http://www.medicalnewstoday.com/articles/255457.php, accessed 01-27-2013).

View Map + Bookmark Entry

Part of Library of the Ahmed Baba Institute in Timbuktu is Burned January 28 – January 30, 2013

On January 28, 2013 it was widely reported that the Ahmed Baba Institute of Higher Learning and Islamic Research (CEDRAB) in Timbuktu (Tombouctou), Mali, the repository of 30,000 historic manuscripts from the ancient Muslim world, was set aflame by Islamist fighters.

On the same day Vivienne Walt reported on Time.com that the loss from the fire was far less than total:

"In interviews with TIME on Monday, preservationists said that in a large-scale rescue operation early last year, shortly before the militants seized control of Timbuktu, thousands of manuscripts were hauled out of the Ahmed Baba Institute to a safe house elsewhere. Realizing that the documents might be prime targets for pillaging or vindictive attacks from Islamic extremists, staff left behind just a small portion of them, perhaps out of haste, but also to conceal the fact that the center had been deliberately emptied. “The documents which had been there are safe, they were not burned,” said Mahmoud Zouber, Mali’s presidential aide on Islamic affairs, a title he retains despite the overthrow of the former President, his boss, in a military coup a year ago; preserving Timbuktu’s manuscripts was a key project of his office. By phone from Bamako on Monday night, Zouber told TIME, “They were put in a very safe place. I can guarantee you. The manuscripts are in total security.”

"In a second interview from Bamako, a preservationist who did not want to be named confirmed that the center’s collection had been hidden out of reach from the militants. Neither of those interviewed wanted the location of the manuscripts named in print, for fear that remnants of the al-Qaeda occupiers might return to destroy them.

"That was confirmed too by Shamil Jeppie, director of the Timbuktu Manuscripts Project at the University of Cape Town, who told TIME on Monday night that “there were a few items in the Ahmed Baba library, but the rest were kept away.” The center, financed by the South African government as a favored project by then President Thabo Mbeki, who championed reviving Africa’s historical culture, housed state-of-the-art equipment to preserve and photograph hundreds of thousands of pages, some of which had gold illumination, astrological charts and sophisticated mathematical formulas. Jeppie said he had been enraged by the television footage on Monday of the building trashed, and blamed in part Mali’s government, which he said had done little to ensure the center’s security. “It is really sad and disturbing,” he said.

"When TIME reached Timbuktu’s Mayor Cissé in Bamako late Monday night, he tempered the remarks he had made to journalists earlier in the day, conceding in an interview that, indeed, residents had worked to rescue the center’s manuscripts before al-Qaeda occupied the city last March. Still, he said that while many of the manuscripts had been saved, “they did not move all the manuscripts.” He said he had fled earlier this month after living through months of the Islamists’ rule, a situation he described as a “true catastrophe” and “very, very hard.” He said he expects to fly back home by the weekend on a French military jet. By then, perhaps, the state of Timbuktu’s astonishing historic libraries might be clearer."

On January 30, 2013 an article in Liberation.fr stated that "more than 90%" of the manuscripts at the Ahmed Baba Institute in Timbuktu were saved from destruction.

View Map + Bookmark Entry

"The Human Brain Project" is Launched, with the Goal of Creating a Supercomputer-Based Simulation of the Human Brain January 28, 2013

On January 28, 2013 The European Commission announced funding for The Human Brain Project.

From the press release:

"The goal of the Human Brain Project is to pull together all our existing knowledge about the human brain and to reconstruct the brain, piece by piece, in supercomputer-based models and simulations. The models offer the prospect of a new understanding of the human brain and its diseases and of completely new computing and robotic technologies. On January 28, the European Commission supported this vision, announcing that it has selected the HBP as one of two projects to be funded through the new FET Flagship Program.

"Federating more than 80 European and international research institutions, the Human Brain Project is planned to last ten years (2013-2023). The cost is estimated at 1.19 billion euros. The project will also associate some important North American and Japanese partners. It will be coordinated at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, by neuroscientist Henry Markram with co-directors Karlheinz Meier of Heidelberg University, Germany, and Richard Frackowiak of Centre Hospitalier Universitaire Vaudois (CHUV) and the University of Lausanne (UNIL).

The Swiss Contribution

"Switzerland plays a vital role in the Human Brain Project. Henry Markram and his team at EPFL will coordinate the project and will also be responsible for the development and operation of the project’s Brain Simulation Platform. Richard Frackowiak and his team will be in charge of the project’s medical informatics platform; the Swiss Supercomputing Centre in Lugano will provide essential supercomputing facilities. Many other Swiss groups are also contributing to the project. Through the ETH Board, the Swiss Federal Government has allocated 75 million CHF (approximately 60 million Euros) for the period 2013-2017, to support the efforts of both Henry Markram’s laboratory at EPFL and the Swiss Supercomputing Center in Lugano. The Canton of Vaud will give 35 million CHF (28 million Euros) to build a new facility called Neuropolis for in silico life science, and centered around the Human Brain Project. This building will also be supported by the Swiss Confederation, the Rolex Group and third-party sponsors.

"The selection of the Human Brain Project as a FET Flagship is the result of more than three years of preparation and a rigorous and severe evaluation by a large panel of independent, high profile scientists, chosen by the European Commission. In the coming months, the partners will negotiate a detailed agreement with the Community for the initial first two and a half year ramp-up phase (2013-mid 2016). The project will begin work in the closing months of 2013."

View Map + Bookmark Entry

Making the iPhone 5 Look and Feel Like a Traditional Camera: the gizmon iCa case February 2013

After cell phone cameras became the most popular way of taking pictures, it was probably inevitable that a way would be found to make them look and act like traditional cameras:

"now available for the iPhone 5, the 'gizmon iCa' polycarbonate case transforms your smartphone into a working rangefinder camera. a working shutter button is built into the top of the case - making it easy to capture images without having to pre-load the camera interface app. incorporated with a viewfinder on top of the enclosure - the design helps eliminate glare in direct sunlight, as with an additional lens opening from the flash unit. the case also ships with a second interchangeable section that allows for the fitting of any of the accessory lenses" (http://www.designboom.com/technology/the-gizmon-ica-5-case-for-the-iphone-5/, accessed 02-07-2013).

Gizmon, a division of ADPLUS Co. Ltd, Kumamoto-city, Kumamoto, Japan, also produced a series of add-on lenses and filters for the iPhone that could be used without the iCA polycarbonate case.

View Map + Bookmark Entry

The First 3D Printing Pen; Drawing Enters the Third Dimension February 2013

On February 21, 2013 at 7:10 AM PST the 3Doodler 3D printing pen project on Kickstarter.com had 12,743 backers who had pledged $1,129,404, drastically exceeding the original goal of raising $30,000, with 31 days to go in the fund-raising campaign. By the time I finished writing this database entry the totals had already increased to 12,801 backers who pledged $1,134,565. Photographs and videos on the website described the remarkable features of the invention.

"A hand draws a square on a piece of paper–the standard first step for drawing a representation of a cube. But then, instead of drawing a second square on the paper, and connecting the edges with ink, the hand rises up. A plastic material emits from the pen, as the hand “draws,” or sculpts, really, the vertical edges of the cube. Then the hand caps off the cube with edges at the top. The whole structure stays sturdy.

"Drawing has entered the third dimension.

"3-D printing has always been about empowering smaller artisans, about taking what is traditionally the realm of major manufacturers, and bringing some of that power closer to the creators. The journey of 3-D printing, in many ways, has been bringing technology that’s traditionally been too expensive for individuals or even small businesses, and making that (or similar) technology available to the little guys. To wit: one company made a portable 3-D printer that, as of my writing about it in November, only cost a few hundred dollars (see: “3-D Printing on a Budget”).  

"The 3Doodler is far cheaper and easier to use, and though less capable in some ways, it has the curious effect of leapfrogging the technology that it’s descended from. 3-D printers are gaining in cultural mindshare, yet I still have to explain to some people what is meant by such a device (“printing” simply evokes an ironclad image of ink and paper, for many). Most people have never seen one; I’m a professional tech journalist, and I don’t think I’ve ever seen one in person. Yet I’m a click away from dropping $75 on my very own 3Doodler pen. It’s cheap, it’s novel, and I wouldn’t be surprised to see technology like this to have a crossover appeal with DIYers and upscale toy store owners alike.  

"As a result, many people may be introduced to a “3-D printing pen” before they even know what a 3-D printer is to begin with. Though the analogy is accurate–the 3Doodler heats and cools plastic in a controlled way, much like a 3-D printer–I wonder if the company might have more success by breaking with precedent and simply describing the thing as a “sculpture pen,” or something of the sort. I might even call it “the skywriter.”  

"Here is the ultimate democratization of 3-D printing. “If you can scribble, trace or wave a finger in the air you can use a 3Doodler,” explain Wobble Works on their Kickstarter page The clever people of Wobble Works have brought 3-D creation to masses of people who might otherwise not have had access to it. Kudos to them, and I look forward to seeing what kinds of creativity their invention unleashes" (http://www.technologyreview.com/view/511471/a-3-d-printing-pen-wows-kickstarter/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20130220, accessed 02-21-2013)

The 3Doodler was a project of Boston-based WobbleWorks LLC, an emerging toy and robotics company led by Peter Dilworth and Maxwell Bogue.

View Map + Bookmark Entry

Selling Off Print Media to Allow Fast-Growing Film & Television Assets to Grow Unencumbered by Legacy Print Businesses February 14, 2013

"How toxic have print assets become? This toxic: Media companies have begun to quarantine them.  

"On Wednesday, Time Inc., the largest magazine publisher in the country, found itself at the wrong end of a 10-foot pole. Its corporate parent, Time Warner, which has a broad and lucrative array of entertainment assets, was making plans to spin off much of the tattered print unit in a shotgun marriage with Meredith, a Midwest-based company that was trying to do much the same thing.  

"Under the plan, which is far from final, the two companies would contribute magazines to create a new, publicly held company that would be left to make its own way.  

"In shearing off its print division, Time Warner is following a path laid down by News Corporation, which announced last year that its entertainment assets and print assets would be split into two divisions. Its stock hit a five-year high when the plan was floated last June, and sometime early this summer there will be two companies – Fox Group and News Corporation – that will allow the fast-growing film and television assets to grow unencumbered by legacy print businesses.

"Print publishing may have lost significant currency with consumers and advertisers in a digital age, but investors have a far deeper animus. They see little possibility that the business as a whole will right itself, and they find its lack of growth wanting compared to the cable, television and film businesses that are now the epicenter of the media business.  

"Time Inc. may be baked into the name of Time Warner, but it long ago lost salience as a significant player in the company’s business. Time Inc. earnings dropped 5 percent last year, and the division now contributes less than 12 percent of overall sales at the company. The Time & Life building, an edifice standing tall in the middle of Midtown, was long a revered totem of the publishing business. To people in the industry who came of age back when things were good, Time Inc. was legend, having grown up not just on the force of its journalism but on tales of editors’ offices the size of racquetball courts and liquor carts rumbling through the hall spreading cheer and an aura of privilege.

"But the news of a possible sale of its magazine division came at a time when Time Inc. is laying off some 6 percent of its global work force, and many of those who remained wondered whether their jobs, if they continue to have them, might require them to move to Des Moines, the headquarters of Meredith.  

"It was a bit of a moment for the people at Time Inc. and for the publishing business as a whole. Even though Time Warner has said that it will hang on to Time, Fortune, Sports Illustrated and Money, the profits from those Olympian sounding titles are meager, less than 10 percent of the division. Time Warner is keeping them in part because they might bolt on to a reconceived CNN television network, and in part because, well, no one wanted them" (http://mediadecoder.blogs.nytimes.com/2013/02/14/time-inc-the-unwanted-party-guest-being-pushed-out-the-door/?hp, accessed 02-15-2013).

View Map + Bookmark Entry

"Billboard" Starts to Include YouTube Streams in its Calculation of the Most Popular Songs of the Week February 20, 2013

"Billboard and Nielsen announced today the addition of U.S. YouTube video streaming data to its platforms, which includes an update to the methodology for the Billboard Hot 100, the preeminent singles chart.

"The YouTube streaming data is now factored into the chart's ranking, enhancing a formula that includes Nielsen's digital download track sales and physical singles sales; as well as terrestrial radio airplay, on-demand audio streaming, and online radio streaming, also tracked by Nielsen.  

"Billboard is now incorporating all official videos on YouTube captured by Nielsen's streaming measurement, including Vevo on YouTube, and user-generated clips that utilize authorized audio into the Hot 100 and the Hot 100 formula-based genre charts – Hot Country Songs, Hot R&B/Hip-Hop Songs, R&B Songs, Rap Songs, Hot Latin Songs, Hot Rock Songs and Dance/Electronic Songs – to further reflect the divergent platforms for music consumption in today's world.

"The most notable YouTube-influenced title this week is viral sensation 'Harlem Shake' by producer Baauer, which debuts at No. 1 on both the Hot 100 and Streaming Songs charts and jumps 12-1 on Dance/Electronic Songs with 103 million views, according to YouTube. According to Nielsen, the "Harlem Shake" arrival also benefits from viral video-influenced sales of 262,000 downloads. That sales sum alone, good for a No. 3 ranking on Hot Digital Songs, would have placed the track within the top 15 on the Hot 100 without the inclusion of YouTube streams into the calculation" (http://www.billboard.com/articles/news/1549399/hot-100-news-billboard-and-nielsen-add-youtube-video-streaming-to-platforms, accessed 02-21-2013).

View Map + Bookmark Entry

Nielsen to Measure Television Viewing on Internet and Mobile Devices February 21, 2013

"For media executives, there may be nothing worse than a viewer or listener who is not counted.  

"On Thursday, in a move that might help ease those concerns, Nielsen said that it would start considering Americans who have spurned cable, but who have a television set hooked up to the Internet, as “television households,” potentially adding to the sample of homes that are rated by the company, the standard for television ratings. In front of skeptical network officials, the company pledged to measure TV viewership on iPads and other mobile devices in the future.  

"Those executives have a gnawing feeling that their consumers are being missed more and more often. As new pipelines open up for viewers and listeners through social media, mobile apps and game consoles, advertisers fret that they don’t know how many people are really seeing their ads, television networks fear they’re not getting credit for getting those people to tune in and record companies wonder how they can keep up with all the ways their customers consume music. 

"These problems will only worsen in the years to come as new technologies further erase the boundaries that once existed between television and Internet; newspaper and cable news network; video and article" (http://mediadecoder.blogs.nytimes.com/2013/02/21/tvs-connected-to-the-internet-to-be-counted-by-nielsen/?hp, accessed 02-21-2013).

View Map + Bookmark Entry

Drone Pilots Experience Stress Possibly Greater than Actual Combat Pilots February 23, 2013

"In the first study of its kind, researchers with the Defense Department have found that pilots of drone aircraft experience mental health problems like depression, anxiety and post-traumatic stress at the same rate as pilots of manned aircraft who are deployed to Iraq or Afghanistan.

"The study affirms a growing body of research finding health hazards even for those piloting machines from bases far from actual combat zones.  

“ 'Though it might be thousands of miles from the battlefield, this work still involves tough stressors and has tough consequences for those crews,' said Peter W. Singer, a scholar at the Brookings Institution who has written extensively about drones. He was not involved in the new research.  

"That study, by the Armed Forces Health Surveillance Center, which analyzes health trends among military personnel, did not try to explain the sources of mental health problems among drone pilots.  

"But Air Force officials and independent experts have suggested several potential causes, among them witnessing combat violence on live video feeds, working in isolation or under inflexible shift hours, juggling the simultaneous demands of home life with combat operations and dealing with intense stress because of crew shortages. 'Remotely piloted aircraft pilots may stare at the same piece of ground for days,' said Jean Lin Otto, an epidemiologist who was a co-author of the study. 'They witness the carnage. Manned aircraft pilots don’t do that. They get out of there as soon as possible.'  

"Dr. Otto said she had begun the study expecting that drone pilots would actually have a higher rate of mental health problems because of the unique pressures of their job.  

"Since 2008, the number of pilots of remotely piloted aircraft — the Air Force’s preferred term for drones — has grown fourfold, to nearly 1,300. The Air Force is now training more pilots for its drones than for its fighter jets and bombers combined. And by 2015, it expects to have more drone pilots than bomber pilots, although fighter pilots will remain a larger group.

"Those figures do not include drones operated by the C.I.A. in counterterrorism operations over Pakistan, Yemen and other countries" (http://www.nytimes.com/2013/02/23/us/drone-pilots-found-to-get-stress-disorders-much-as-those-in-combat-do.html?hpw&_r=0, accessed 02-23-2013).

View Map + Bookmark Entry

PBS Digital Studios: Will 3D Printing Change the World? February 28, 2013

Much attention has been paid to 3D Printing lately, with new companies developing cheaper and more efficient consumer models that have wowed the tech community. They herald 3D Printing as a revolutionary and disruptive technology, but how will these printers truly affect our society? Beyond an initial novelty, 3D Printing could have a game-changing impact on consumer culture, copyright and patent law, and even the very concept of scarcity on which our economy is based. From at-home repairs to new businesses, from medical to ecological developments, 3D Printing has an undeniably wide range of possibilities which could profoundly change our world.

View Map + Bookmark Entry

Smartphone Interactive Reading Device Will Track Eyes to Scroll Pages March 4, 2013

A much-anticipated new smartphone by Samsung, the South Korean multinational conglomerate headquartered in Samsung Town, Seoul, purports to incorporate a radically new interactive reading device:

"Samsung’s next big smartphone, to be introduced this month, will have a strong focus on software. A person who has tried the phone, called the Galaxy S IV, described one feature as particularly new and exciting: Eye scrolling.

"The phone will track a user’s eyes to determine where to scroll, said a Samsung employee who spoke on condition of anonymity because he was not authorized to speak to the news media. For example, when users read articles and their eyes reach the bottom of the page, the software will automatically scroll down to reveal the next paragraphs of text.

"The source would not explain what technology was being used to track eye movements, nor did he say whether the feature would be demonstrated at the Galaxy S IV press conference, which will be held in New York on March 14. The Samsung employee said that over all, the software features of the new phone outweighed the importance of the hardware.

"Indeed, Samsung in January filed for a trademark in Europe for the name “Eye Scroll” (No. 011510674). It filed for the “Samsung Eye Scroll” trademark in the United States in February, where it described the service as “Computer application software having a feature of sensing eye movements and scrolling displays of mobile devices, namely, mobile phones, smartphones and tablet computers according to eye movements; digital cameras; mobile telephones; smartphones; tablet computers" (http://bits.blogs.nytimes.com/2013/03/04/samsungs-new-smartphone-will-track-eyes-to-scroll-pages/?hp, accessed 03-05-2013).

When I wrote this entry in March 2013 the Wikipedia article on Samsung stated that Samsung Electronics was the "world's largest information technology company" measured by 2012 revenues. It had retained the number one position since 2009. It was also the world's largest producer of mobile phones, and the world's second largest semiconductor producer after Intel Corporation.
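Samsung never disclosed the underlying technology, but the behavior described above (scroll when the reader's gaze reaches the bottom of the page) reduces to simple threshold logic. The sketch below is purely illustrative; the function name and threshold are assumptions, not Samsung's actual implementation.

```python
# Minimal sketch of gaze-triggered scrolling, as described in the report.
# All names and thresholds here are hypothetical illustrations.

def should_scroll(gaze_y: float, viewport_height: float,
                  bottom_fraction: float = 0.9) -> bool:
    """Return True when the tracked gaze point has entered the
    bottom region of the visible page, signaling an auto-scroll."""
    return gaze_y >= viewport_height * bottom_fraction

# When the reader's eyes reach the last 10% of the screen, advance the page.
print(should_scroll(gaze_y=950, viewport_height=1000))  # True
print(should_scroll(gaze_y=400, viewport_height=1000))  # False
```

A real eye-tracking pipeline would of course smooth noisy gaze samples and debounce the trigger, but the decision step itself is this simple comparison.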

View Map + Bookmark Entry

The Historic Vatican Library to be Digitized in 2.8 Petabytes March 7, 2013

On March 7, 2013 EMC Corporation, headquartered in Hopkinton, MA, announced that it would support the Vatican Apostolic Library in digitizing its catalogue of 80,000 historic manuscripts and 8,900 incunabula as part of EMC’s Information Heritage Initiative. The project would result in 40 million pages of digital reproductions. "The first phase of the nine-year project will provision 2.8 petabytes of storage, utilizing a range of industry-leading solutions from EMC including Atmos®, Data Domain®, EMC Isilon®, NetWorker® and VNX®."
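As a sanity check on those figures, dividing the 2.8 petabytes provisioned in the first phase by the 40 million pages gives roughly 70 megabytes per page, a plausible size for an archival-quality color scan. (This assumes decimal petabytes, 10^15 bytes; the announcement does not specify.)

```python
# Back-of-the-envelope check of the provisioned storage per page,
# assuming decimal petabytes (1 PB = 10**15 bytes).
TOTAL_BYTES = 2.8 * 10**15   # 2.8 petabytes provisioned in phase one
PAGES = 40_000_000           # 40 million digitized pages

bytes_per_page = TOTAL_BYTES / PAGES
mb_per_page = bytes_per_page / 10**6
print(f"{mb_per_page:.0f} MB per page")  # → 70 MB per page
```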

View Map + Bookmark Entry

Time Warner Spins off its Print Media Division, Time Inc. March 13, 2013

". . . Only days before, Time Warner announced that it was spinning off its struggling magazine division, after failing to reach a deal to sell many of Time Inc.’s magazines to the Meredith Corporation. And the high-wattage party, with Mr. Stengel as one of the hosts, seemed like just the kind of lavish expense that Time Inc. might have to leave behind as it confronts the steep financial challenges buffeting the magazine industry.  

"The new magazine company is expected to start with $500 million to $1 billion in debt, in contrast to the publishing company that the News Corporation will spin off this summer, which will have no debt. Circulation and advertising revenue at Time Inc. have suffered sharp declines. In the three months that ended Dec. 31, revenue fell 7 percent, to $967 million, while revenue at Time Warner’s cable channels has soared. After the split occurs, Time Inc. will no longer have the lucrative film and television assets to prop it up.  'It’s sort of put up or shut up time,' Mr. Stengel acknowledged. 'I think great, let’s really test that hypothesis that people will pay for great content and great journalism. We can now invest our own capital.'  

"Time Inc. executives hope that they can build a company that can pour its profits into helping its magazines transition into the digital age, rather than hand them back to the parent company. They also hope that their new independent structure will let them restore the journalistic vision created by the founder, Henry Luce.  

"Analysts tracking the magazine industry point out that even though Time Inc.’s profits have declined in recent years, the newly created company will remain by far the biggest player in the business. On its own, Time Inc. generates one-quarter of the revenue produced by the nation’s top 50 magazines, according to data tracked by John Harrington, a magazine industry consultant.  

"He said that Time owned four of the nation’s top 10 revenue-generating magazines — People, Sports Illustrated, Time and InStyle. Together they produce $3.1 billion of the $6.379 billion generated by the nation’s top 10 grossing magazines, he estimated. People alone brings in $1.4 billion. 'Time Inc. as a whole is still the biggest force in magazine publishing,' Mr. Harrington said. 'They’re an attractive group of magazines.'  

"The announcement of the spinoff last week at least provided some clarity to nervous Time Inc. employees. On Jan. 30, Time Inc. said it would lay off 6 percent of its global work force, about 500 employees. Two weeks later, Time Warner announced it was in talks with Meredith, leaving those who had kept their jobs to nervously await word of the fate of their magazine, and whether they might have to relocate to Meredith’s headquarters in Iowa.  

"Several current and former Time Inc. employees spoke about the unease at the magazines, requesting anonymity so they could publicly discuss private conversations. 'This is for the most part a really nice place to work and people are happy to know that it will stay intact,' said a current Time Inc. executive. 'The layoffs were really hard. The uncertainty on the heels of the layoffs made it particularly painful. Some people were really nervous about this Meredith idea.'  

"A former company executive who is still in touch with many employees said, 'Morale dipped dramatically when the layoffs occurred just a couple of months ago. No merit increases were given. Bonuses were extremely low. Then rumors spread Meredith was going to purchase the magazines and morale dipped. Generally people are really pleased that Time Inc. is going to be given the opportunity to survive on its own' " (http://www.nytimes.com/2013/03/14/business/media/spinoff-of-time-inc-rattles-employees.html?hpw, accessed 03-13-2013).

View Map + Bookmark Entry

eBooks Represented 22.55% of U.S. New Book Sales in 2012 March 28, 2013

According to the Association of American Publishers monthly StatShot issued on March 28, 2013, ebooks made up 22.55% of U.S. trade publishers' book sales in 2012—an increase from 17% in 2011 and just 3% of book sales in 2009. 

View Map + Bookmark Entry

"The Reading Brain in the Digital Age: The Science of Paper versus Screens" April 11, 2013

On April 11, 2013 scientificamerican.com, the online version of Scientific American magazine, published "The Reading Brain in the Digital Age: The Science of Paper versus Screens" by Ferris Jabr. From this I quote a portion:

"Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s, however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales.

"Even so, evidence from laboratory experiments, polls and consumer reports indicates that modern screens and e-readers fail to adequately recreate certain tactile experiences of reading on paper that many people miss and, more importantly, prevent people from navigating long texts in an intuitive and satisfying way. In turn, such navigational difficulties may subtly inhibit reading comprehension. Compared with paper, screens may also drain more of our mental resources while we are reading and make it a little harder to remember what we read when we are done. A parallel line of research focuses on people's attitudes toward different kinds of media. Whether they realize it or not, many people approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper.

" 'There is physicality in reading,' says developmental psychologist and cognitive scientist Maryanne Wolf of Tufts University, 'maybe even more than we want to think about as we lurch into digital reading—as we move forward perhaps with too little reflection. I would like to preserve the absolute best of older forms, but know when to use the new.' "

View Map + Bookmark Entry

Google Introduces "Google Glass" Explorer Edition April 15, 2013

On April 15, 2013 Google introduced Google Glass, an optical head-mounted display (OHMD) wearable computer. The augmented reality device displays information in a smartphone-like hands-free format. Wearers communicate with the Internet via natural language voice commands. Google started selling Google Glass to qualified "Glass Explorers" in the US on April 15, 2013 for a limited period for $1,500, before it became available to the public on May 15, 2014 for the same price.

View Map + Bookmark Entry

The Digital Public Library of America is Launched April 18, 2013

On April 2, 2013 the Digital Public Library of America (DPLA) announced that it would be launched on April 18, 2013. The vehicle for the announcement was an article by cultural historian and director of Harvard University Libraries Robert Darnton entitled "The National Digital Public Library is Launched!" published in The New York Review of Books.

Darnton's article is of interest not only for what it says about the Digital Public Library of America but also for its comments on other digital libraries in the U.S. I quote representative selections:

"The Digital Public Library of America, to be launched on April 18, is a project to make the holdings of America’s research libraries, archives, and museums available to all Americans—and eventually to everyone in the world—online and free of charge. How is that possible? In order to answer that question, I would like to describe the first steps and immediate future of the DPLA. But before going into detail, I think it important to stand back and take a broad view of how such an ambitious undertaking fits into the development of what we commonly call an information society.  

"Speaking broadly, the DPLA represents the confluence of two currents that have shaped American civilization: utopianism and pragmatism. The utopian tendency marked the Republic at its birth, for the United States was produced by a revolution, and revolutions release utopian energy—that is, the conviction that the way things are is not the way they have to be. When things fall apart, violently and by collective action, they create the possibility of putting them back together in a new manner, according to higher principles.  

"The American revolutionaries drew their inspiration from the Enlightenment—and from other sources, too, including unorthodox varieties of religious experience and bloody-minded convictions about their birthright as free-born Englishmen. Take these ingredients, mix well, and you get the Declaration of Independence and the Bill of Rights—radical assertions of principle that would never make it through Congress today.  

"Yet the revolutionaries were practical men who had a job to do. When the Articles of Confederation proved inadequate to get it done, they set out to build a more perfect union and began again with a Constitution designed to empower an effective state while at the same time keeping it in check. Checks and balances, the Federalist Papers, sharp elbows in a scramble for wealth and power, never mind about slavery and slave wages. The founders were tough and tough-minded.

"How do these two tendencies converge in the Digital Public Library of America? For all its futuristic technology, the DPLA harkens back to the eighteenth century. What could be more utopian than a project to make the cultural heritage of humanity available to all humans? What could be more pragmatic than the designing of a system to link up millions of megabytes and deliver them to readers in the form of easily accessible texts?  

"Above all, the DPLA expresses an Enlightenment faith in the power of communication. Jefferson and Franklin—the champion of the Library of Congress and the printer turned philosopher-statesman—shared a profound belief that the health of the Republic depended on the free flow of ideas. They knew that the diffusion of ideas depended on the printing press. Yet the technology of printing had hardly changed since the time of Gutenberg, and it was not powerful enough to spread the word throughout a society with a low rate of literacy and a high degree of poverty.  

"Thanks to the Internet and a pervasive if imperfect system of education, we now can realize the dream of Jefferson and Franklin. We have the technological and economic resources to make all the collections of all our libraries accessible to all our fellow citizens—and to everyone everywhere with access to the World Wide Web. That is the mission of the DPLA.

"Put so boldly, it sounds too grand. We can easily get carried away by utopian rhetoric about the library of libraries, the mother of all libraries, the modern Library of Alexandria. To build the DPLA, we must tap the can-do, hands-on, workaday pragmatism of the American tradition. Here I will describe what the DPLA is, what it will offer to the American public at the time of its launch, and what it will become in the near future.  

"How to think of it? Not as a great edifice topped with a dome and standing on a gigantic database. The DPLA will be a distributed system of electronic content that will make the holdings of public and research libraries, archives, museums, and historical societies available, effortlessly and free of charge, to readers located at every connecting point of the Web. To make it work, we must think big and begin small. At first, the DPLA’s offering will be limited to a rich variety of collections—books, manuscripts, and works of art—that have already been digitized in cultural institutions throughout the country. Around this core it will grow, gradually accumulating material of all kinds until it will function as a national digital library.  

"The trajectory of its development can be understood from the history of its origin—and it does have a history, although it is not yet three years old. It germinated from a conference held at Harvard on October 1, 2010, a small affair involving forty persons, most of them heads of foundations and libraries. In a letter of invitation, I included a one-page memo about the basic idea: “to make the bulk of world literature available to all citizens free of charge” by creating “a grand coalition of foundations and research libraries.” In retrospect, that sounds suspiciously utopian, but everyone at the meeting agreed that the job was worth doing and that we could get it done.  We also agreed on a short description of it, which by now has become a mission statement. The DPLA, we resolved, would be “an open, distributed network of comprehensive online resources that would draw on the nation’s living heritage from libraries, universities, archives, and museums in order to educate, inform, and empower everyone in the current and future generations.”  

"Sounds good, you might say, but wasn’t Google already providing this service? True, Google set out bravely to digitize all the books in the world, and it managed to create a gigantic database, which at last count includes 30 million volumes. But along the way it collided with copyright laws and a hostile suit by copyright holders. Google tried to win over the litigants by inviting them to become partners in an even larger project. They agreed on a settlement, which transformed Google’s original enterprise, a search service that would display only short snippets of the books, into a commercial library. By purchasing subscriptions, research libraries would gain access to Google’s database—that is, to digitized copies of the books that they had already provided to Google free of charge and that they now could make available to their readers at a price to be set by Google and its new partners. To some of us, Google Book Search looked like a new monopoly of access to knowledge. To the Southern Federal District Court of New York, it was riddled with so many unacceptable provisions that it could not stand up in law.  

"After the court’s decision on March 23, 2011, to reject the settlement,* Google’s digital library was effectively dead, although Google can continue to use its database for other purposes, such as agreements with publishers to provide digital copies of their books to customers. The DPLA was not designed to replace Google Book Search; in fact, the designing had begun long before the court’s decision. But the DPLA took inspiration from Google’s bold attempt to digitize entire libraries, and it still hopes to win Google over as an ally in working for the public good. Nonetheless, you might raise another objection: Who authorized this self-appointed group to undertake such an enterprise in the first place?  

"Answer: no one. We believed that it required private initiative and that it would never get off the ground if we waited for the government to act. Therefore, we appointed a steering committee, a secretariat located in the Berkman Center at Harvard, and six groups scattered around the country, which began to study and debate key issues: governance, finance, technological infrastructure, copyright, the scope and content of the collections, and the audience to be envisioned.  

"The groups grew and developed a momentum of their own, drawing on voluntary labor; crowdsourcing (the practice of appealing for contributions to an undefined group, usually an online community, as in the case of Wikipedia); and discussion through websites, listservs, open meetings, and highly focused workshops. Hundreds of people became actively involved, and thousands more participated through an endless, noisy debate conducted on the Internet. Plenary meetings in Washington, D.C., San Francisco, and Chicago drew large crowds and a much larger virtual audience, thanks to texting, tweeting, streaming, and other electronic connections. There gradually emerged a sense of community, twenty-first-century style—open, inchoate, virtual, yet real, because held together as a body by an electronic nervous system built into the Web.  

"This virtual and real discussion took place while groups got down to work. Forty volunteers submitted “betas”—prototypes of the software that the DPLA might use, which were then to be subjected to “beta testing,” a user-based form of review. After several rounds of testing and reworking, a platform was developed that will provide links to content from library collections throughout the country and that will aggregate their metadata—i.e., catalog-type information that identifies digital files and describes their content. The metadata will be aggregated in a repository located in what the designers call the “back end” of the platform, while an application programming interface (API) in the “front end” will make it possible for all kinds of software to transmit content in diverse ways to individual users.  

"The user-friendly interface will therefore enable any reader—say, a high school student in the Bronx—to consult works that used to be stored on inaccessible shelves or locked up in treasure rooms—say, pamphlets in the Huntington Library of Los Angeles about nullification and secession in the antebellum South. Readers will simply consult the DPLA through its URL, http://dp.la/. They will then be able to search records by entering a title or the name of an author, and they will be connected through the DPLA’s site to the book or other digital object at its home institution. The illustration on page 4 shows what will appear on the user’s screen, although it is just a trial mock-up.

"Meanwhile, several of the country’s greatest libraries and museums—among them Harvard, the New York Public Library, and the Smithsonian—are prepared to make a selection of their collections available to the public through the DPLA. Those works will be accessible to everyone online at the launch on April 18, but they are only the beginning of aggregated offerings that will grow organically as far as the budget and copyright laws permit.  

"Of course, growth must be sustainable. But the greatest foundations in the country have expressed sympathy for the project. Several of them—the Sloan, Arcadia, Knight, and Soros foundations in addition to the National Endowment for the Humanities and the Institute of Museum and Library Services—have financed the first three years of the DPLA’s existence. If a dozen foundations combined forces, allotting a set amount from each to an annual budget, they could create the digital equivalent of the Library of Congress within a decade. And the sponsors naturally hope that the Library of Congress also will participate in the DPLA. . . .

"Forty states have digital libraries, and the DPLA’s service hubs—seven are already being developed in different parts of the country—will contribute the data those digital libraries have already collected to the national network. Among other activities, these service hubs will help local libraries and historical societies to scan, curate, and preserve local materials—Civil War mementos, high school yearbooks, family correspondence, anything that they have in their collections or that their constituents want to fetch from trunks and attics. As it develops, digital empowerment at the grassroots level will reinforce the building of an integrated collection at the national level, and the national collection will be linked with those of other countries.  

"The DPLA has designed its infrastructure to be interoperable with that of Europeana, a super aggregator sponsored by the European Union, which coordinates linkages among the collections of twenty-seven European countries. Within a generation, there should be a worldwide network that will bring nearly all the holdings of all libraries and museums within the range of nearly everyone on the globe. To provide a glimpse into this future, Europeana and the DPLA have produced a joint digital exhibition about immigration from Europe to the US, which will be accessible online at the time of the April 18 launch.  

"Of course, expansion, at the local or global level, depends on the ability of libraries and other institutions to develop their own digital databases—a long-term, uneven process that requires infusions of money and energy. As it takes place, great stockpiles of digital riches will grow up in locations scattered across the map. Many already exist, because the largest research libraries have already digitized enormous sections of their collections, and they will become content hubs in themselves. . . .

"How will such material be put to use? I would like to end with a final example. About 14 million students are struggling to get an education in community colleges—at least as many as those enrolled in all the country’s four-year colleges and universities. But many of them—and many more students in high schools—do not have access to a decent library. The DPLA can provide them with a spectacular digital collection, and it can tailor its offering to their needs. Many primers and reference works on subjects such as mathematics and agronomy are still valuable, even though their copyrights have expired. With expert editing, they could be adapted to introductory courses and combined in a reference library for beginners."

On April 18 the founding Executive Director of the  DPLA, Dan Cohen, a history professor and formerly director of the Roy Rosenzweig Center for History and New Media at George Mason University, Fairfax, Virginia, posted a Welcome to the site.

View Map + Bookmark Entry

How "The Brazen Bibliophiles of Timbuktu" Saved Manuscripts from Terrorists April 25, 2013

On April 25, 2013 New Republic magazine published "The Brazen Bibliophiles of Timbuktu. How a team of sneaky librarians duped Al Qaeda" by Yochi Dreazen. This illustrated article combined issues of terrorism, political reporting, librarianship and preservation of information. From it I quote selections:

"One afternoon in March, I walked through Timbuktu’s Ahmed Baba Institute of Higher Studies and Islamic Research, stepping around shards of broken glass. Until last year, the modern concrete building with its Moorish-inspired screens and light-filled courtyard was a haven for scholars drawn by the city’s unparalleled collection of medieval manuscripts. Timbuktu was once the center of a vibrant trans-Saharan network, where traders swapped not only slaves, salt, gold, and silk, but also manuscripts—scientific, artistic, and religious masterworks written in striking calligraphy on crinkly linen-based paper. Passed down through generations of Timbuktu’s ancient families, they offer a tantalizing history of a moderate Islam, in which scholars argued for women’s rights and welcomed Christians and Jews. Ahmed Baba owned a number of Korans and prayer books decorated with intricate blue and gold-leaf geometric designs, but its collections also included secular works of astronomy, medicine, and poetry.

"This vision of a philosophical, scientific Islam means little to the Al Qaeda–linked Islamist group Ansar Dine, which for most of last year ruled Timbuktu through terror, cutting off the hands of thieves, flogging women judged to be dressed immodestly, and destroying centuries-old tombs of local saints. In the summer, the militants commandeered Ahmed Baba, using it as a headquarters and barracks. Then, in January, French forces closed in on Timbuktu. As the Islamists fled, they trashed the library, burning as many of the manuscripts as they could find. The mayor of Timbuktu, Hallé Ousmani Cissé, told The Guardian that all of Ahmed Baba’s texts had been lost. “It’s true,” he said. “They have burned the manuscripts. . . .

"Asking around about the manuscripts’ destruction, however, I heard different rumors. Find Abdel Kader Haidara, people told me. He could tell you more about what happened. So, in Bamako, Mali’s capital 400 miles to the south, I visited Haidara, an unassuming man with a shy smile, a neatly groomed mustache, and a healthy paunch under the flowing robes traditional to Malian men. Sitting cross-legged on the floor of the modest apartment where he now lives, Haidara told me the improbable story of what actually happened to Timbuktu’s manuscripts. 'It was only a matter of time before the Islamists found them,' he said matter-of-factly, passing dark worry beads between his fingers. 'I had to get them out.' . . .

"As the militias poured into his city, Haidara knew he had to do something to protect the approximately 300,000 manuscripts in different libraries and homes in and around Timbuktu. Haidara had spent years traveling around the country negotiating with Mali’s ancient families to assemble thousands of texts for the Ahmed Baba Institute, which was founded in 1973 as the city’s first official preservation organization. 'When I thought of something happening to the manuscripts, I couldn’t sleep,' he told me later.

"The initial wave of invaders were secular Tuareg, but quickly the Islamist militia Ansar Dine asserted control, imposing a harsh regime of sharia in Timbuktu and other northern cities. The Islamists didn’t know, at first, about the manuscripts. But their indiscriminate cruelty and their tight-fisted control over the city meant that the texts had to be hidden—and fast. Haidara thought the manuscripts would be most secure in the homes of Timbuktu’s old families, where, after all, they had been protected for centuries. He assembled a small army of custodians, archivists, tour guides, secretaries, and other library employees, as well as his own brothers and cousins and other men from the manuscript-holding families, and began organizing an evacuation plan.

"Starting in early May, every morning before sunrise, while the militants were still asleep, Haidara and his men would walk to the city’s libraries and lock themselves inside. Until the heat cleared the streets in the afternoon, the men would find their way through the darkened buildings and wrap the fragile manuscripts in soft cloths. They would then pack them into metal lockers roughly the size of large suitcases, as many as 300 in each. At night, they’d sneak back to the libraries, traveling by foot to avoid checkpoints on the road, pick up the lockers, and carry them, swathed in blankets, to the homes of dozens of the city’s old families. The entire operation took nearly two months, but by July, they had stowed 1,700 lockers in basements and hideaways around the city. And they did it just in time, because not long after, the militants moved into the Ahmed Baba Institute, using its elegant rooms to store canned vegetables and bags of white rice. Haidara fled to Bamako, hoping the Islamists’ ignorance about the texts would keep them safe. . . . "

View Map + Bookmark Entry

On the Twentieth Anniversary CERN Restores the First Website April 30, 2013

On April 30, 1993 CERN, Geneva, Switzerland, published documents which released the World Wide Web software into the public domain.

"To mark the [twentieth] anniversary of the publication of the document that made web technology free for everyone to use, CERN is starting a project to restore the first website and to preserve the digital assets that are associated with the birth of the web. To learn more about the project and the first website, visit http://first-website.web.cern.ch"

"This project aims to preserve some of the digital assets that are associated with the birth of the web. For a start we would like to restore the first URL - put back the files that were there at their earliest possible iterations. Then we will look at the first web servers at CERN and see what assets from them we can preserve and share. We will also sift through documentation and try to restore machine names and IP addresses to their original state. Beyond this we want to make http://info.cern.ch - the first web address - a destination that reflects the story of the beginnings of the web for the benefit of future generations."

View Map + Bookmark Entry

The World's Smallest Movie April 30, 2013

Screen shot from world's smallest movie: "A Boy and His Atom," by IBM.

On April 30, 2013 scientists at IBM Almaden Research Center, San Jose, California unveiled and mounted on YouTube what they called "the world's smallest movie," which tracks the movement of atoms magnified 100 million times. When I viewed the motion picture on the morning of May 1, 2013 it had already been viewed 84,000 times.

The video, A Boy and his Atom, depicts a boy named Atom who befriends a single atom and follows him on a journey of dancing and bouncing that helps explain the science behind data storage. Using techniques it honed after years of researching atomic data storage, IBM created 250 stop-motion frames depicting a boy playing with his (pet? toy?) atom.

To manipulate single atoms in this way IBM used its two-ton scanning-tunnelling microscope, which operates at minus 450 degrees Fahrenheit. The microscope moved a "super-sharp" needle to within 1 nanometer of a copper surface, which then could attract and physically move each atom, one by one.  

"Capturing, positioning and shaping atoms to create an original motion picture on the atomic-level is a precise science and entirely novel," said Andreas Heinrich, a scientist at IBM Research" (http://news.discovery.com/tech/nanotechnology/atom-stars-worlds-smallest-movie-130501.htm, accessed 05-01-2013).

Along with the world's smallest movie, IBM also posted a highly informative documentary on the science and technology involved in making the movie entitled Moving Atoms: Making the World's Smallest Movie.

View Map + Bookmark Entry

Flash Marketing of E-Books May 2013

In 2013 Flash Marketing was especially effective in selling e-books:

"One Sunday this month, the crime thriller 'Gone, Baby, Gone,' by Dennis Lehane, sold 23 e-book copies, a typically tiny number for a book that was originally published in 1998 but has faded into obscurity.

"The next day, boom: it sold 13,071 copies.  

" 'Gone, Baby, Gone' had been designated as a Kindle Daily Deal on Amazon, and hundreds of thousands of readers had received an e-mail notifying them of a 24-hour price cut, to $1.99 from $6.99. The instant bargain lit a fire under a dormant title.

"Flash sales like that one have taken hold in the book business, a concept popularized by the designer fashion site Gilt.com. Consumers accustomed to snapping up instant deals for items like vintage glassware on One Kings Lane or baby clothes on Zulily are now buying books the same way — and helping older books soar from the backlist to the best-seller list.  

“ 'It’s the Groupon of books,' said Dominique Raccah, the publisher of Sourcebooks. 'For the consumer, it’s new, it’s interesting. It’s a deal and there isn’t much risk. And it works.'

"Finding a book used to mean scouring the shelves at a bookstore, asking a bookseller for guidance or relying on recommendations from friends.  

"But bookstores are dwindling, leaving publishers with a deep worry about the future of the business: with fewer brick-and-mortar options, how will readers discover books?  

"One-day discounts are part of the answer. Promotions like the Kindle Daily Deal from Amazon and the Nook Daily Find from Barnes & Noble have produced extraordinary sales bumps for e-books, the kind that usually happen as a result of glowing book reviews or an author’s prominent television appearances.  

"Web sites like BookBub.com, founded last year, track and aggregate bargain-basement deals on e-books, alerting consumers about temporary discounts from retailers like Amazon, Apple, Kobo and Barnes & Noble.  'It makes it almost irresistible,' said Liz Perl, Simon & Schuster’s senior vice president for marketing. 'We’re lowering the bar for you to sample somebody new.'

"E-books are especially ripe for price experimentation. Without the list price stamped on the flap like their print counterparts, e-books have freed publishers to mix up prices and change them frequently. Some newly released e-books cost $14.99, others $9.99 and still others $1.99.  

"Consumers are flocking to flash sales, said Russ Grandinetti, Amazon’s vice president for Kindle content, because the deals whittle down the vast number of choices for reading and other forms of entertainment.

" 'In a world of abundance and lots of choice, how do we help people cut through?' Mr. Grandinetti said. 'People are looking for ways to offer their authors a megaphone, and we’re looking to build more megaphones.'

"Mr. Grandinetti said one book, '1,000 Recordings to Hear Before You Die,' was selling, on average, less than one e-book a day on Amazon. After it was listed as a Kindle Daily Deal last year, it sold 10,000 copies in less than 24 hours.  

"Some titles have tripled that number: on a single day in December, nearly 30,000 people snapped up digital copies of “Under the Dome,” by Stephen King, a novel originally published in 2009 by Scribner. For publishers and authors, having a book chosen by a retailer as a daily deal can be like winning the lottery, an instant windfall of sales and exposure" (http://www.nytimes.com/2013/05/27/business/media/daily-deals-propel-older-e-books-to-popularity.html?hp, accessed 05-27-2013).

View Map + Bookmark Entry

Using 100 Linked Computers and Artificial Intelligence to Re-Assemble Fragments from the Cairo Genizah May 2013

For years I have followed computer applications in the humanities. Some, such as From Cave Paintings to the Internet, are on a small personal scale. Others involve  enormous corpora of data, as in computational linguistics, where larger seems always to be better.

The project called "Re-joining the Cairo Genizah", a joint venture of Genazim, The Friedberg Genizah Project, founded in 1999 in Toronto, Canada, and The Blavatnik School of Computer Science at Tel-Aviv University, promises to be one of the most productive large-scale projects currently underway. Because about 320,000 pages and parts of pages from the Genizah — in Hebrew, Aramaic, and Judeo-Arabic (Arabic transliterated into Hebrew letters) — are scattered across 67 libraries and private collections around the world, only a fraction of them have been collated and cataloged. Though approximately 200 books had been published on the Genizah manuscripts by 2013, perhaps only 4,000 of the manuscripts had been pieced together, through a painstaking, expensive, exclusive process that relied heavily on luck.

In 2013 the Genazim project was working to collate and piece together as many of these fragments as current computing technology could re-assemble:

"First there was a computerized inventory of 301,000 fragments, some as small as an inch. Next came 450,000 high-quality photographs, on blue backgrounds to highlight visual cues, and a Web site where researchers can browse, compare, and consult thousands of bibliographic citations of published material.  

"The latest experiment involves more than 100 linked computers located in a basement room at Tel Aviv University here, cooled by standup fans. They are analyzing 500 visual cues for each of 157,514 fragments, to check a total of 12,405,251,341 possible pairings. The process began May 16 and should be done around June 25, according to an estimate on the project’s Web site.  
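The pairing count quoted above is not arbitrary: checking every fragment against every other fragment means examining n choose 2 unordered pairs, and for 157,514 fragments that formula yields exactly the figure in the article. A quick verification:

```python
from math import comb

# The article's 12,405,251,341 possible pairings is exactly the number
# of unordered pairs among 157,514 fragments: n choose 2 = n*(n-1)/2.
n = 157_514
pairings = comb(n, 2)
print(pairings)  # 12405251341
```

This also shows why the computation needed more than 100 linked machines: with 500 visual cues compared per pairing, the workload runs into the trillions of individual comparisons.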

"Yaacov Choueka, a retired professor of computer science who runs the Friedberg-financed Genazim project in Jerusalem, said the goals are not only to democratize access to the documents and speed up the elusive challenge of joining fragments, but to harness the computer’s ability to pose new research questions. . . .

"Another developing technology is a 'jigsaw puzzle' feature, with touch-screen technology that lets users enlarge, turn and skew fragments to see if they fit together. Professor Choueka, who was born in Cairo in 1936, imagines that someday soon such screens will be available alongside every genizah collection. And why not a genizah-jigsaw app for smartphones?

“ 'The thing it really makes possible is people from all walks of life, in academia and out, to look at unpublished material,' said Ben Outhwaite, head of the Genizah Research Unit at Cambridge University, home to 60 percent of the fragments. 'No longer are we going to see a few great scholarly names hoarding particular parts of the genizah and have to wait 20 years for their definitive publication. Now everyone can dive in.'

"What they will find goes far beyond Judaica. . . . Marina Rustow, a historian at Johns Hopkins University, said about 15,000 genizah fragments deal with everyday, nonreligious matters, most of them dated 950 to 1250. From these, she said, scholars learned that Cairenes imported sheep cheese from Sicily — it was deemed kosher — and filled containers at the bazaar with warm food in an early version of takeout" (http://www.nytimes.com/2013/05/27/world/middleeast/computers-piecing-together-jigsaw-of-jewish-lore.html?pagewanted=2&hp, accessed 05-27-2013)

View Map + Bookmark Entry

The First 3D Printed Bionic Organ: An Ear May 1, 2013

On May 1, 2013 Manu S. Mannoor and Ziwen Jiang of the Department of Mechanical and Aerospace Engineering at Princeton, Teena James from the Department of Chemical and Biomolecular Engineering at Johns Hopkins, and others published a letter entitled "3D Printed Bionic Ears" in Nano Letters, a journal of the American Chemical Society. In it they described and illustrated the first 3D printed bionic organ: an ear.

"Using 3D printers to create biological structures has become widespread. Printing electronics has made similar advances, particularly for low-cost, low-power disposable items. The first successful combination of these two technologies has recently been reported by a group of researchers at Princeton. They described their methods in a recent issue of ACS NANO Letters. They claim that their new device can receive a wide range of frequencies using a coiled antenna printed with silver nanoparticles. Interfacing their device to actual nerve is the next obvious step, begging the question — can it actually hear?  

"The Princeton researchers previously developed a tattoo composed of a sensor and an antenna that could be fixed to the surface of a tooth. It was made from a combination of silk, gold, and graphene, and had the ability to detect small amounts of bacteria. Building on their knowledge, that team joined up with researchers at Johns Hopkins to build the electronic ear. Their 3D printer combined calf cells with a hydrogel matrix material to form the ear cartilage, and silver to form the embedded antenna coil.

"In testing, they were able to pick up radiowaves in stereo using complimentary left and right side ears. Later on they hope to be able to detect acoustic energy directly using other built-in sensors. There are many ways this might be accomplished, the trick is to find a pressure-sensitive material that can be easily printed. Other researchers have used 3D printing of a material called carbomorph to create piezoresistive sensors that change resistance when bent or stressed. These researchers have also been able to print capacitive button sensors to measure changes in capacitance, and even connectors for hooking things together.  

"Printing biological structures that will be stable over time is a tricky business. The first stable 3D-printed ear was achieved not too long ago by researchers at Cornell using a very similar method. Since then, advances in bioprinting have progressed to ever smaller scales, culminating recently with a technique called 3D microdroplet printing. Using synthetic cell microdroplets, researchers could lay down geometrically precise tissues composed of human stem cells. These droplets could then undergo secondary developmental changes to their structure.  

"The bionic ear is a long way from something that might be used in humans, if that is even the intent of the authors. Successful printing of organs and tissues larger than just a cartilaginous ear will require supporting elements for bloodflow and nervous enervation. A test device for printed tissues and organs that might include these essential primitives will undoubtedly be needed soon. It may eventually come to resemble some kind of living proto-humanoid machine — and would probably be a little creepy-looking. However, asking lab animals to shoulder our test burden, may hopefully soon no longer be necessary" (http://www.extremetech.com/extreme/154893-researchers-create-worlds-first-3d-printed-bionic-organ, accessed 05-27-2013).

View Map + Bookmark Entry

The First NeuroGaming Conference Takes Place May 1 – May 2, 2013

On May 1-2, 2013 the first NeuroGaming Conference and Expo took place at the YetiZen Innovation Lab, 540 Howard St., San Francisco. It was organized by Zack Lynch, founder of the Neurotechnology Industry Organization. Three hundred people attended.

View Map + Bookmark Entry

E-Books Account for About 20% of Trade Book Sales May 15, 2013

According to BookStats, the Center for Publishing Market Data:

"eBooks are now fully embedded in the format infrastructure of Trade book publishing. The consistent growth of eBooks demonstrates that publishers have successfully evolved the technology environment for their content – more so than other historically print-based content industries. eBooks grew 45% since 2011 and now constitute 20% of the Trade market, playing an integral role in 2012 Trade revenue. The most pivotal driver of eBooks remains Adult Fiction, with Children’s/Young Adult also showing strong numbers" (http://bookstats.org/pdf/BookStats-Press-Release-2013-highlights.pdf, accessed 11-08-2013)

View Map + Bookmark Entry

The Size of the Digital Universe in 2013 and Prediction of its Growth Rate June 2013

"Because of smartphones, tablets, social media sites, e-mail and other forms of digital communications, the world creates 2.5 quintillion bytes of new data daily, according to I.B.M.

"The company estimates that 90 percent of the data that now exists in the world has been created in just the last two years. From now until 2020, the digital universe is expected to double every two years, according to a study by the International Data Corporation" (http://www.nytimes.com/2013/06/09/us/revelations-give-look-at-spy-agencys-wider-reach.html?hpw, accessed 06-08-2013).
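IDC's projection implies simple exponential growth: doubling every two years is a multiplicative factor of 2 raised to (years / 2), so over the seven years from 2013 to 2020 the digital universe would grow roughly elevenfold. A sketch of the arithmetic:

```python
# Doubling every two years is exponential growth with a factor of
# 2 ** (years / doubling_period). Over IDC's 2013-2020 window
# (7 years) that is 2 ** 3.5, roughly an 11-fold increase.
def growth_factor(years, doubling_period_years=2):
    return 2 ** (years / doubling_period_years)

print(round(growth_factor(2020 - 2013), 1))  # 11.3
```

The same function also recovers the baseline claim in the quote: two years at this rate gives a factor of exactly 2.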

View Map + Bookmark Entry

The NSA Mines Metadata Rather than the Content of Telecommunication June 2013

"The [National Security Agency (NSA)] agency’s ability to efficiently mine metadata, data about who is calling or e-mailing, has made wiretapping and eavesdropping on communications far less vital, according to data experts. That access to data from companies that Americans depend on daily raises troubling questions about privacy and civil liberties that officials in Washington, insistent on near-total secrecy, have yet to address.

“ 'American laws and American policy view the content of communications as the most private and the most valuable, but that is backwards today,' said Marc Rotenberg, the executive director of the Electronic Privacy Information Center, a Washington group. 'The information associated with communications today is often more significant than the communications itself, and the people who do the data mining know that.'

"In the 1960s, when the N.S.A. successfully intercepted the primitive car phones used by Soviet leaders driving around Moscow in their Zil limousines, there was no chance the agency would accidentally pick up Americans. Today, if it is scanning for a foreign politician’s Gmail account or hunting for the cellphone number of someone suspected of being a terrorist, the possibilities for what N.S.A. calls 'incidental' collection of Americans are far greater.

"United States laws restrict wiretapping and eavesdropping on the actual content of the communications of American citizens but offer very little protection to the digital data thrown off by the telephone when a call is made. And they offer virtually no protection to other forms of non-telephone-related data like credit card transactions"(http://www.nytimes.com/2013/06/09/us/revelations-give-look-at-spy-agencys-wider-reach.html?hp, accessed 06-09-2013).

View Map + Bookmark Entry

Launching of "Founders Online" June 13, 2013

On June 13, 2013 the National Archives issued the beta release of Founders Online, a database consisting of over 119,000 searchable documents, fully annotated, representing the correspondence and other writings of six major shapers of the United States: George Washington, Benjamin Franklin, John Adams (and family), Thomas Jefferson, Alexander Hamilton, and James Madison.

View Map + Bookmark Entry

The U.S. Supreme Court Rules that Genes Cannot be Patented June 13, 2013

On June 13, 2013 the US Supreme Court in Association for Molecular Pathology et al v. Myriad Genetics, unanimously struck down the patents held by Myriad Genetics of Salt Lake City, Utah, on the DNA comprising BRCA1 and BRCA2. In their abnormal forms these two genes dispose women to a dramatically heightened risk of breast and/or ovarian cancer.  Myriad Genetics had located the two genes, extracted them from the chromosomes housing them, and had obtained the patents on the genes once they were isolated from the human body. 

"The patents controlled by Myriad entitled the company to exclude all others from using the isolated DNA in breast cancer research, diagnostics, and treatment. The plaintiffs—who originally included biomedical scientists and clinicians, advocates for women’s health, and several women with or at risk for breast cancer—held that Myriad’s enforcement of its patents interfered with the progress of science and the delivery of medical services. They contended that genes, even if isolated, were legally ineligible for patents and that well-established tenets of patent law precluded the grant to any person or institution of a monopoly over a substance so essential to life, health, and science as human DNA" (Kevles, Daniel J. "The Genes you Can't Patent," New York Review of Books, September 26, 2013).

View Map + Bookmark Entry

Amazon Sells More than 25% of New Books in the U.S. July 5, 2013

According to an article by David Streitfeld in the July 5, 2013 issue of The New York Times, Amazon.com now sells

"about one in four new books, and the vast number of independent sellers on its site increases its market share even more. It owns as a separate entity the largest secondhand book network, Abebooks. And of course it has a majority of the e-book market."

The main issue raised in Streitfeld's article, entitled "The Price of Amazon," was Amazon's ability to control the prices of new books and ebooks.

View Map + Bookmark Entry

The History of Typography in Stop-Motion Animation July 8, 2013

On July 8, 2013 Ivan Kander of TheAtlantic.com published an interview with Ben Barrett-Forrest of Forrest Media of Whitehorse, Yukon and Hamilton, Ontario, including Barrett-Forrest's stop-motion video:

"Yukon-based designer Ben Barrett-Forrest has crafted this charming stop-motion history lesson to help you get up to speed. Built with 2454 photographs, 291 letters, and 140 hours of his life, Barrett-Forrest’s animated short is a delight. As he guides us from the lowly beginnings of Gutenberg’s printing press, all the way to the computer age, it becomes apparent that the art of type is a corollary for history. Like architecture and fashion, typography is a reflection of the world in which it’s created. Barrett-Forrest explains his interest in type and the genesis of the project in an interview below."

View Map + Bookmark Entry

Apple Illegally Conspired With Publishers to Try to Raise the Prices of eBooks July 10, 2013

Judge Denise L. Cote of the United States District Court, Southern District of New York, ruled on July 10, 2013 that Apple had illegally conspired with five of the six biggest book publishers to try to raise prices in the e-book market. This verdict, which was immediately perceived to benefit Amazon.com, was expected to cause further consolidation in the publishing industry.

"The Apple case, which was brought by the Justice Department, will have little immediate impact on the selling of books. The publishers settled long ago, protesting they had done nothing wrong but saying they could not afford to fight the government. But it might be a long time before they try to take charge of their fate again in such a bold fashion. Drawing the attention of the government once was bad enough; twice could be a disaster.  

" 'The Department of Justice has unwittingly caused further consolidation in the industry at a time when consolidation is not necessarily a good thing,' said Mark Coker, the chief executive of Smashwords, an e-book distributor. 'If you want a vibrant ecosystem of multiple publishers, multiple publishing methods and multiple successful retailers in 5, 20 or 50 years, we took a step backwards this week.'

"Some in publishing suspected that Amazon had prompted the government to file its suit. The retailer has denied it, but it still emerged the big winner. While Apple will be punished — damages are yet to be decided — and the publishers were chastened, Amazon is left free to exert its dominance over e-books — even as it gains market share with physical books. The retailer declined to comment on Wednesday" (http://www.nytimes.com/2013/07/11/business/e-book-ruling-gives-amazon-an-advantage.html?hp, accessed 07-11-2013).

View Map + Bookmark Entry

Rare Comic Books Unwittingly Destroyed to Make Papier Mache Sculpture July 10, 2013

An aspect of the digital revolution that continues to fascinate me is the very high prices paid for rare comic books, which were, incidentally, usually printed on poor quality paper and read out of existence, so that even if their initial printing was large, their survival rate in fine condition is very low. Nevertheless, not everyone appreciates their value. On July 10, 2013 web.orange.co.uk reported that English artist Andrew Vickers used discarded comic books to make a life-sized papier-mâché sculpture of a man that he called Paperboy.

"However, after the piece went on display at a gallery in Sheffield it was spotted by comic book expert Steve Eyre.

"It was while examining the piece that he realised the pasted sections were from classic Marvel Comics books.

"Indeed, one of the comics used was a rare 1963 first edition of The Avengers, a copy of which Mr Eyre also owns, worth around £10,000.

"It means that it would have been cheaper for Mr Vickers to create the sculpture out of Italian marble rather than the comic books.

"The artist says he never imagined the comics had any value when he retrieved them from the skip and added: 'There's no point crying over spilt milk' " (http://web.orange.co.uk/article/quirkies/Artist_turned_rare_comics_into_papier_mache, accessed 07-11-2013).

View Map + Bookmark Entry

The First Fully Online MIDS Degree Program July 17, 2013

Responding to the national shortage of data scientists, on July 17, 2013 the University of California, Berkeley's School of Information (I School) announced the launch of the country's first fully online Master of Information and Data Science (MIDS) degree program.

“ 'This new degree program is in response to a dramatically growing need for well-trained big-data professionals who can organize, analyze and interpret the deluge of often messy and unorganized data available from the web, sensor networks, mobile devices and elsewhere,' said AnnaLee Saxenian, dean of the I School. 

"The United States may soon face a shortage of people who can connect the dots using the massive amounts of data critical today in finance, energy, health care and other fields, according to a 2011 McKinsey Institute report.

“ 'These new professionals need an assortment of skills ranging from math, programming, communication to management, statistics, engineering and social sciences, not to mention a deep curiosity and an ability to translate technical jargon into everyday English,' Saxenian added.

"By 2018, the U.S. may face a shortage of up to 190,000 people who have the analytical skills — and another 1.5 million managers and analysts with the know-how — to make wise use of virtual mountain ranges of data for critical decisions in business, energy, intelligence, health care, finance, and other fields, said the McKinsey Institute in the June 2011 report, “'Big data: The next frontier for innovation, competition and productivity.' "

View Map + Bookmark Entry

Sixty Percent of Book Sales, Print & Digital, Now Occur Online July 19, 2013

An article entitled, "Here's How Amazon Self-Destructs," published in Salon.com on July 19, 2013 pointed out that by forcing the closure of most physical bookstores in the United States Amazon had eliminated the main way that readers learn about new books--that is by visiting physical bookshops:

"According to survey research by the Codex Group, roughly 60 percent of book sales — print and digital — now occur online. But buyers first discover their books online only about 17 percent of the time. Internet booksellers specifically, including Amazon, account for just 6 percent of discoveries." 

View Map + Bookmark Entry

Crossbar Memory Can Store a Terabyte on a Postage-Stamp-Sized Chip August 2013

In August 2013 Technologyreview.com announced that a new type of memory chip, developed by Wei Lu of the University of Michigan and under development at Crossbar Inc. in Santa Clara, California, can store a terabyte of information on a postage-stamp-sized, CMOS-compatible chip. Crossbar memory stores data at about 40 times the density of flash memory, and it is faster and more energy efficient.

View Map + Bookmark Entry

Bezos Purchases the Washington Post August 5, 2013

On August 5, 2013 Amazon.com founder Jeffrey Bezos agreed to purchase the Washington Post newspaper for $250 million.

"The Post, like the newspaper industry as a whole, has been beset by a rapid decline in print advertising, a loss of subscribers and challenges in building up online revenue.  

"In a letter to Post employees, Bezos indicated that he wouldn't make radical changes in editorial operations and would continue to emphasize accountability journalism. But he said the paper will need to "invent" and to "experiment," focusing on the Internet and tailored content, to address the changing habits of readers.  

"Bezos, 49, was on nobody's list of likely entrants into print media.

" 'This is the first time a true digital native is buying a newspaper publishing company,' said Alan D. Mutter, a media consultant and former newspaper editor. 'Jeff Bezos has the means, motive and opportunity to re-envision what it means to be a newspaper in the digital era.' 

"Bezos will own the Post outright, buying it with his own money, not Amazon's. By taking it private, he won't be subject to shareholders seeking quick returns.  

"Bezos, a Princeton University graduate, founded Amazon in 1994 as an online book seller. He quickly added other services and built it into the world's biggest online retailer, with $61 billion in sales last year and 97,000 employees worldwide" (http://www.latimes.com/business/la-fi-washington-post-bezos-20130806,0,4515179.story, accessed 08-07-2013).

♦ Shortly after Bezos's purchase, the most meaningful commentary on it that I read was Arianna Huffington's "Bezos, Heraclitus and the Hybrid Future of Journalism," published on August 14, 2013 in The Blog at The Huffington Post. Huffington, of Greek descent, who was influential in transforming journalism by forming and developing The Huffington Post, began her comments with the following paragraphs:

"One of the first people who came to mind when I heard the news last week that Jeff Bezos was buying the Washington Post was a fellow countryman, the Greek philosopher Heraclitus, who around 2,500 years ago said, 'No man ever steps in the same river twice.' Or, as James Fallows put it, the sale was 'one of those episode-that-encapsulates-an-era occurrences.' But as it encapsulates one era that has passed, it also has the potential to expand the era we are in. This combining of the best of traditional media with the boundless potential of digital media represents an amazing opportunity.  

"First, it's an opportunity to further move the conversation away from the future of newspapers to the future of journalism -- in whatever form it's delivered. After all, despite all the dire news about the state of the newspaper industry, we are in something of a golden age of journalism for news consumers. There's no shortage of great journalism being done, and there's no shortage of people hungering for it. And there are many different business models being tried to connect the former with the latter -- and Jeff Bezos will no doubt come up with another.  

"The future will definitely be a hybrid one, combining the best practices of traditional journalism -- fairness, accuracy, storytelling, deep investigations -- with the best tools available to the digital world -- speed, transparency, and, above all, engagement.  

"Though the distinction between new media and old has become largely meaningless, for too long the reaction of much of the old media to the fast-growing digital world was something like the proverbial old man yelling at the new media kids to get off his lawn. Many years were wasted erecting barriers that were never going to stand."

View Map + Bookmark Entry

A New Software Ecosystem to Program SyNAPSE Chips August 8, 2013

On August 8, 2013 Dharmendra S. Modha, senior manager and principal investigator of the Cognitive Computing Group at IBM Almaden Research Center, unveiled a new software ecosystem for programming SyNAPSE chips, which "have an architecture inspired by the function, low power, and compact volume of the brain." 

“ 'We are working to create a FORTRAN for synaptic computing chips. While complementing today’s computers, this will bring forth a fundamentally new technological capability in terms of programming and applying emerging learning systems.'

"To advance and enable this new ecosystem, IBM researchers developed the following breakthroughs that support all aspects of the programming cycle from design through development, debugging, and deployment: 

"-         Simulator: A multi-threaded, massively parallel and highly scalable functional software simulator of a cognitive computing architecture comprising a network of neurosynaptic cores.  

"-         Neuron Model: A simple, digital, highly parameterized spiking neuron model that forms a fundamental information processing unit of brain-like computation and supports a wide range of deterministic and stochastic neural computations, codes, and behaviors. A network of such neurons can sense, remember, and act upon a variety of spatio-temporal, multi-modal environmental stimuli. 

"-         Programming Model: A high-level description of a “program” that is based on composable, reusable building blocks called “corelets.” Each corelet represents a complete blueprint of a network of neurosynaptic cores that specifies a base-level function. Inner workings of a corelet are hidden so that only its external inputs and outputs are exposed to other programmers, who can concentrate on what the corelet does rather than how it does it. Corelets can be combined to produce new corelets that are larger, more complex, or have added functionality. 

"-         Library: A cognitive system store containing designs and implementations of consistent, parameterized, large-scale algorithms and applications that link massively parallel, multi-modal, spatio-temporal sensors and actuators together in real-time. In less than a year, the IBM researchers have designed and stored over 150 corelets in the program library.  

"-         Laboratory: A novel teaching curriculum that spans the architecture, neuron specification, chip simulator, programming language, application library and prototype design models. It also includes an end-to-end software environment that can be used to create corelets, access the library, experiment with a variety of programs on the simulator, connect the simulator inputs/outputs to sensors/actuators, build systems, and visualize/debug the results" (http://www-03.ibm.com/press/us/en/pressrelease/41710.wss, accessed 10-20-2013).
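The corelet programming model described above rests on two ideas: a building block whose internal wiring is hidden behind declared inputs and outputs, and composition of such blocks into larger ones. A loose illustrative sketch of those two ideas, in Python with invented names (this is not IBM's actual corelet API, and ordinary functions stand in for neurosynaptic cores):

```python
# Hypothetical sketch of the "corelet" idea: an encapsulated block that
# exposes only inputs and outputs, and composes into larger blocks.

class Corelet:
    def __init__(self, name, fn):
        self.name = name
        self._fn = fn  # internal wiring, hidden from users of the block

    def __call__(self, inputs):
        # Users see only the external input/output behavior.
        return self._fn(inputs)

    def then(self, other):
        """Compose two corelets into a larger corelet:
        the output of self feeds the input of other."""
        return Corelet(f"{self.name}>{other.name}",
                       lambda xs: other(self(xs)))

# Two small blocks standing in for base-level functions:
edge_detect = Corelet("edge",
                      lambda px: [abs(a - b) for a, b in zip(px, px[1:])])
threshold = Corelet("thresh",
                    lambda sig: [1 if v > 2 else 0 for v in sig])

# Composition yields a new, larger block with the same interface.
pipeline = edge_detect.then(threshold)
print(pipeline([0, 1, 5, 5, 0]))  # → [0, 1, 0, 1]
```

The point of the sketch is the library aspect: because `pipeline` has the same interface as its parts, it can itself be stored and reused as a building block in still larger designs.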

View Map + Bookmark Entry

The First Master's Degree Offered through Massive Open Online Courses by a Major University August 17, 2013

On August 17, 2013 The New York Times reported that Georgia Tech, which operates one of the country's top computer science programs, planned to offer, beginning in January 2014, a master's degree in computer science delivered through massive open online courses (MOOCs) for $6,600 — far less than the $45,000 on-campus price.  

"Zvi Galil, the dean of the university’s College of Computing, expects that in the coming years, the program could attract up to 10,000 students annually, many from outside the United States and some who would not complete the full master’s degree. 'Online, there’s no visa problem,' he said.  

"The program rests on an unusual partnership forged by Dr. Galil and Sebastian Thrun, a founder of Udacity, a Silicon Valley provider of the open online courses.  

"Although it is just one degree at one university, the prospect of a prestigious low-cost degree program has generated great interest. Some educators think the leap from individual noncredit courses to full degree programs could signal the next phase in the evolution of MOOCs — and bring real change to higher education."

"From their start two years ago, when a free artificial intelligence course from Stanford enrolled 170,000 students, free massive open online courses, or MOOCs, have drawn millions and yielded results like the perfect scores of Battushig, a 15-year-old Mongolian boy, in a tough electronics course offered by the Massachusetts Institute of Technology" (http://www.nytimes.com/2013/08/18/education/masters-degree-is-new-frontier-of-study-online.html?hp, accessed 08-18-2013).

View Map + Bookmark Entry

Robots Helping Workers on Automobile Assembly Lines September 2013

By 2013 robots had evolved to the point where it was considered safe for humans to work alongside them on assembly lines. An example cited by MIT Technology Review was BMW's plant in Greer, South Carolina, where robots made by Universal Robots of Odense, Denmark, were helping workers perform final door assembly: 

"The robots are working with a door sealant that keeps sound and water out of the car, and is applied before the door casing is attached. 'It’s pretty heavy work because you have to roll this glue line to the door,' says Stefan Bartscher, head of innovation at BMW. 'If you do that several times a day, it’s like playing a Wimbledon match.'

"According to Bartscher, final assembly robots will not replace human workers; they will extend their careers. 'Our workers are getting older,' Bartscher says. 'The retirement age in Germany just rose from 65 to 67, and I’m pretty sure when I retire it’ll be 72 or something. We actually need something to compensate and keep our workforce healthy, and keep them in labor for a long time. We want to get the robots to support the humans.' In recent years, robot manufacturers have realized that with the right software and safety controls, their products could be made to work in close proximity to humans. As a result, a new breed of more capable workplace robot is rapidly appearing" (http://www.technologyreview.com/news/518661/smart-robots-can-now-work-right-next-to-auto-workers/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20130917, accessed 09-17-2013).

View Map + Bookmark Entry

New Paradigm for a Regional Library September 3, 2013

On September 3, 2013 the Library of Birmingham, designed by the Dutch architect Francine Houben, opened in the city of Birmingham, England. Upon its opening the post-modernist high tech structure was described as the largest public library in the United Kingdom, the largest public cultural space in Europe, and the largest regional library in Europe.

In November 2013 the library of Birmingham's website had this to say about the new library:

"The Library of Birmingham provides a showcase for the city's internationally important collections of archives, photography and rare books. New facilities including state-of-the-art gallery space open up public access to the collections for the first time. It is also home to a BFI Mediatheque, providing free access to the National Film Archive. Other facilities include a new flexible studio theatre, an outdoor amphitheatre and other informal performance spaces, a recording studio (opening in November 2013), and dedicated spaces for children and teenagers. By harnessing new technology, everyone from Birmingham to Beijing, Bangalore and beyond can access the Library of Birmingham's world-class resources. More than three million visitors are expected each year, and millions more online.
"Described by its architect Francine Houben as a 'people's palace', the Library of Birmingham is highly accessible and family-friendly. It will deliver excellent services through collaboration between the library, The Birmingham Repertory Theatre, partners and communities. It will provide a dynamic mix of events, activities and performance together with outstanding resources, exhibitions and access to expert help for learning, information and culture. As a centre of excellence for literacy, research, study, skills development, entrepreneurship, creative expression, health information and much more, the Library of Birmingham can change people's lives" (http://www.libraryofbirmingham.com/article/About, accessed 11-08-2013).

View Map + Bookmark Entry

1.2 Billion Faces of Facebook Organized on one Page September 26, 2013

On September 26, 2013 designboom.com reported that communication designer Natalia Rojas of Miami, Florida, found a way to bring all 1,258,244,934 (and counting) Facebook users together from around the world into one page with www.app.thefacesoffacebook.com. The interactive project organized every profile picture on the social media website into chronological registration order, which when combined, appeared as an unorganized 1.2 billion person pixelated mosaic. Developed using a combination of different coding programs, the visual experiment showcased a diverse community of users, all in one screen capture.

View Map + Bookmark Entry

Teaching Keyboard Skills in Kindergarten October 2013

By October 2013, in the forty-five states, the District of Columbia, and four U.S. territories that had adopted the Common Core State Standards Initiative, children as young as kindergarten age were learning to use a keyboard—a radical change from the traditional order of teaching handwriting long before typing.      

"A skill that has been taught for generations in middle or high school — first on manual typewriters, then electric word processors and finally on computer keyboards — is now becoming a staple of elementary schools. Educators around the country are rushing to teach typing to children who have barely mastered printing by hand.

"The Common Core standards make frequent references to technology skills, stating that students in every grade should be able to use the Internet for research and use digital tools in their schoolwork to incorporate video, sound and images with writing.

"But the standardized tests linked to the Common Core make those expectations crystal clear because the exams — which will be given in 2014-2015 — require students to be able to manipulate a mouse; click, drag and type answers on a keyboard; and, starting in third grade, write online. Fourteen states have agreed to field-test the exams in the spring to help those creating the tests iron out the wrinkles and make improvements" (http://wapo.st/1ci9YSR, accessed 10-14-2013).

View Map + Bookmark Entry

A Mobile, One-Armed Robot that Costs $35,000 October 2013

In October 2013 Unbounded Robotics, a spinoff formed in January 2013 from Willow Garage in Menlo Park, announced that it will ship the UBR-1 robot in mid-2014 for the low price of $35,000. According to Melonee Wise, the CEO of Unbounded Robotics, the UBR-1 will be "the Model T of robots," suggesting that it will be the first mass-produced and widely sold robot. 

"The UBR1 makes use of the open-source Robot Operating System originally developed at Willow and can be described as a simpler version of Willow’s flagship PR2, a large mobile robot with two arms that sold to research labs for $400,000. Although the PR2 became the basis for projects that pushed the boundaries of robot autonomy (see “Robots That Learn From People” and “TR35: Leila Takayama”), the high price meant that only a handful were sold. Wise says that just 43 PR2s exist in labs around the world today"  (Tom Simonite, "Why This Might Be the Model T of Workplace Robots," Technology Review, October 21, 2013, accessed 10-22-2013)

View Map + Bookmark Entry

Use of the Internet by Part-Time Business Owners in 2013 October 10, 2013

Research released on October 10, 2013 by The Internet Association showed that nine out of ten part-time business owners relied on the Internet to conduct their business.  According to their report Internet enabled part-time businesses employed 6.6 million people and contributed $141 billion to the US Gross National Product (GNP). 

View Map + Bookmark Entry

"As We May Type": Authoring Tools as "Machines for New Thought." October 16, 2013

On October 16, 2013 writer and computer programmer Paul Ford published in MIT Technology Review an article entitled "As We May Type." The subheading of this article was "New outliners and authoring tools are machines for new thoughts." The article discussed how new outlining and writing tools affect the human creative writing process. From it I quote a couple of paragraphs:

"Outlines are a kind of mental tree. Say level 1 is a line of text. Then level 1.1 would be subordinate to 1, and 1.1.1 subordinate to 1.1; 1.2, like 1.1, is subordinate to the first line. And so forth. Of course, outlines existed before software. (The philosopher Ludwig Wittgenstein composed an entire book, the Tractatus Logico-Philosophicus, as an outline.)

"But with an outlining program, you don’t need a clumsy numbering system, because the computer does the bookkeeping for you. You can build hierarchies, ideas branching off ideas, with words like leaves. You can hide parts of outlines as you’re working, to keep the document manageable. And on a computer, any element can be exported to another program for another use. Items can become sections in a PhD thesis—or slides in a presentation, or blog posts. Or you could take your outline tree and drop it inside another outline, building a forest."
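Ford's point that "the computer does the bookkeeping for you" can be sketched in a few lines of Python. The class and method names here are invented for illustration: each item stores only its text and its children, and the section numbers (1, 1.1, 1.1.1, ...) are derived from the tree at display time rather than stored by hand:

```python
# Minimal outline-as-tree sketch: numbering is computed, not stored,
# so items can be moved or nested without renumbering anything.

class OutlineItem:
    def __init__(self, text):
        self.text = text
        self.children = []

    def add(self, text):
        """Append a subordinate item and return it for further nesting."""
        child = OutlineItem(text)
        self.children.append(child)
        return child

    def render(self, prefix=""):
        """Return the outline as lines with computed section numbers."""
        lines = []
        for i, child in enumerate(self.children, start=1):
            number = f"{prefix}{i}"
            lines.append(f"{number} {child.text}")
            lines.extend(child.render(number + "."))
        return lines

root = OutlineItem("document")
intro = root.add("Introduction")
intro.add("Background")
intro.add("Scope")
root.add("Conclusion")
print("\n".join(root.render()))
# 1 Introduction
# 1.1 Background
# 1.2 Scope
# 2 Conclusion
```

Because the numbers live in the structure rather than the text, dragging a subtree elsewhere (the "forest" Ford describes) renumbers everything automatically on the next render.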


View Map + Bookmark Entry

Launch of "The Zuckerberg Files" October 25, 2013

On October 25, 2013 Michael Zimmer of the School of Information Studies at the University of Wisconsin-Milwaukee, and Director of the Center for Information Policy Research, launched The Zuckerberg Files, an online archive that attempted to collect every public utterance made by Mark Zuckerberg, the founder and chief executive of Facebook, in an effort to analyze Zuckerberg's evolving response to privacy issues.  The archive included blog posts, magazine interviews, TV appearances, letters to shareholders, public presentations, and other events, from a 2004 interview with The Harvard Crimson to the present.  

View Map + Bookmark Entry

A Multi-Media Version of an Article in the New York Times Magazine October 27, 2013

On October 25, 2013 I enjoyed reading on The New York Times website an illustrated article from the October 27, 2013 issue of the New York Times Magazine. The article by Jeff Himmelman, concerning the politics of Ayungin Shoal in the Spratly Islands, located in a remote corner of the South China Sea 105 nautical miles from the Philippines, was entitled "A Game of Shark and Minnow." It was illustrated with photographs and video by Ashley Gilbertson. The presentation involved a fascinating combination of full-screen videos and sound captioned with text, interspersed with the text of the story. Though I assume that this was not the first online magazine article to incorporate these features, it was the first magazine article that I read on the web to use them in this manner. In my opinion the presentation was remarkably effective.

View Map + Bookmark Entry

The Market for Very Young Children's Books Has Grown, Partly in Reaction to eBooks October 27, 2013

"While the publishing industry is still scraping through the digital revolution, children’s books have remained relatively untouched. Most parents are sticking to print for their young children even when there are e-book versions or apps available, and videos like the once ubiquitous “Baby Einstein,” founded in 1997 as a fast-track to infant genius, have fallen out of fashion.

"The American Academy of Pediatrics recommends that television should be avoided for children younger than 2 years old, and studies have suggested that babies and toddlers receive much greater benefit from real interactions than from experiences involving video screens.

“ 'There has been a proliferation of focus on early childhood development on the education side,' said John Mendelson, the sales director at Candlewick Press, 'as well as on the retail side.'

"Board books, traditionally for newborns to 3-year-olds, have always been a smaller and somewhat neglected category in the publishing business, compared with the larger and more expensive hardcover picture books designed for children of reading age.

"But board books may be catching up. Libraries that used to shun the genre are now buying them from publishers. Bookstores are making more room for board books on their shelves. And while a board book might have once been too insubstantial a gift to bring to a child’s birthday party, the newer, highly stylized versions (that can run up to $15) would easily pass muster.

“ 'A board book was little more than a teething ring,' said Christopher Franceschelli, who directs Handprint Books, an imprint of Chronicle Books. 'I think as picture books have developed in the last 20 years, parents, librarians, teachers have thought, ‘Why should board books be any less than their older siblings?’ '

"In 2012, Abrams Books, the art-book publisher, created a new imprint, Abrams Appleseed, to focus on books for babies, toddlers and preschoolers. Since then, it has published high-end books like “Pantone: Color Puzzles,” released this month, which uses intricate drawings and puzzle pieces to teach children the differences between colors like peacock blue and nighttime blue" (http://www.nytimes.com/2013/10/27/books/a-library-of-classics-edited-for-the-teething-set.html?hp, accessed 10-27-2013). 

View Map + Bookmark Entry

Zero to Eight: Children's Media Use in America 2013 October 28, 2013

On October 28, 2013 Common Sense Media of San Francisco issued a two-year follow-up to its October 2011 study: Zero to Eight: Children's Media Use in America 2013 by Vicky Rideout. Key findings in the 2013 report were:

"Children’s access to mobile media devices is dramatically higher than it was two years ago.

"Among families with children age 8 and under, there has been a five-fold increase in ownership of tablet devices such as iPads, from 8% of all families in 2011 to 40% in 2013. The percent of children with access to some type of 'smart' mobile device at home (e.g., smartphone, tablet) has jumped from half (52%) to three-quarters (75%) of all children in just two years.

"Almost twice as many children have used mobile media compared to two years ago, and the average amount of time children spend using mobile devices has tripled.

"Seventy-two percent of children age 8 and under have used a mobile device for some type of media activity such as playing games, watching videos, or using apps, up from 38% in 2011. In fact, today, 38% of children under 2 have used a mobile device for media (compared to 10% two years ago). The percent of children who use mobile devices on a daily basis – at least once a day or more – has more than doubled, from 8% to 17%. The amount of time spent using these devices in a typical day has tripled, from an average of :05 a day among all children in 2011 up to :15 a day in 2013. [Throughout the report, times are presented in hours:minutes format. For example, “1:46” indicates one hour and 46 minutes.] The difference in the average time spent with mobile devices is due to two factors: expanded access, and the fact that those who use them do so for longer periods of time. Among those who use a mobile device in a typical day, the average went from :43 in 2011 to 1:07 in 2013."

View Map + Bookmark Entry

Amazon Launches the Kindle MatchBook Service October 29, 2013

On October 29, 2013 Amazon officially launched its Kindle MatchBook service, allowing customers to buy a heavily discounted Kindle copy of physical books they had purchased from Amazon. Prices ranged between free and $2.99. The e-books could be read on Kindle, Android or iOS applications using the free Kindle app.

Amazon said that 70,000 books were enrolled in MatchBook at launch, that more books would be added to the program every day, and that book detail pages would list when specific titles will support MatchBook.

"Amazon combs through your entire order history going all the way back to 1995, so the initial list of ebooks offered to you may be longer than you'd expect. And since they're full-fledged Kindle copies, all of Amazon's signature features including Whispersync and X-Ray are included. To see which of your past purchases are eligible, head to Amazon now" (http://www.theverge.com/2013/10/29/5042058/amazon-launches-matchbook-offering-cheap-digital-copies-of-print-books, accessed 10-29-2013).

View Map + Bookmark Entry

Writers Explain How New Information Technologies Have Changed Writing October 31, 2013

On October 31, 2013 The New York Times published "Writing Bytes," in which numerous writers described the impact of the Internet, cell phones, and other information technologies on their craft:

"The Internet has changed (and keeps changing) how we live today — how we find love, make money, communicate with and mislead one another. Writers in a variety of genres tell us what these new technologies mean for storytelling."

View Map + Bookmark Entry

The First Auction of Internet Domains by a Major Auction House November 1 – November 21, 2013

On October 24, 2013 Heritage Auctions of Dallas, Texas, announced its first auction of Domain Names and Intellectual Properties, conducted by Aron Meystedt of Dallas, owner of the virtual real estate investment firm xf.com, who had been buying, selling, and developing Internet domains since 2009. As far as I was able to determine, this was the first auction of Internet domains by a major auction house. The bidding period for the online auction was November 1 to 21, 2013.

View Map + Bookmark Entry

Monkeys Use Brain-Machine Interface to Move Two Virtual Arms with their Brain Activity November 6, 2013

In a study led by neuroscientist Miguel A. L. Nicolelis and the Nicolelis Lab at Duke University, monkeys learned to control the movement of both arms on an avatar using just their brain activity. The findings, published on November 6, 2013 in Science Translational Medicine, advanced efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients, and raised the hope that patients might eventually be able to use brain-machine interfaces (BMIs) to control two arms. To enable the monkeys to control two virtual arms, researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains, the largest number of neurons recorded and reported to date.

"While the monkeys were moving two hands, the researchers saw distinct patterns of neuronal activity that differed from the activity seen when a monkey moved each hand separately. Through such research on brain–machine interfaces, scientists may not only develop important medical devices for people with movement disorders, but they may also learn about the complex neural circuits that control behavior....

“Simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal population would do when both arms were engaged together in a bimanual task,” said Nicolelis in a released statement. “This finding points to an emergent brain property – a non-linear summation – for when both hands are engaged at once" (www.technologyreview.com/view/521471/monkeys-drive-two-virtual-arms-with-their-thoughts/, accessed 11-09-2013).

P. J. Ifft, S. Shokur, Z. Li, M. A. Lebedev, M. A. L. Nicolelis,"A Brain-Machine Interface Enables Bimanual Arm Movements in Monkeys", Sci. Transl. Med. 5210ra154 (2013).

View Map + Bookmark Entry

Retail E-Commerce Sales Expanded 15 Percent in the U.S. in 2012, Seven Times as Fast as Traditional Retail November 7, 2013

"Why do some stores succeed while others fail? Retailers constantly struggle with this question, battling one another in ways that change with each generation. In the late 1800s, architects ruled. Successful merchants like Marshall Field created palaces of commerce that were so gorgeous shoppers rushed to come inside. In the early 1900s, mail order became the “killer app,” with Sears Roebuck leading the way. Toward the end of the 20th century, ultra-efficient suburban discounters like Target and Walmart conquered all.

"Now the tussles are fiercest in online retailing, where it’s hard to tell if anyone is winning. Retailers as big as Walmart and as small as Tweezerman.com all maintain their own websites, catering to an explosion of customer demand. Retail e-commerce sales expanded 15 percent in the U.S in 2012—seven times as fast as traditional retail. But price competition is relentless, and profit margins are thin to nonexistent. It’s easy to regard this $186 billion market as a poisoned prize: too big to ignore, too treacherous to pursue.

"Even the most successful online retailer, Amazon.com, has a business model that leaves many people scratching their heads. Amazon is on track to ring up $75 billion in worldwide sales this year. Yet it often operates in the red; last quarter, Amazon posted a $41 million loss. Amazon’s founder and chief executive officer, Jeff Bezos, is indifferent to short-term earnings, having once quipped that when the company achieved profitability for a brief stretch in 1995, 'it was probably a mistake' " (http://www.technologyreview.com/news/520801/no-stores-no-salesmen-no-profit-no-problem-for-amazon/ accessed 11-07-2013).

View Map + Bookmark Entry

The U.S. Postal Service Will Deliver Amazon Packages on Sundays November 11, 2013

In a testament to the growing impact of e-commerce, on November 11, 2013 the U.S. Postal Service and Amazon.com announced that the Postal Service will deliver packages in the Los Angeles and New York metropolitan areas on Sundays—a first for the Postal Service, and a service not provided by private delivery companies. According to Amazon's press release, Sunday deliveries will be for Amazon Prime members, "who receive unlimited, free two-day shipping on millions of items." Amazon and the U.S. Postal Service plan to roll out this service to a large portion of the U.S. population in 2014, including Dallas, Houston, New Orleans and Phoenix.

"Getting packages on Sundays normally is expensive for customers. United Parcel Service Inc. doesn't deliver on Sundays, according to a spokeswoman. And FedEx Corp. said Sunday 'is not a regular delivery day,' though limited options are available.

"The deal could be a boon for the postal service, which has been struggling with mounting financial losses and has been pushing to limit general letter mail delivery to five days a week.

"Spokeswoman Sue Brennan said that letter mail volume is declining 'so extremely,' yet package volume is 'increasing in double-digit percentages.'

"The postal service's Sunday package delivery business has been very small, but the arrangement with Amazon for two of the retailer's larger markets, Los Angeles and New York, should boost work considerably.

"To pull off Sunday delivery for Amazon, the postal service plans to use its flexible scheduling of employees, Brennan said. It doesn't plan to add employees, she said" (http://www.latimes.com/business/la-fi-amazon-usps-20131109,0,7390545.story?track=lat-email-topofthetimes#axzz2kLimvcy6, accessed 11-12-2013) 

View Map + Bookmark Entry

Herpes Virus and Cocaine Found on Widely Borrowed Copy of "Fifty Shades of Grey" November 12, 2013

On November 12, 2013 it was reported that Belgian radio broadcaster Stijn van de Voorde asked Jan Tytgat, professor of toxicology and pharmacology at the Catholic University of Leuven, to test the ten most borrowed books at Antwerp public library. Among the findings:

  1. All ten books tested positive for cocaine.
  2. E.L. James's erotic romance novel Fifty Shades of Grey and Pieter Aspe's Tango showed traces of the herpes virus, though in concentrations so low that they did not pose a danger to health. 

Prof. Tytgat commented that "The levels found won't have a pharmacological effect. Your consciousness or behaviour won't change as a result of reading the tomes." However, people tested after reading the book would test positive for cocaine.

Prof Tytgat added that "Today's testing methods are so sensitive that traces of the drug originating from a contaminated book will be found in your hair, blood and urine" (http://www.deredactie.be/cm/vrtnieuws.english/News/131112_50Shades, accessed 11-15-2013).

View Map + Bookmark Entry

Judge Rules that Google Books Does Not Infringe Copyright November 14, 2013

On November 14, 2013 Judge Denny Chin of the United States Court of Appeals for the Second Circuit ruled in the class-action suit of The Authors Guild, Inc...v. Google, Inc. (case 1:05-cv-08136 Document 1088) that Google Book Search does not infringe copyright.

Judge Chin wrote that Google Book Search "advances the progress of the arts and sciences, while maintaining respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders. Indeed, all society benefits.”

" 'Google’s book search is transformative,' he wrote, because 'words in books are being used in a way they have not been used before.' It does not replace books, he wrote, because Google does not allow people to read entire books online. It takes security measures, like not showing one out of every 10 pages in each book, to prevent people from trying to do so" (http://www.nytimes.com/2013/11/15/business/media/judge-sides-with-google-on-book-scanning-suit.html?hpw&rref=technology, accessed 11-15-2013).

Google was, of course, very pleased with the outcome; The Authors Guild disagreed with the judgment.

"Google said, 'As we have long said, Google Books is in compliance with copyright law and acts like a card catalogue for the digital age, giving users the ability to find books to buy or borrow.'

"The Authors Guild, which represents writers, said it would appeal the decision. 'We disagree with and are disappointed by the court's decision today,' Guild Executive Director Paul Aiken said in a written statement. 'Google made unauthorized digital editions of nearly all of the world's valuable copyright-protected literature and profits from displaying those works. In our view, such mass digitization and exploitation far exceeds the bounds of the fair-use defense.'

"Google, which sells ads around its search results, has scanned more than 20 million books since 2004...." (Gershman & Trachtenberg, "Google Books Challenge is Rejected," Wall Street Journal, November 1, 2013). 

View Map + Bookmark Entry

A Genetic Link to Skin Cancer is Found by Data Mining of Patient Records November 24, 2013

In a paper published in Nature Biotechnology on November 24, 2013 thirty-six researchers led by Joshua Denny, associate professor of biomedical informatics and medicine at Vanderbilt University, showed that data mining of electronic patient records is more cost-effective and faster than comparing the genomes of thousands of people with a disorder to the genomes of people who don't have the disorder.

"To identify previously unknown relationships between disease and DNA variants, Denny and colleagues grouped around 15,000 billing codes from medical records into 1,600 disease categories. Then, the researchers looked for associations between disease categories and DNA data available in each record.

"Their biggest new findings all involved skin diseases (just a coincidence, says Josh Denny, the lead author): non melanoma skin cancer and two forms of skin growths called keratosis, one of which is pre-cancerous. The team was able to validate the connection between these conditions and their associated gene variants in other patient data.

"Unlike the standard method of exploring the genetic basis of disease, electronic medical records (EMRs) allows researchers to look for genetic associations of many different diseases at once, which could lead to a better understanding of how some single genes may affect multiple characteristics or conditions. The approach may also be less biased than disease-specific studies.

"The study examined 13,000 EMRs, but in the future, similar studies could benefit from much larger data sets. While not all patient records contain the genetic data needed to drive this kind of research, that is expected to change now that DNA analysis has become faster and more affordable in recent years and more and more companies and hospitals offer genetic analysis as part of medical care. When researchers have millions of EMRs at their finger tips, more subtle and complex effects of genes on disease and health could come to light. For example, it could allow for important studies on the genetics of drug side effects, which can be rare, affecting maybe 1 in 10,000 patients, Denny says" (http://www.technologyreview.com/view/521986/genetic-link-to-skin-cancer-found-in-medical-records/, accessed 11-25-2013).

Denny et al., "Systematic comparison of phenome-wide association study of electronic medical record data and genome-wide association study data," Nature Biotechnology (2013), doi:10.1038/nbt.2749. 
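The phenome-wide approach described above—collapsing thousands of billing codes into broader disease categories and then testing each category for association with a genetic variant—can be sketched in a few lines. Everything below is illustrative: the code-to-phenotype mapping, the cohort, and the variant are all made up, and the study's actual statistical methods are more elaborate than a single odds ratio.

```python
# Illustrative mapping from ICD-style billing codes to broader phenotype
# categories (the study grouped ~15,000 codes into ~1,600 categories).
CODE_TO_PHENOTYPE = {
    "173.0": "non-melanoma skin cancer",
    "173.3": "non-melanoma skin cancer",
    "702.0": "actinic keratosis",
}

def phenotypes(billing_codes):
    """Collapse one patient's billing codes into phenotype categories."""
    return {CODE_TO_PHENOTYPE[c] for c in billing_codes if c in CODE_TO_PHENOTYPE}

def association(records, phenotype):
    """2x2 contingency counts and odds ratio for variant carriers vs. a phenotype.

    records: list of (carries_variant: bool, billing_codes: list[str]).
    Returns ((a, b, c, d), odds_ratio), where a = carriers with the phenotype,
    b = carriers without, c = non-carriers with, d = non-carriers without.
    """
    a = b = c = d = 0
    for carrier, codes in records:
        has = phenotype in phenotypes(codes)
        if carrier and has:
            a += 1
        elif carrier:
            b += 1
        elif has:
            c += 1
        else:
            d += 1
    odds_ratio = (a * d) / (b * c) if b * c else float("inf")
    return (a, b, c, d), odds_ratio

# Tiny synthetic cohort in which carriers of a hypothetical variant are
# enriched for the skin-cancer phenotype ("401.1" maps to no phenotype).
cohort = (
    [(True, ["173.0"])] * 30 + [(True, ["401.1"])] * 70 +
    [(False, ["173.3"])] * 10 + [(False, ["401.1"])] * 90
)
counts, odds = association(cohort, "non-melanoma skin cancer")
print(counts, round(odds, 1))  # (30, 70, 10, 90), odds ratio ≈ 3.9
```

A real analysis would repeat this test across all ~1,600 phenotype categories and many variants, then correct for multiple comparisons before calling any association significant.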

View Map + Bookmark Entry

Mainstream Publisher Simon & Schuster Launches Erotica Social Media Site to Promote Books December 2013

In December 2013 Simon & Schuster UK announced the launch of a genre-specific social media and blog site intended to entice avid readers of its New Adult and Romance titles. Called The Hot Bed, and curated by an in-house team of four Simon & Schuster UK professionals, the site aimed to be the source for news about S&S’s top-selling and most well-loved romance authors, as well as a blog for those authors to make appearances.

"Fans will also get in on the action as they can connect with authors and with one-another via the site. The Twitter and Facebook feeds for the new site will allow fans to engage with the storylines and offer their remarks. The site also offers Tumblr, Pinterest, and YouTube landings.

"As the site continues to grow, new program offerings will be announced, including reader events for fans. The site has also promised future efforts to promote the brand through connectedness with other presumably romance brands."

View Map + Bookmark Entry

Evolution of "Apple" from Primarily a Computer Manufacturer to Primarily a Phone and Tablet Company December 2013

By the end of 2013 Apple, which abandoned the word "computer" in its official name in early 2007, was only incidentally a personal computer manufacturer. In the third quarter of 2013 Mac revenue was $5.6 billion, or just 15% of the company's total revenue. At this point, Apple was primarily a smartphone company. In the same quarter, iPhone revenue topped $19.5 billion, accounting for 52% of the total for the three months.

Source: http://www.computerworld.com/s/article/9244875/Dark_tower_Mac_Pro_goes_on_sale_Thursday_?source=CTWNLE_nlt_wktop10_2013-12-20, accessed 12-20-2013.

View Map + Bookmark Entry

"Top Ten New Libraries of 2013" December 2013

In December 2013 designboom.com architecture, based in Milan, Italy, issued their illustrated list of the TOP 10 libraries of 2013.

View Map + Bookmark Entry

Amazon.com and UPS Envision Eventually Delivering Packages Via Drones December 1, 2013

In an interview on December 1, 2013 Jeff Bezos, founder and chief executive of Amazon.com, outlined how he envisioned using drones to deliver packages in as little as 30 minutes. Declaring himself an optimist, he predicted that delivery drones could be a reality in as little as five years. 

Bezos intends to be among the first to use such technology once the Federal Aviation Administration finalizes rules for commercial drones later this decade. Current regulations forbid companies from flying unmanned vehicles. 

On December 3, 2013, The Washington Post, owned by Bezos, displayed an excellent information graphic on Bezos's proposed delivery drone at this link.

On December 8, 2013 The New York Times published an article entitled "Disruptions: At Your Door in Minutes, Delivered by Robot." The article also stated that UPS (United Parcel Service) was researching the use of drones for future delivery services:

"But given the explosive growth of e-commerce, some experts say the shipping business is in for big changes. United Parcel Service, which traces its history to 1907, delivers more than four billion packages and documents a year. It operates a fleet of more than 95,000 vehicles and 500 aircraft. The ubiquitous Brown is a $55 billion-plus-a-year business. And, like Amazon, U.P.S. is reportedly looking into drones. So is Google. More and more e-commerce companies are making a point of delivering things quickly the old-fashioned way — with humans.

"Some of the dreamers in the technology industry are dreaming even bigger. It won’t be just drones, they insist. Robots and autonomous vehicles — think Google’s driverless car — could also disrupt the delivery business.

“As cities become more automated, you’re going to start to see on-demand delivery systems that look like small delivery vehicles and can bring you whatever you want to wherever you are,” said Bryant Walker Smith, a fellow at the Center for Internet and Society at Stanford Law School and a member of the Center for Automotive Research at Stanford. “Rather than go to the store to buy some milk, a robot or drone will go to a warehouse and get it for you, then deliver it.”

View Map + Bookmark Entry

Introduction of "Arches": an Open-source, Web-based, Geospatial Information System for Cultural Heritage Inventory and Management December 4, 2013

On December 4, 2013 The Getty Conservation Institute (GCI), World Monuments Fund (WMF), and Farallon Geographics announced the public release of Arches 1.0—an open-source, web-based geospatial information system (GIS) for cultural heritage inventory and management, built specifically to help heritage organizations safeguard cultural heritage sites worldwide.

"By incorporating a broad range of international standards, Arches meets a critical need in terms of gathering, making accessible and preserving key information about cultural heritage. “Knowing what you have is the critical first step in the conservation process. Inventorying heritage assets is a major task and a major investment,” said Bonnie Burnham, President and CEO of World Monuments Fund.

"Cultural heritage inventories are difficult to establish and maintain. Agencies often rely on costly proprietary software that is frequently a mismatch for the needs of the heritage field or they create custom information systems from scratch. Both approaches remain problematic and many national and local authorities around the world are struggling to find resources to address these challenges. The GCI and WMF have responded to this need by partnering to create Arches, which is available at no cost. Arches can present its user interface in any language or in multiple languages, and is configurable to any geographic location or region. It is web-based to provide for the widest access and requires minimal training.

"The system is freely available for download from the Internet so that institutions may install it at any location in the world. “Our hope is that by creating Arches we can help reduce the need for heritage institutions to expend scarce resources on creating systems from the ground up, and also alleviate the need for them to engage in the complexities and constantly changing world of software development,” said Tim Whalen, Director of the Getty Conservation Institute in Los Angeles. In developing Arches, the GCI and WMF consulted international best practices and standards, engaging nearly 20 national, regional, and local government heritage authorities from the US, England, Belgium, France, and the Middle East, as well as information technology experts from the US and Europe. The contributions of English Heritage and the Flanders Heritage Agency have played a particularly important role during the development process. Data provided by English Heritage has been valuable for system development, and it is incorporated as a sample data set within the demonstration version of Arches.

"The careful integration of standards in Arches also will encourage the creation and management of data using best practices. This makes the exchange and comparison of data between Arches and other information systems easier, both within the heritage community and related fields, and it will ultimately support the longevity of important information related to cultural sites. Once the Arches system is installed, institutions implementing it can control the degree of visibility of their data. They may choose to have the system and its data totally open to online access, partially open, accessible with a log-in, not accessible at all, or somewhere in between" (http://artdaily.com/news/66701/Getty-and-World-Monuments-Fund-release-Arches-Software-to-help-safeguard-cultural-heritage-sites-#.UqM80_SIBcY, accessed 12-07-2013).

View Map + Bookmark Entry

Snowden Documents Show the NSA Tracking Cellphone Locations Worldwide December 4, 2013

According to top-secret documents provided by former NSA contractor Edward Snowden, and interviews with U.S. intelligence officials, the National Security Agency, Fort Meade, Maryland, gathers nearly 5 billion records a day on the locations of cellphones around the world, enabling the agency to track the movements of individuals, and map their relationships, in ways that would have been previously unimaginable.

"The records feed a vast database that stores information about the locations of at least hundreds of millions of devices, according to the officials and the documents, and new projects created to analyze that data have provided the intelligence community with what amounts to a mass surveillance tool. 

"The NSA does not target Americans’ location data by design, but the agency acquires a substantial amount of information on the whereabouts of domestic cellphones “incidentally,” a legal term that connotes a foreseeable but not deliberate result.

"One senior collection manager, speaking on the condition of anonymity but with permission from the NSA, said 'we are getting vast volumes' of location data from around the world by tapping into the cables that connect mobile networks globally and that serve U.S. cellphones as well as foreign ones. Additionally, data are often collected from the tens of millions of Americans who travel abroad with their cellphones every year.

"In scale, scope and potential impact on privacy, the efforts to collect and analyze location data may be unsurpassed among the NSA surveillance programs that have been disclosed since June. Analysts can find cellphones anywhere in the world, retrace their movements and expose hidden relationships among the people using them" (http://www.washingtonpost.com/world/national-security/nsa-tracking-cellphone-locations-worldwide-snowden-documents-show/2013/12/04/5492873a-5cf2-11e3-bc56-c6ca94801fac_story.html, accessed 12-06-2013).

The Washington Post provided an excellent information graphic on how NSA tracks people at this link.

View Map + Bookmark Entry

Software Turns a Smartphone into a 3D Scanner December 5, 2013

On December 5, 2013 scientists led by Marc Pollefeys, head of the Computer Vision and Geometry Group in the Institute of Visual Computing at ETH Zurich, announced that they had developed an app that turns an ordinary Android smartphone into a 3D scanner. Pollefeys commented that two years ago software of this type would have been expected to run only on large computers: "That this works on a smartphone would have been unthinkable."

Rather than taking a regular photograph, a user moves the phone and its camera around the object being scanned, and after a few motions, a three dimensional model appears on the screen. As the user keeps moving the phone and its camera, additional images are recorded automatically, extending the wireframe of the virtual object. Because all calculations are programmed into the software, the user gets immediate feedback and can select additional viewpoints to cover missing parts of the rendering. The system utilizes the inertial sensors of the phone, extracting the camera views in real-time based on kinetic motion capture. The resulting 360 degree model can be used for visualization or augmented reality applications, or rapid prototyping with CNC (Computer Numerical Control) machines and 3D printers.
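The core geometric step behind this kind of reconstruction—recovering a 3D point from the same feature seen in two camera views whose poses are known (here, from the phone's inertial sensors)—can be illustrated with standard linear triangulation. This is a generic textbook sketch, not the app's actual algorithm, and all camera parameters below are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
    coordinates of the same feature in each view. Solves A X = 0 for the
    homogeneous point via SVD and returns the de-homogenized 3-vector.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of A
    return X[:3] / X[3]             # de-homogenize

# Two synthetic pinhole cameras a small baseline apart (on the phone,
# these relative poses would come from the inertial sensors).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 0.2 m shift

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.1, -0.05, 2.0])   # a point 2 m in front of the camera
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 3))              # recovers X_true with noiseless data
```

Repeating this for many tracked features across many views, and refining the result, is what incrementally extends the wireframe model the user sees on screen.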

Because the app worked even in low light conditions, such as in museums and churches, it was suggested that a visitor in a museum could scan a sculpture and consider it later at home or at work.

In December 2013 a YouTube video showing how the 3D scanning app worked as well as examples of 3D printed objects made from cell phone scans were available at this link.

View Map + Bookmark Entry

U.S. and British Spies Infiltrated "World of Warcraft" and "Second Life" December 9, 2013

According to classified documents disclosed by former National Security Agency contractor Edward J. Snowden, American and British spies have infiltrated the fantasy worlds of World of Warcraft and Second Life, conducting surveillance and scooping up data in the online games played by millions of people across the globe. 

"Fearing that terrorist or criminal networks could use the games to communicate secretly, move money or plot attacks, the documents show, intelligence operatives have entered terrain populated by digital avatars that include elves, gnomes and supermodels.

"The spies have created make-believe characters to snoop and to try to recruit informers, while also collecting data and contents of communications between players.... Because militants often rely on features common to video games — fake identities, voice and text chats, a way to conduct financial transactions — American and British intelligence agencies worried that they might be operating there, according to the papers.

"Online games might seem innocuous, a top-secret 2008 N.S.A. document warned, but they had the potential to be a “target-rich communication network” allowing intelligence suspects “a way to hide in plain sight.” Virtual games “are an opportunity!” another 2008 N.S.A. document declared" (nytimes.com, 12-09-2013, "Spies' Dragnet Reaches a Playing Field of Elves and Trolls"). 

♦ On December 9, 2013 The New York Times made original documents referred to in the above-mentioned story available at this link

View Map + Bookmark Entry

The National Library of Turkey Sells 147 Tons of Mostly Antiquarian Books to a "Junk Company" December 9, 2013

On December 11, 2013 the following message was forwarded to the exlibris-l newsgroup, to which I subscribe:

Date: December 9, 2013 at 2:03:35 PM EST
To: MELANET <melanet-l@googlegroups.com>
Subject: [MELANET-L] Turkish National Library sold books to a junk company

Dear Mela-netters,

For those of you fluent in Turkish, hereunder an article concerning the latest scandal at the Turkish National Library.


The library sold 147 tons of books to a junk company at a price of 7 to 25 cents a kg. Most of them were antiquarian titles and serials in Armenian, Greek and Karamanlica (Turkish written in the Greek alphabet). The reason is that the TNL does not have staff who can read Armenian, Greek, Hebrew, Judeospanish or Assyrian.

Rifat Bali

View Map + Bookmark Entry

The NSA Uses "Cookies" to Pinpoint Targets for Hacking December 10, 2013

On December 10, 2013 The Washington Post reported that slides from an April 2013 National Security Agency (NSA) presentation disclosed by former NSA contractor Edward Snowden showed that the NSA was secretly

"piggybacking on the tools that enable Internet advertisers to track consumers, using 'cookies' and location data to pinpoint targets for government hacking and to bolster surveillance.

"The agency's internal presentation slides, provided by former NSA contractor Edward Snowden, show that when companies follow consumers on the Internet to better serve them advertising, the technique opens the door for similar tracking by the government. The slides also suggest that the agency is using these tracking techniques to help identify targets for offensive hacking operations.

"For years, privacy advocates have raised concerns about the use of commercial tracking tools to identify and target consumers with advertisements. The online ad industry has said its practices are innocuous and benefit consumers by serving them ads that are more likely to be of interest to them.

"The revelation that the NSA is piggybacking on these commercial technologies could shift that debate, handing privacy advocates a new argument for reining in commercial surveillance."

View Map + Bookmark Entry

eCommerce Accounts for Only About 6% of Commerce in the U.S. December 20, 2013

"And yet online commerce currently accounts for only about 6 percent of all commerce in the United States. We still buy more than 90 percent of everything we purchase offline, often by handing over money or swiping a credit card in exchange for the goods we want. But the proliferation of smartphones and tablets has increasingly led to the use of digital technology to help us make those purchases, and it’s in that convergence that eBay sees its opportunity. As Donahoe puts it: ‘‘We view it actually as and. Not online, not offline: Both.’’ 

"Most people think of eBay as an online auction house, the world’s biggest garage sale, which it has been for most of its life. But since Donahoe took over in 2008, he has slowly moved the company beyond auctions, developing technology partnerships with big retailers like Home Depot, Macy’s, Toys ‘‘R’’ Us and Target and expanding eBay’s online marketplace to include reliable, returnable goods at fixed prices. (Auctions currently represent just 30 percent of the purchases made at eBay.com; the site sells 13,000 cars a week through its mobile app alone, many at fixed prices.)

"Under Donahoe, eBay has made 34 acquisitions over the last five years, most of them to provide the company and its retail partners with enhanced technology. EBay can help with the back end of websites, create interactive storefronts in real-world locations, streamline the electronic-payment process or help monitor inventory in real time. (Outsourcing some of the digital strategy and technological operations to eBay frees up companies to focus on what they presumably do best: Make and market their own products.) In select cities, eBay has also recently introduced eBay Now, an app that allows you to order goods from participating local vendors and have them delivered to your door in about an hour for a $5 fee. The company is betting its future on the idea that its interactive technology can turn shopping into a kind of entertainment, or at least make commerce something more than simply working through price-plus-shipping calculations. If eBay can get enough people into Dick’s Sporting Goods to try out a new set of golf clubs and then get them to buy those clubs in the store, instead of from Amazon, there’s a business model there. 

"A key element of eBay’s vision of the future is the digital wallet. On a basic level, having a ‘‘digital wallet’’ means paying with your phone, but it’s about a lot more than that; it’s as much a concept as a product. EBay bought PayPal in 2002, after PayPal established itself as a safe way to transfer money between people who didn’t know each other (thus facilitating eBay purchases). For the last several years, eBay has regarded digital payments through mobile devices as having the potential to change everything — to become, as David Marcus, PayPal’s president, puts it, 'Money 3.0' " (http://www.nytimes.com/2013/12/22/magazine/ebays-strategy-for-taking-on-amazon.html?hp&_r=0, accessed 12-20-2013). 

View Map + Bookmark Entry

"TOP 10 3D Printing Stories of 2013" December 20, 2013

With the advent of comparatively inexpensive 3D printers intended for the consumer market, by the end of 2013 3D printing had become a widespread consumer and industrial phenomenon, applied to untold numbers of new products and art forms. On December 20, 2013 Designboom.com, based in Milan, Italy, published their illustrated list of the TOP 10 3D printing stories of 2013.

View Map + Bookmark Entry

"As New Services Track Habits, the E-Books are Reading You" December 24, 2013

On December 24, 2013 The New York Times published an article by David Streitfeld entitled, "As New Services Track Habits, the E-Books Are Reading You," from which I quote portions:

"Before the Internet, books were written — and published — blindly, hopefully. Sometimes they sold, usually they did not, but no one had a clue what readers did when they opened them up. Did they skip or skim? Slow down or speed up when the end was in sight? Linger over the sex scenes?

"A wave of start-ups is using technology to answer these questions — and help writers give readers more of what they want. The companies get reading data from subscribers who, for a flat monthly fee, buy access to an array of titles, which they can read on a variety of devices. The idea is to do for books what Netflix did for movies and Spotify for music." 

"Last week, Smashwords made a deal to put 225,000 books on Scribd, a digital library here that unveiled a reading subscription service in October. Many of Smashwords’ books are already on Oyster, a New York-based subscription start-up that also began in the fall.

"The move to exploit reading data is one aspect of how consumer analytics is making its way into every corner of the culture. Amazon and Barnes & Noble already collect vast amounts of information from their e-readers but keep it proprietary. Now the start-ups — which also include Entitle, a North Carolina-based company — are hoping to profit by telling all.

“ 'We’re going to be pretty open about sharing this data so people can use it to publish better books,' said Trip Adler, Scribd’s chief executive.

"Quinn Loftis, a writer of young adult paranormal romances who lives in western Arkansas, interacts extensively with her fans on Facebook, Pinterest, Twitter, Goodreads, YouTube, Flickr and her own website. These efforts at community, most of which did not exist a decade ago, have already given the 33-year-old a six-figure annual income. But having actual data about how her books are being read would take her market research to the ultimate level.

“ 'What writer would pass up the opportunity to peer into the reader’s mind?' she asked.

"Scribd is just beginning to analyze the data from its subscribers. Some general insights: The longer a mystery novel is, the more likely readers are to jump to the end to see who done it. People are more likely to finish biographies than business titles, but a chapter of a yoga book is all they need. They speed through romances faster than religious titles, and erotica fastest of all.

"At Oyster, a top book is 'What Women Want,' promoted as a work that 'brings you inside a woman’s head so you can learn how to blow her mind.' Everyone who starts it finishes it. On the other hand, Arthur M. Schlesinger Jr.’s 'The Cycles of American History' blows no minds: fewer than 1 percent of the readers who start it get to the end.

"Oyster data shows that readers are 25 percent more likely to finish books that are broken up into shorter chapters. That is an inevitable consequence of people reading in short sessions during the day on an iPhone."


"Here is how Scribd and Oyster work: Readers pay about $10 a month for a library of about 100,000 books from traditional presses. They can read as many books as they want.

“ 'We love big readers,' said Eric Stromberg, Oyster’s chief executive. But Oyster, whose management includes two ex-Google engineers, cannot afford too many of them.... Only 2 percent of Scribd’s subscribers read more than 10 books a month, he said.


"These start-ups are being forced to define something that only academic theoreticians and high school English teachers used to wonder about: How much reading does it take to read a book? Because that is when the publisher, and the writer, get paid.

"The companies declined to outline their business model, but publishers said Scribd and Oyster offered slightly different deals. On Oyster, once a person reads more than 10 percent of the book, it is officially considered 'read.' Oyster then has to pay the publisher a standard wholesale fee. With Scribd, it is more complicated. If the reader reads more than 10 percent but less than 50 percent, it counts for a tenth of a sale. Above 50 percent, it is a full sale."
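The payment rules the publishers describe reduce to a simple threshold function. The sketch below encodes only what the article states; the behavior at exactly the 10% and 50% boundaries is not specified in the article, so the boundary handling here is a guess.

```python
def scribd_payout_fraction(fraction_read):
    """Fraction of a full sale owed under the Scribd terms described above:
    under 10% read pays nothing, between 10% and 50% counts as a tenth of
    a sale, and above 50% counts as a full sale. (Whether the thresholds
    are inclusive is an assumption; the article does not say.)"""
    if fraction_read < 0.10:
        return 0.0
    if fraction_read < 0.50:
        return 0.1
    return 1.0

def oyster_payout_fraction(fraction_read):
    """Oyster's simpler rule: past 10% the book is 'read' and the
    publisher is owed a standard wholesale fee (a full sale)."""
    return 1.0 if fraction_read >= 0.10 else 0.0

for f in (0.05, 0.25, 0.80):
    print(f"{f:.0%} read -> Scribd {scribd_payout_fraction(f)}, "
          f"Oyster {oyster_payout_fraction(f)}")
```

The comparison makes the economic difference concrete: a skimmed book that a reader abandons at 25% costs Oyster a full wholesale fee but costs Scribd only a tenth of one.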

View Map + Bookmark Entry

"The Internet Archive Console Living Room" Hosts Video Games from the '70s & '80s December 27, 2013

In December 2013 the Internet Archive opened "The Internet Archive Console Living Room," making available a very wide selection of computer video games from the 1970s and 1980s that were originally made for the Atari 2600, the Atari 7800 ProSystem, the ColecoVision, the Magnavox Odyssey (known as the Philips Videopac G7000 in Europe), and the Astrocade. When the "Living Room" opened, the games did not feature sound, though that was expected to be made available "shortly." The free service allowed the games to be downloaded or run through an in-browser emulation of the programs.


View Map + Bookmark Entry

The Growing Economic and Social Impact of Artificial Intelligence December 29, 2013

On December 29, 2013 The New York Times published an article by Michael Fitzpatrick on Japan's Todai Robot Project entitled "Computers Jump to the Head of the Class." This was the first article that I ever read that spelled out the potential dystopian impact of advances in artificial intelligence on traditional employment and also on education. Because the article was relatively brief I decided to quote it in full:

"TOKYO — If a computer could ace the entrance exam for a top university, what would that mean for mere mortals with average intellects? This is a question that has bothered Noriko Arai, a mathematics professor, ever since the notion entered her head three years ago.

“I wanted to get a clear image of how many of our intellectual activities will be replaced by machines. That is why I started the project: Can a Computer Enter Tokyo University? — the Todai Robot Project,” she said in a recent interview.

Tokyo University, known as Todai, is Japan’s best. Its exacting entry test requires years of cramming to pass and can defeat even the most erudite. Most current computers, trained in data crunching, fail to understand its natural language tasks altogether.

Ms. Arai has set researchers at Japan’s National Institute of Informatics, where she works, the task of developing a machine that can jump the lofty Todai bar by 2021.

If they succeed, she said, such a machine should be capable, with appropriate programming, of doing many — perhaps most — jobs now done by university graduates.

With the development of artificial intelligence, computers are starting to crack human skills like information summarization and language processing.

Given the exponential growth of computing power and advances in artificial intelligence, or A.I., programs, the Todai robot’s task, though daunting, is feasible, Ms. Arai says. So far her protégé, a desktop computer named Todai-kun, is excelling in math and history but needs more effort in reading comprehension.

There is a significant danger, Ms. Arai says, that the widespread adoption of artificial intelligence, if not well managed, could lead to a radical restructuring of economic activity and the job market, outpacing the ability of social and education systems to adjust.

Intelligent machines could be used to replace expensive human resources, potentially undermining the economic value of much vocational education, Ms. Arai said.

“Educational investment will not be attractive to those without unique skills,” she said. Graduates, she noted, need to earn a return on their investment in training: “But instead they will lose jobs, replaced by information simulation. They will stay uneducated.”

In such a scenario, high-salary jobs would remain for those equipped with problem-solving skills, she predicted. But many common tasks now done by college graduates might vanish.

“We do not know in which areas human beings outperform machines. That means we cannot prepare for the changes,” she said. “Even during the industrial revolution change was a lot slower.”

Over the next 10 to 20 years, “10 percent to 20 percent pushed out of work by A.I. will be a catastrophe,” she says. “I can’t begin to think what 50 percent would mean — way beyond a catastrophe and such numbers can’t be ruled out if A.I. performs well in the future.”

She is not alone in such an assessment. A recent study published by the Program on the Impacts of Future Technology, at Oxford University’s Oxford Martin School, predicted that nearly half of all jobs in the United States could be replaced by computers over the next two decades.

Some researchers disagree. Kazumasa Oguro, professor of economics at Hosei University in Tokyo, argues that smart machines should increase employment. “Most economists believe in the principle of comparative advantage,” he said. “Smart machines would help create 20 percent new white-collar jobs because they expand the economy. That’s comparative advantage.”

Others are less sanguine. Noriyuki Yanagawa, professor of economics at Tokyo University, says that Japan, with its large service sector, is particularly vulnerable.

“A.I. will change the labor demand drastically and quickly,” he said. “For many workers, adjusting to the drastic change will be extremely difficult.”

Smart machines will give companies “the opportunity to automate many tasks, redesign jobs, and do things never before possible even with the best human work forces,” according to a report this year by the business consulting firm McKinsey.

Advances in speech recognition, translation and pattern recognition threaten employment in the service sectors — call centers, marketing and sales — precisely the sectors that provide most jobs in developed economies. As if to confirm this shift from manpower to silicon power, corporate investment in the United States in equipment and software has never been higher, according to Andrew McAfee, the co-author of “Race Against the Machine” — a cautionary tale for the digitized economy.

Yet according to the technology market research firm Gartner, top business executives worldwide have not grasped the speed of digital change or its potential impact on the workplace. Gartner’s 2013 chief executive survey, published in April, found that 60 percent of executives surveyed dismissed as “futurist fantasy” the possibility that smart machines could displace many white-collar employees within 15 years.

“Most business and thought leaders underestimate the potential of smart machines to take over millions of middle-class jobs in the coming decades,” Kenneth Brant, research director at Gartner, told a conference in October: “Job destruction will happen at a faster pace, with machine-driven job elimination overwhelming the market’s ability to create valuable new ones.”

Optimists say this could lead to the ultimate elimination of work — an “Athens without the slaves” — and a possible boom for less vocational-style education. Mr. Brant’s hope is that such disruption might lead to a system where individuals are paid a citizen stipend and are freed for education and self-realization.

“This optimistic scenario I call Homo Ludens, or ‘Man, the Player,’ because maybe we will not be the smartest thing on the planet after all,” he said. “Maybe our destiny is to create the smartest thing on the planet and use it to follow a course of self-actualization.”

View Map + Bookmark Entry

NYTimes.com Introduces its "Watching" Feature, a Roll of Developing Stories from Wire Services & Other Newspapers on its Home Page 2014

In 2014 The New York Times introduced its Watching feature, a continuous roll of developing stories from wire services and other newspapers, as well as its own newsroom, on the home page of its website.

This was yet another example of the merging of the reporting styles of different media on the Internet. Before features like "Watching," continuous coverage of developing stories as events unfolded was pioneered in the 24-hour news cycle by television networks such as CNN.

View Map + Bookmark Entry

Animating Classical Representational Paintings January 2014

In January 2014 Italian experimental animator and director Rino Stefano Tagliafierro of Milan issued B E A U T Y, a nearly 10-minute video animating, with extraordinary realism and sensitivity, numerous classical representational paintings, especially a series by Bouguereau and Caravaggio, but also a few by Rubens, Rembrandt and other artists. The dramatic music and sound design was by Enrico Ascoli; the "historiographer" was Giuliano Corti.

View Map + Bookmark Entry

Destruction of Canadian Environmental Libraries January 2014

On January 4, 2014 Boing Boing blogger Cory Doctorow reported:

"Canadian libricide: Tories torch and dump centuries of priceless, irreplaceable environmental archives

"Back in 2012, when Canada's Harper government announced that it would close down national archive sites around the country, they promised that anything that was discarded or sold would be digitized first. But only an insignificant fraction of the archives got scanned, and much of it was simply sent to landfill or burned.

"Unsurprisingly, given the Canadian Conservatives' war on the environment, the worst-faring archives were those that related to climate research. The legendary environmental research resources of the St. Andrews Biological Station in St. Andrews, New Brunswick are gone. The Freshwater Institute library in Winnipeg and the Northwest Atlantic Fisheries Centre in St. John's, Newfoundland: gone. Both collections were world-class.

"An irreplaceable, 50-volume collection of logs from HMS Challenger's 19th century expedition went to the landfill, taking with them the crucial observations of marine life, fish stocks and fisheries of the age. Update: a copy of these logs survives overseas.

"The destruction of these publicly owned collections was undertaken in haste. No records were kept of what was thrown away, what was sold, and what was simply lost. Some of the books were burned.

For further information see "What's Driving Chaotic Dismantling of Canada's Science Libraries?" by Andrew Nikiforuk, The Tyee, December 23, 2013.

View Map + Bookmark Entry

A Neural Network that Reads Millions of Street Numbers January 1, 2014

To read millions of street numbers on buildings photographed for Google Street View, Google built a neural network that developed reading accuracy comparable to that of humans assigned to the task. The company uses the images to read house numbers and match them to their geolocation, storing the geolocation of each building in its database. Having street numbers matched to physical locations on a map is always useful, but it is particularly useful in places where street numbers are otherwise unavailable, or in places such as Japan and South Korea, where buildings are rarely numbered sequentially along a street but in other ways, such as the order in which they were constructed, a system that makes many buildings extremely hard to find, even for locals.

"Recognizing arbitrary multi-character text in unconstrained natural photographs is a hard problem. In this paper, we address an equally hard sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from Street View imagery. Traditional approaches to solve this problem typically separate out the localization, segmentation, and recognition steps. In this paper we propose a unified approach that integrates these three steps via the use of a deep convolutional neural network that operates directly on the image pixels. We employ the DistBelief implementation of deep neural networks in order to train large, distributed neural networks on high quality images. We find that the performance of this approach increases with the depth of the convolutional network, with the best performance occurring in the deepest architecture we trained, with eleven hidden layers. We evaluate this approach on the publicly available SVHN dataset and achieve over 96% accuracy in recognizing complete street numbers. We show that on a per-digit recognition task, we improve upon the state-of-the-art and achieve 97.84% accuracy. We also evaluate this approach on an even more challenging dataset generated from Street View imagery containing several tens of millions of street number annotations and achieve over 90% accuracy. Our evaluations further indicate that at specific operating thresholds, the performance of the proposed system is comparable to that of human operators. To date, our system has helped us extract close to 100 million physical street numbers from Street View imagery worldwide."

Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, Vinay Shet, "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks," arXiv:1312.6082v2.
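The unified architecture the abstract describes, a single deep network that predicts the digit sequence directly rather than separately localizing, segmenting, and recognizing, can be sketched in miniature: a shared feature extractor feeds one softmax head for the sequence length and one head per digit position. Everything below is an illustrative stand-in, not the authors' code: random weights, a flat linear layer in place of the paper's eleven-layer convnet, and invented names such as `predict_number`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 32x32 crop, a 64-dim shared feature vector,
# at most 5 digits, 10 classes per digit position.
FEAT, MAX_DIGITS, CLASSES = 64, 5, 10

W_feat = rng.normal(size=(32 * 32, FEAT)) * 0.01      # stand-in for the conv stack
W_len = rng.normal(size=(FEAT, MAX_DIGITS + 1)) * 0.01  # length head: 0..MAX_DIGITS
W_digits = [rng.normal(size=(FEAT, CLASSES)) * 0.01 for _ in range(MAX_DIGITS)]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_number(image):
    """Jointly predict sequence length and digits, then decode greedily:
    take the most probable length n, then the argmax of each of the
    first n digit heads."""
    h = np.maximum(image.reshape(-1) @ W_feat, 0.0)   # shared features (ReLU)
    n = int(np.argmax(softmax(h @ W_len)))            # most likely length
    digits = [int(np.argmax(softmax(h @ W))) for W in W_digits[:n]]
    return "".join(map(str, digits))

number = predict_number(rng.normal(size=(32, 32)))
```

With random weights the output is of course meaningless; the point is the decoding scheme, in which the network never has to segment individual digits before reading them.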

View Map + Bookmark Entry

The Burning of Two-Thirds of an Historic Antiquarian Bookshop in Tripoli, Lebanon January 4, 2014

On January 4, 2014 it was reported that most of the collection of an historic library in Tripoli, Lebanon had been burned:

"Two-thirds of a historic collection of 80,000 books have gone up in smoke after a library was torched in the Lebanese city of Tripoli amid sectarian tensions. The blaze was started after a pamphlet insulting Islam was reportedly found inside a book.

"Firefighters struggled to subdue the flames as the decades-old Al-Saeh library went up in smoke on Friday in the Serail neighborhood of Tripoli. Despite firefighters’ best efforts, little of the trove of historic books and manuscripts was recovered from the wreckage.

"A demonstration had been planned in Tripoli after the pamphlet was found but was reportedly called off after the library’s Greek Orthodox owner spoke with Muslim leaders. Lebanese news outlet Naharnet also reported that one of the library workers was shot and wounded Thursday night.

“The library owner, Father Ebrahim Surouj, met with Islamic leaders in Tripoli. It became clear the priest had nothing to do with the pamphlet, and a demonstration that had been planned in protest over the incident was called off,” the source said.

"However, Ashraf Rifi, former head of the Internal Security Forces, told AP the attack had nothing to do with a pamphlet and was, in fact, triggered by speculation that Father Surouj had written a study on the internet that insulted Islam.

" 'This criminal act poses several questions [about] the party behind it that aims at damaging coexistence in the city and ruining its reputation,' Rifi told AFP. The Lebanese police have launched an investigation into the incident.

"Sectarian tensions have been rising in Lebanon recently as a result of the ongoing, two-year conflict in neighboring Syria. Until recently, the violence usually spared Christian minority groups. In December the northerly city of Tripoli saw a spate of attacks on the Alawite community in the latest spillover from Syria’s civil war" (extracts from http://rt.com/news/library-fire-lebanon-violence-176/, accessed 01-04-2014).

On January 5, 2014 more images of the library, the relevant documents, and interviews were available online.

Then, on January 16, 2014, Elias Muhanna, assistant professor of comparative literature at Brown University, posted on newyorker.com a piece entitled "Letter from Lebanon: A Bookshop Burns," indicating that the so-called library was actually an antiquarian bookshop—a cultural landmark in the community.

View Map + Bookmark Entry

Trends in Reading, Book Reviewing, Publishing and Writing as of the Beginning of 2014 January 4, 2014

On January 4, 2014 The New York Times published an opinion piece entitled "The Loneliness of the Long-Distance Reader," by Colin Robinson, a veteran of traditional publishing who in 2009 founded a "digital upstart" company to adapt to the radical changes brought about by digital books and the Internet. This I quote in full:

“TO read a novel is a difficult and complex art,” Virginia Woolf wrote in a 1925 essay, “How to Read a Book.” Today, with our powers of concentration atrophied by the staccato communication of the Internet and attention easily diverted to addictive entertainment on our phones and tablets, book-length reading is harder still.

"It’s not just more difficult to find the time and focus that a book demands. Longstanding allies of the reader, professionals who have traditionally provided guidance for those picking up a book, are disappearing fast. The broad, inclusive conversation around interesting titles that such experts helped facilitate is likewise dissipating. Reading, always a solitary affair, is increasingly a lonely one.

"A range of related factors have brought this to a head. Start with the publishing companies: Overall book sales have been anemic in recent years, declining 6 percent in the first half of 2013 alone. But the profits of publishers have remained largely intact; in the same period only one of what were then still the “big six” trade houses reported a decline on its bottom line. This is partly because of the higher margins on e-books. But it has also been achieved by publishers cutting costs, especially for mid-list titles.

"The “mid-list” in trade publishing parlance is a bit like the middle class in American politics: Anything below it is rarely mentioned in polite company. It comprises pretty much all new titles that are not potential blockbusters. But it’s the space where interesting things happen in the book world, where the obscure or the offbeat can spring to prominence, where new writers can make their mark.

"Budgets have been trimmed in various ways: Author advances, except for the biggest names, have slumped sharply since the 2008 financial crash, declining by more than half, according to one recent survey. It’s hard to imagine that the quality of manuscripts from writers who have been forced either to eat less or write faster isn’t deteriorating. Meanwhile, spending on editing and promotion has also been pared away.

"Things don’t get better after the book leaves the publisher. Price cutting, led primarily by Amazon, has reduced many brick-and-mortar bookstores to rubble, depriving readers of direct interaction with booksellers. Despite some recent good news, the number of independents has been halved in the last two decades, and the chain stores that surviveincreasingly employ part-time, unskilled staff.

"The decline in libraries weakens another vital prop for readers. Librarians, described by the novelist Richard Powers as “gas attendant[s] of the mind,” saw a national decrease in their numbers of nearly 100,000 over the two decades to 2009. Two-thirds of public librariesreported flat or decreasing budgets in 2012.

"Then there is today’s increasingly rare bird, the professional book reviewer. Stand-alone book sections in major newspapers such as The Los Angeles Times and The Washington Post have disappeared. The New York Times Book Review is now just a third of the length it was in its 1970s heyday. Online reviews like The Los Angeles Review of Books and The New Inquiry are striving to fill the gap. But their rates of pay, predictably, do not rival those at the more established publications they are replacing. Cyril Connolly caustically described the book reviewer as having “a whole-time job with a half-time salary,” a job “in which the best in him is generally expended on the mediocre in others.” Today, it’s more of a part-time job with no salary.

"This variety of channels for the expert appraisal of books has been replaced with recommendations thrown up by online retailers’ computers. But as with so much of the Internet, the nuance and enthusiasm of human encounters is poorly replicated by an algorithm. For more personal interactions, many have turned to social reading sites such as Goodreads or LibraryThing.

"The growth of these sites has been phenomenal. Shortly after its purchase by Amazon in the spring of last year, Goodreads announced it had 20 million users. Whether this is an amelioration or a reflection of an increasingly atomized culture is a question that can be filed in the same drawer as Facebook friending or dating on Match.com. Certainly the range of collective knowledge in pools of this size is incontestable. But it derives from self-selecting volunteers whose authority is hard to gauge. And though the overall network is vast, recommendations are generally exchanged within tight circles of friends. This results in another typical Internet characteristic: the “mirroring” of existing tastes at the expense of discovering anything new.

"THERE are many who will not mourn the displacement of literary culture’s traditional elite, dominated as it was by white, middle-aged men of comfortable means and conservative taste. Jeff Bezos, the C.E.O. of Amazon, aimed to exploit such disillusion with the old ways when announcing the launch of Kindle Direct. The self-publishing e-book program would, he claimed, produce “a more diverse book culture” with “no expert gatekeepers saying ‘sorry, that will never work.’ ” But to express discomfort at the attrition of expert opinion is not to defend the previous order’s prerogatives. Nor is it elitist to suggest that making the values and personnel of such professional hierarchies more representative is preferable to dispensing with them

"On the desolate beach that is the lot of the contemporary book reader, the footprints of one companion can still be found. They belong to the writer, who needs the reader not just to pay her or his wages but also to give meaning to their words. As John Cheever put it: “I can’t write without a reader. It’s precisely like a kiss — you can’t do it alone.”

"The troubling thought occurs, however, that this last remaining cohabitant may also be about to depart the island. With falling advances, writing is evermore dominated by people who don’t need it to earn a living: Tenured academics and celebrities spring to mind. For these groups, burnishing a résumé or marketing a brand is often as important as satisfying the reader.

"And then there are the hobbyists, those for whom writing is primarily an act of self-expression. This past November, National Novel Writing Month (Nanowrimo) encouraged more than 300,000 participants to produce a novel in 30 days. It would be churlish to gainsay the right of the legions taking up “noveling,” as Nanowrimo describes it, to exercise their creative selves. But such endeavors are not much helping readers. Indeed, to the extent that they expand the mind-boggling proliferation of new titles being published (more than 300,000 in 2012), they are adding to the problem.

"Faced with a dizzying array of choices and receiving little by way of expert help in making selections, book buyers today are deciding to play it safe, opting to join either the ever-larger audiences for blockbusters or the minuscule readerships of a vast range of specialist titles. In this bifurcation, the mid-list, publishing’s experimental laboratory, is being abandoned."

View Map + Bookmark Entry

My First Purchase of a Hardcover Book for One Cent on the Internet January 21, 2014

In January 2014 I finally succumbed to temptation, and curiosity, and purchased a hardcover book for one cent plus $3.99 postage from a bookseller in Toledo, Ohio through Amazon.com. Prior to this I had assumed that anything sold for such a low price had to be junk. However, the bookseller said the copy was in good condition and I decided to go for it.

The book arrived by media mail on January 21. So what did I get for this extremely low price? My order was for Cyberspace: First Steps, edited by Michael Benedikt and published by MIT Press in 1991. What I received was a hardcover book in an intact dust jacket. It shows signs of wear, but is clean internally and I would call it a good copy. 

When I placed my order for the one cent copy I thought there was a certain irony in a bargain purchase of a book on the theoretical and conceptual issues involved in the design, use, and effect of virtual environments, since before the Internet no bookseller would have sold a book of this kind for one cent. My theory is that after one online bookseller listed the book for one cent, others joined in to meet the competition. There were several copies of this book listed for a penny; others were as expensive as $20 or more. My guess is that there is not much difference between my one cent copy and some of the more expensive ones. It is a question of guessing what the right price is. Just as I previously hesitated to order such a bargain, not all buyers will trust the quality of a one cent purchase.

The business model for selling books for a penny on the Internet presumably means making a profit on the postage. The assumption is that these dealers get their books for nothing and may earn a dollar or two on the shipping. Amazon, of course, takes a small commission. The winners here are the consumer and Amazon, and maybe the postal service. Selling books for a penny does not seem like the best business plan to me.

Will I buy more books for a penny after this experience? Most certainly.

View Map + Bookmark Entry

The Ongoing Debate on the Future of Libraries at the University of California January 21, 2014

On January 21, 2014 California Magazine, published by the University of California, ran an article entitled "Schism in the Stacks: Is the University Library As We Know It Destined for Extinction?" Excerpts appear below:

"The turmoil is over the biggest change in information systems since the invention of movable type some 500 years ago. On one side of the battle line are the steely advocates for embracing what they firmly believe is the future of Internet-based information centers. They unabashedly acknowledge that means retreating from the shelves of books that have been libraries for millennia. On the other side are the passionately aggrieved fighters for preserving the ancient legacy of books, which they assert have lost none of their power and purpose—and likely never will."

"The University of California library started in 1868 with 1,000 volumes. Today UC Berkeley’s libraries alone have more than than 11 million. (The University of California library system now has some 36 million volumes, exceeding the number in the Library of Congress.) This substantial investment is not just financial—it is understandably emotional. Consider a statement in the opening pages of the Report of the Commission on the Future of the UC Berkeley Library from last October: “There is simply no great University without a great library.”

"For as long as anyone can remember, such a declaration had gone unchallenged. But then came along people such as Michael Eisen, a sparky professor of molecular and cell biology.

“ 'Fifteen years from now you won’t need a library,' he says, his office cluttered with a 52-inch flat screen monitor, a collection of beer cans and a bike. The 46-year-old says 'I’m not sure we’ll even have one' when he’s 60. And he says he won’t miss it.

"Eisen admits that in his graduate school days he enjoyed the bound science journals in the library. But now? 'I haven’t been to the library to get a research article in 15 years.' Most of what he wants he grabs online.

"As with many academics in the sciences, he is particularly torqued by the price of getting journal research. For example, the Brain Research journal has a subscription price of $19,952 a year. The Journal of Comparative Neurology is $35,489. His outrage at the cost of accessing such information led him to form the nonprofit Public Library of Science (PLOS), which has become the prime publisher of 'open access journals'—a cause he champions in the Winter 2013 issue of California magazine.

Those who advocate saving the central stacks, in his view, are guilty of the 'fetishism of print.'

Bearing in mind that the central image in the university’s seal is a book, one can stroll past the Campanile from Eisen’s modern Stanley Hall office, and in minutes be at the magnificent John Galen Howard edifice in the very heart of campus.“The University Library,” it declaims in chiseled Sierra granite above the portal. This is Doe Memorial Library, opened in 1911.

Reaching Margaretta Lovell’s office atop Doe requires taking the stairs past the part where they are well maintained, or taking the elevator that could be from the service entrance of a prewar garment factory. Either way, you’ll know you’re not at Stanford. The professor of art history is in an office that has fixtures from 1923, book cases from goodness knows when, and a view to the west that never gets old​.

“ 'I want to defend what is still useful,' she says.

In an internal memorandum she shared, Lovell noted 'no other resource on campus is so central to the daily conduct of teaching, learning and research by every member of the academic community; when the library falters, the institution as a whole falters.' She mourns the loss of funding in the library, especially the 25 percent reductions in staff since 2007. 'Librarians help scholars with knowledge they didn’t even know they needed,' she says. This has special application for Berkeley’s most important clientele: undergraduates, who, Lovell notes, really 'don’t know what they don’t know.'

Many others with uniquely nuanced positions surround these dueling camps. And because it is Berkeley, there is a mountain of scholarship supporting each.

One celebrity academic whose name always comes up is Robert Darnton. Now the Harvard librarian, he tries to chart a middle course, writing scathing articles in the New York Review of Books about Google’s ongoing attempt to digitize books. But he also presses the point that the future is digital and everyone should get on board. 'What could be more pragmatic than the designing of a system to link up millions of megabytes and deliver them to readers in the form of easily accessible texts?' Darnton asks.

Another voice in the fray, writer Nicholson Baker, complains that libraries have systematically trashed America’s heritage by microfilming newspapers and sending the hard copies to recycling centers and garbage dumps. And he’s not just whining. Baker created the American Newspaper Repository, now at Duke University. He does not contend that libraries serve people. He says they serve history. And he has taken particular aim at the San Francisco Public Library for culling 200,000 books.

More relevant to the Berkeley ecosystem is Tom Leonard, a youthful 69 year old who used to teach the history of journalism and now is the university librarian.

Amid the turmoil about the future of the library, Leonard can be found in the middle. It does not feel like a demilitarized zone. He knows there are fears that the future will pass by Berkeley, and there are fears that the valuable past and present will be destroyed.

From his expansive ground floor office next to the Bancroft Library—which incidentally gets among the fewest visitors but, unlike some of the other lesser-used subject specialty libraries on campus, is not on anyone’s list for potential elimination—he gets a fine view of the Campanile. But he is quick to take an interested visitor on a march through the stacks.

“ 'There are more libraries than McDonalds in the United States,' he says. 'And I don’t mean libraries that are part of elementary schools. I mean libraries that have their own roofs.'

As an aside, he notes that one can’t get a job at a McDonalds without an online application. And one presumes many people who would apply for such jobs don’t have their own Chromebooks or iPads. Where would they go? To the library, of course."

“ 'There’s not the same demand for books that there was a generation back,' he says. But he also notes that there are up to 800,000 people a year visiting Doe and Moffitt—more than all Cal sports events."

View Map + Bookmark Entry

Filed under: Book History, Libraries

Car Bombing of Police Headquarters Damages Building Housing Egyptian National Library and Museum of Islamic Art January 24, 2014

On February 5, 2014 MadaMasr.com published an article by Elena Chardakliyska entitled "The past and future of the bomb-damaged manuscript museum," from which sections are quoted below, with the addition of a few explanatory links:

"Early on January 24, 2014, a bomb exploded in front of the Cairo Security Directorate, just across the street from a neo-Mamluk building which many know as the Museum of Islamic Art. The first images that came out from behind the makeshift security cordon showed almost complete destruction of the exhibition halls. What many people didn't realize then, and maybe still haven't, is that those pictures were not of the Museum of Islamic Art, but rather of the institution housed on the second floor of the same building Dar al-Kutub Bab al-Khalq, which showcased their world-famous collections of manuscripts, documents, coins, scientific instruments, and other precious objects.

"By the end of January 24, it had become clear that the Museum of Islamic Art had suffered no luckier fate and that its collection and exhibition halls had sustained considerable damage. Still, even 10 days after the bombing, few realize that two different institutions were hurt in the bombing and even fewer bother to distinguish between them. The better known of the two, the Museum of Islamic Art, has been used interchangeably to mean the museum itself, the building, and the two institutions housed in it. While this article does not downplay what has happened to the Museum of Islamic Art and the importance of its collection, it will focus on Dar al-Kutub Bab al-Khalq and the collection it proudly exhibited until 10 days ago. Understanding what was there and what was lost can help in repairing and reconstructing this special building that houses two of Egypt’s most important cultural institutions.

"What would grow to be the National Library of Egypt and Archives of Egypt, known today as Dar al-Kutub wa Watha’iq al Qawmiyya or simply Dar al-Kutub, was founded in 1870 as the Kutubkhana Khediwiyya, or "Khedival Library," by a decree of Khedive Ismail. It officially opened its doors to the public on September 24, 1870, from the first floor of the Mustafa Fadil Pasha Palace in the Darb al-Gamamiz area in Cairo.

"In the decades that followed, the library gathered private collections of manuscripts and rare books, as well as Quran manuscripts and religious texts from mosques and religious educational institutions. It was a bold project to collect the rich written heritage of Egypt in one place for study and research. One of the driving forces behind it was the reformer Ali Pasha Mubarak (1823-1893), minister of Public Works and Education during the second half of the 19th century.  

"On January 1, 1899 Khedive Abbas Hilmi II laid the cornerstone for the building that was damaged on January 24, 2014, known today as Bab al Khalq, with the idea to give a proper home to two of the most progressive institutions at the time. From its inception, the neo-Mamluk building was to house the newly established Museum of Arab Antiquities on the ground floor (now the Museum of Islamic Art) and the Khedival Library on the two upper floors. This arrangement persists to today, even though the Museum of Islamic Art and Dar al-Kutub Bab al-Khalq now find themselves under the Ministries of Antiquities and Culture respectively."

"Just days before the blast, on January 19, 2014, a celebration was held at the museum commemorating the inclusion of part of its collection of splendid, monumental Mamluk Quran manuscripts in UNESCO's Memory of the World Register, whose aim is to gather and raise awareness about collections of global importance. Before the Mamluk Quran manuscripts, a selection of the museum’s documents and decrees was added to the register in 2005, followed by the collection of Persian illustrated manuscripts in 2007. There are not many libraries in the world that have three different collections prominently featured on Memory of the World.

"Fortunately for Dar al-Kutub Bab al-Khalq and for all of us, all the manuscripts have now been accounted for after the bombing and none of them have suffered irreparable damage. Most of them were actually fine, protected by the thick walls of the building and the reinforced glass of the exhibition cases. They were all evacuated to Dar al-Kutub’s Corniche premises to be reunited with the rest of its collection until reconstruction plans take shape. In the meantime, we're left with the visceral realization that these manuscripts could have been lost or destroyed in a second, manuscripts that, despite being on display in the center of Cairo and free to look at, not enough people knew about."

View Map + Bookmark Entry

"Sensory Fiction": A Kind of Virtual Reality E-Book Reading Experience January 29, 2014

On January 29, 2014 Theguardian.com reported that scientists Felix Heibeck, Alexis Hope and Julie Legault at MIT's Media Lab created a "wearable book" that used temperature and lighting to mimic the experiences of the book's protagonist, in a kind of augmented, partially virtual reading experience. The e-book senses the page the reader is on, and changes ambient lighting and vibrations to "match the mood." A series of straps form a vest which contains a "heartbeat and shiver simulator," a body compression system, temperature controls and sound. 

The researchers used as prototype James Tiptree Jr's Hugo award-winning novella The Girl Who Was Plugged In, in which the protagonist P Burke – who is deformed by pituitary dystrophy and herself experiences life through an avatar – feels "both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark damp cellar."

" 'Changes in the protagonist's emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations,' say the academics.

 " 'Sensory fiction is about new ways of experiencing and creating stories,' they write. 'Traditionally, fiction creates and induces emotions and empathy through words and images. By using a combination of networked sensors and actuators, the sensory fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader's imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

" 'To explore this idea, we created a connected book and wearable [vest]. The 'augmented' book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist's physiological emotions' " (http://www.theguardian.com/books/2014/jan/28/sensory-fiction-mit-technology-wearable-fiction-books?commentpage=1, accessed 01-29-2014).

View Map + Bookmark Entry

The New York Times Hires a Chief Data Scientist January 31, 2014

On January 31, 2014 engineering.columbia.edu announced that Chris Wiggins, associate professor of applied mathematics at Columbia's Institute for Data Sciences and Engineering, a founding member of the University’s Center for Computational Biology and Bioinformatics (C2B2), and co-founder of hackNY, was appointed chief data scientist by The New York Times.

“ 'The New York Times is creating a machine learning group to help learn from data about the content it produces and the way readers consume and navigate that content,' says Wiggins. 'As a highly trafficked site with a broad diversity of typical user patterns, the New York Times has a tremendous opportunity to listen to its readers at web scale.' "

" 'Data science in general and machine learning in particular are becoming central to the way we understand our customers and improve our products,' adds Marc Frons, chief information officer of The New York Times. 'We're thrilled to have Chris leading that effort.'

"Wiggins, whose activities at Columbia range from bioinformatics to mentoring activities to keep students off “the street” (Wall) by helping them join New York City’s exploding tech startup community, focuses his research on applications of machine learning to real-world data.

“ 'The dominant challenges in science and in business are becoming more and more data science challenges,' Wiggins explains. 'Solving these problems and training the next generation of data scientists is at the heart of the mission of Columbia’s Institute for Data Sciences and Engineering.'

"In creating the Institute, the University is drawing upon its extraordinary strengths in interdisciplinary research: nine schools across Columbia are collaborating on a broad range of research projects. Wiggins and his colleagues at the Engineering School are integrating mathematical, statistical, and computer science advances with a broad range of fields: 'We’re enabling better health care, smarter cities, more secure communications, and developing the future of journalism and media.' " (http://engineering.columbia.edu/ny-times-taps-prof-wiggins-chief-data-scientist, accessed 02-15-2014).

View Map + Bookmark Entry

How Book Censorship Works in Jordan February 2014

In February 2014 the Jordanian website 7iber.org published an article, with graphics, on the mechanism of book censorship in Jordan. The article, in Arabic, could be roughly translated into English by Google Translate.

View Map + Bookmark Entry

Filed under: Censorship

Animatronic Book Hive Sculpture to Commemorate the 400th Anniversary of the Bristol Central Library February 2014

To commemorate the 400th anniversary of the Bristol Central Library, founded as the Old Library, Bristol, in 1613, the library installed in its lobby a remarkable sculpture in which hundreds of old books were turned into an animatronic honeycomb hive. The sculpture was explained and shown in an accompanying film.

View Map + Bookmark Entry

The KUKA KR AGILUS in a Table Tennis Match Against Timo Boll February 9 – March 10, 2014

On February 9, 2014 KUKA (Keller und Knappich Augsburg), an international manufacturer of industrial robots and solutions for factory automation based in Augsburg, Germany, uploaded a video to its official YouTube channel KukaRobotGroup, teasing the audience with their new robot, the KUKA KR AGILUS, which they characterized as the "Fastest Robot on Earth." The teaser video showed a trailer of KUKA's robot competing against the German table tennis star Timo Boll at a staged match in Sofia, Bulgaria. 

The full video was available on March 10, 2014. Because the video was not a real match, but a commercial with extensive computer graphic imagery (CGI), it received strong criticism from the table tennis community. However, the "match" undoubtedly achieved its purpose, as when I checked on YouTube in May 2014, the video had been viewed more than 5 million times.

The company also uploaded a video on the making of the "commercial" :

View Map + Bookmark Entry

"Cheap Words. Amazon is good for consumers but is it good for books?" February 17, 2014

On February 10, 2014 I read an article in NewYorker.com by George Packer entitled "Cheap Words. Amazon is good for consumers but is it good for books?" The article was dated February 17, 2014. In my opinion the whole article was very much worth reading, but since I could not quote all of it, selections are quoted below:

"The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year. But its brand of creative destruction might be killing more jobs than it makes. According to a recent study of U.S. Census data by the Institute for Local Self-Reliance, in Washington, brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.

"In the book industry, many of those formerly employed people staffed independent stores. Two decades ago, there were some four thousand in America, and many of them functioned as cultural centers where people browsed and exchanged ideas. Today, there are fewer than two thousand—although, with Borders dead and Barnes & Noble ailing, the indies are making a small comeback. Vivien Jennings, of Rainy Day Books, has been in business for thirty-eight years. “We know our customers, and the other independents are the same,” she said. “We know what they read better than any recommendation engine.”

"After Amazon’s legal triumph, some publishing people were driven to the wild surmise that the company had colluded with the Justice Department, if not micromanaged the entire case. They grasped at the fact that Jamie Gorelick, a deputy attorney general in the Clinton Administration, and a friend of Attorney General Eric Holder, serves on Amazon’s board, and that three weeks after Judge Cote’s decision President Barack Obama appeared at an Amazon warehouse in Chattanooga—where workers earn, on average, eleven dollars an hour—to praise the company’s creation of good jobs. The coup de grâce came last November, when the cash-strapped U.S. Postal Service announced a special partnership to deliver Amazon—and only Amazon—packages on Sundays, with the terms kept under official seal. To some people in the book world, Obama’s embrace of their nemesis felt like a betrayal. One literary agent said, “It’s strange that a President who’s an author, and whose primary income has come from being an author, was siding with a monopoly that wants to undercut publishers.”

"Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”

“Book publishing always has a rhetoric of the fallen age,” a senior editor at a major house told me. “It was always better before you got here. The tech guys—it’s always better if you just get out of my way and give me what I want. It’s always future-perfect.” He went on, “Their whole thing is ‘Let’s take somebody’s face and innovate on it. There’s an old lady—we don’t know we’re innovating unless she’s screaming.’ A lot of it is thoughtless innovation.” . . . .

"Book publishers’ dependence on Amazon, however unwilling, keeps growing. Amazon constitutes a third of one major house’s retail sales on a given week, with the growth chart pointing toward fifty per cent. By contrast, independents represent under ten per cent, and one New York editor said that only a third of the three thousand brick-and-mortar bookstores still in existence would remain financially healthy if publishers didn’t waive certain terms of payment. Jane Friedman, the former Random House and HarperCollins executive, who now runs a digital publisher called Open Road Integrated Media, told me, “If there wasn’t an Amazon today, there probably wouldn’t be a book business.” The senior editor who met Grandinetti said, “They’re our biggest customer, we want them to succeed. As I recover from being punched in the face by Amazon, I also worry: What if they are a bubble? What if the stock market suddenly says, ‘We want a profit’? You don’t want your father who abuses you physically to lose his job.”

"In 2009, after a career at publishers large and small, Robinson was laid off by Scribner, amid downsizing. Faced with his own professional extinction, and perhaps the industry’s, he co-founded a new company, OR Books, with a different business model. Robinson did research and found that fifty to sixty per cent of the list price of a book goes to Amazon or to another retailer. When he was starting out, in the eighties, that figure was more like thirty or forty per cent. A small-to-midsize publisher has to spend between ten and fifteen per cent on sales, warehousing, and shipping. This leaves little more than twenty-five per cent of the book’s price for editorial counsel, production costs, publicity, paying the author, and whatever profit might be left over. A shared sensibility for a certain kind of fiction or nonfiction writing unites everyone along the way: authors, agents, editors, designers, marketers, reviewers, readers. “The only point at which Bezos enters that chain is to take all the money and the e-mail address of the buyer,” Robinson said. “There’s an entire community of people, and Bezos stands in the middle of it and collects the money.”

"Instead of going through Amazon, OR Books sells directly to customers, using printers in Minnesota and the U.K. It pays about fifteen per cent to the printer and keeps the rest. “After four years, we’re just profitable,” Robinson told me. “It works.”

"To the Big Five, locked in a death struggle with Amazon and the distracted American reader, this kind of experimentation might seem unrealistic. To survive, they are trying to broaden their distribution channels, not narrow them. But Andrew Wylie thinks that it’s exactly what a giant like Penguin Random House should do. “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Proctor & Gamble. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read."

View Map + Bookmark Entry

The First Project to Investigate the Use of Instagram During a Social Upheaval February 17 – February 22, 2014

On October 14, 2014 computer scientist and new media theorist Lev Manovich of The Graduate Center, City University of New York informed the Humanist Discussion Group of the project by his Software Studies Initiative entitled The Exceptional & The Everyday: 144 Hours in Kiev. This was the first project analyzing the use of Instagram images during a social upheaval using computational and data visualization techniques. The project explored 13,203 Instagram images shared by 6,165 people in the central area of Kiev, Ukraine during the 2014 Ukrainian revolution from February 17 to February 22, 2014. Collaborators on the project included Mehrdad Yazdani of the University of California, San Diego, Alise Tifentale, a PhD student in art history at The Graduate Center, City University of New York, and Jay Chow, a web developer in San Diego. The project seems to have been first publicized on the web by FastCompany and TheGuardian on October 8, 2014.


Visualizations and Analysis: Visualizing the images and data and interpreting the patterns. 

Context and Methods: Brief summary of the events in Kiev during February 17-22, 2014; our research methods. 

Iconography of the Revolution: What are the popular visual themes in Instagram images of a revolution? (essay by Alise Tifentale).

The Infra-ordinary City: Representing the ordinary from literature to social media (essay by Lev Manovich). 

The Essay: "Hashtag #Euromaidan: What Counts as Political Speech on Instagram?" (guest essay by Elizabeth Losh).

Constructing the dataset: Constructing the dataset for the project; data privacy issues.

References: Bibliography of relevant articles and projects.


Lev Manovich, Alise Tifentale, Mehrdad Yazdani, and Jay Chow. "The Exceptional and the Everyday: 144 Hours in Kiev." The 2nd Workshop on Big Humanities Data held in conjunction with IEEE Big Data 2014 Conference, forthcoming 2014.


The Exceptional and the Everyday: 144 hours in Kiev continues previous work of our lab (Software Studies Initiative, softwarestudies.com) with visual social media: phototrails.net (analysis and visualization of 2.3 million Instagram photos in 14 global cities, 2013); selfiecity.net (comparison between 3200 selfie photos shared in six cities, 2014; collaboration with Moritz Stefaner). In the new project we specifically focus on the content of images, as opposed to only their visual characteristics. We use computational analysis to locate typical Instagram compositions and manual analysis to identify the iconography of a revolution. We also explore non-visual data that accompanies the images: most frequent tags, the use of English, Ukrainian and Russian languages, dates and times when images were shared, and their geo-coordinates." 
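The kind of tag-frequency and language analysis the project describes can be sketched in a few lines. The records and field names below are invented for illustration, not the project's actual data or code:

```python
# A minimal sketch of counting the most frequent tags and languages in a set
# of shared images; every record and field name here is hypothetical.
from collections import Counter

posts = [
    {"tags": ["euromaidan", "kiev", "maidan"], "lang": "uk"},
    {"tags": ["euromaidan", "revolution"], "lang": "en"},
    {"tags": ["kiev", "euromaidan"], "lang": "ru"},
]

# Flatten all tag lists into one stream and tally them.
tag_counts = Counter(tag for post in posts for tag in post["tags"])
# Tally the language of each post.
lang_counts = Counter(post["lang"] for post in posts)

print(tag_counts.most_common(2))  # [('euromaidan', 3), ('kiev', 2)]
```

The same `Counter` pattern extends directly to posting times or geo-coordinates binned by hour or by city block.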

View Map + Bookmark Entry

Publishing the Wikipedia in 1000 Physical Volumes?? February 20, 2014

A scheme to publish an edition of the Wikipedia on paper—seemingly in an edition of a single set of 1000 volumes (presumably without an index)—was floated in February 2014. The purpose of such a set appeared to be as some kind of memorial to the mass of information created in the Wikipedia. I found the idea peculiarly retrograde, as the whole point of the Wikipedia is that it is electronically searchable, freely available to all, and in a perpetual state of growth, flux and improvement online. A set of 1000 volumes would not only be costly to print and bind, but physically unmanageable, and available to virtually no one. It would also be extremely difficult to use, if it were actually intended to be used. A decent index for such a monster might run to 50 or more volumes by itself, though no index appears to be planned. In any case, how would a sane person locate specific pages in a physical set containing 1,193,014 pages—the number of pages estimated in the article below—even if an index were available? Of course, such a physical edition might have made sense as a physical library resource before the Internet, but the whole point of the Wikipedia is that it is a product of the Internet, and could not have existed before the Internet.
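To be fair, with continuous page numbering the arithmetic of locating a page is at least mechanically simple. A hypothetical sketch, assuming a uniform 1,200 pages per volume as the organizers propose:

```python
# Locating a continuous page number in the proposed 1,000-volume set,
# assuming (hypothetically) exactly 1,200 pages in every volume.

PAGES_PER_VOLUME = 1200

def locate(page_number):
    """Return (volume, page within that volume) for a continuous page number."""
    volume = (page_number - 1) // PAGES_PER_VOLUME + 1
    page_in_volume = (page_number - 1) % PAGES_PER_VOLUME + 1
    return volume, page_in_volume

# The estimated last page, 1,193,014, would fall on page 214 of volume 995:
print(locate(1_193_014))  # (995, 214)
```

Of course this only finds a page, not an article; without an index one would still need to know the page number in advance.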

From theguardian.com on February 20, 2014, I quote.

"It would run to over a million pages, featuring more than four million articles by 20 million volunteers: a "record-breaking" new project to turn Wikipedia into 1,000 books has just launched on Indiegogo.

"Conceived by the team who work on the open source book tool for Wikipedia at publisher PediaPress, the Indiegogo fundraiser is looking to raise $50,000 (£30,000) to bring Wikipedia into print. "We all know that Wikipedia is huge. The English version alone consists of more than four million articles. But can you imagine how large Wikipedia really is? We think that the best way to experience the size of Wikipedia is by transforming it into the physical medium of books," they write. "'Containing the most volumes and edited by the largest number of contributors the printed edition will be a work of record-breaking dimensions. Furthermore the exhibit aims to honour the countless volunteers who have created this fascinating trove of knowledge in little more than 10 years.'

"The team, PediaPress's Heiko Hees, Christoph Kepper and Alex Boerger, believe the complete English Wikipedia would fit into approximately 1,000 books, with 1,200 pages each. 'All volumes will have continuous page numbers, so the last article could as well be on page number 1,193,014,' they say. The text, which will include images, will be laid out in three columns across 600,000-odd sheets of paper, which will be 'FSC-certified paper that comes from sustainable forestry', they said.

"They envisage the books fitting on a 10m-long book case, which they would hope to display at the Wikimania conference in London this August, alongside "live updates [printed] on continuous paper" to show the update frequency of the website, as 'obviously a printed Wikipedia will be outdated within seconds'.

"Then, if enough money is raised, they hope to send the exhibition on an international tour, before donating it to a large public library. 'To later generations this might be a period piece from the beginning of the digital revolution,' they say.

 " 'The most important reason for starting the project now was that we wanted to show the quantity of information,' said Kepper. "Over the last 10 to 15 years a glacial change has occurred and we have become so used to very large data sources on the internet that their true size seems surprising and unexpected. We are at the crossroads between the Gutenberg age and the information age. We all grew up with books and know this medium very well, but today we are dealing with so much information that books become less and less useful for us. For our kids, the situation might be totally different and the idea of printing something into a book might seem totally absurd.'

"So far, they have raised just over $2,000 of their $50,000 goal, but the project has 52 days yet to run, and while saying that there is "something strikingly counterintuitive about the whole project … and with the divide between print and digital widening, the value of such an exercise should be questioned" Wired magazine points out that 'Wikipedia does have a huge community that might see the project receive significant support'.

" 'The Wikipedia community is enormous and tight-knit, so don't be shocked if the campaign is successful eventually,' said Crowdfund Insider."

 "After the first eight days (16%) of our 60-day funding campaign, we reached only 4% of our funding goal," said Kepper. "Maybe we are already further into the information age than we thought, but we still feel pretty confident that our campaign will find enough supporters."

View Map + Bookmark Entry

Growth of Audiobooks Parallels Growth of eBooks February 22, 2014

On February 22, 2014 The New York Times published an op-ed piece by Stanford anthropologist T. M. Luhrmann entitled "Audiobooks and the Return of Storytelling," which suggested that the popularity of this form of reading is growing along with that of digital books, as people explore different ways of reading. From it I quote:

"The sale of audiobooks has skyrocketed in recent years. In 2012, total industry sales in the book business fell just under 1 percent over all, but those of downloadable audiobooks rose by more than 20 percent. That year, 13,255 titles came out as audiobooks, compared with 4,602 in 2009. Publishers seem to be paying more attention to their production. When Simon and Schuster published Colm Toibin’s “Testament of Mary” last autumn, the narrator was Meryl Streep.

"We tend to regard reading with our eyes as more serious, more highbrow, than hearing a book read out loud. Listening to a written text harkens back to childhood, when we couldn’t read it ourselves, or a time when our parents left off reading the chapter out loud in the middle, a nudge that we’d use our school-taught skills to finish it off by ourselves.

"The great linguist Ferdinand de Saussure thought we treated writing as more important than speaking because writing is visual. Speech is ephemeral — you hear a word, and then it is gone. The word written down remains, and so we attach more significance to it. Saussure wrote that when we imagined text as more important than speech, it was as if we thought we would learn more about someone from his photograph than from his face.

"But so it is. The ability to read has always been invested with more importance than mere speech. When only a small priestly elite could read, books were sacred mysteries. When more people could read, literacy became a means to move forward in the world. These days, the ability to read is a prerequisite for full participation in the social order.

"But for most of human history literature has been spoken out loud. The Iliad and the Odyssey were sung. We think that the Homeric singers of those tales mastered the prodigious mnemonic task presented by those thousands upon thousands of lines of text through an intricate combination of common phrases — rosy-fingered dawn, the wine-dark sea — and nested plots that could be expanded or shortened as the occasion demanded.

"Even after narratives were written down, they were more often heard than read. The Roman elites could read, but gatherings at which people recited their poetry were common. And before the modern era, when printing made books widely available and literacy became widespread, reading was an oral act. People read aloud not only to others but also to themselves, and books, as the historian William Graham puts it in 'Beyond the Written Word,' were meant for the ears as much, or more so, than for the eyes.

"In the early 17th century the Jesuit missionary to China Matteo Ricci captured the orality of writing in this letter to a Peking publisher: 'The whole point of writing something down is that your voice will then carry for thousands of miles, whereas in direct conversation it fades at a hundred paces.' Mr. Graham writes that in Europe, silent private reading became widespread only in the second half of the 19th century."

This last assertion that "silent private reading became widespread only in the second half of the 19th century," did not strike me as correct, so I made a note to myself to verify or deny the assertion, someday. My sense was that silent reading was the method of choice since the Renaissance, but, I suppose, if we take into account the limited overall literacy during that time, and the dramatic growth of literacy that occurred in the second half of the 19th century, then Mr. Graham's assertion regarding "widespread" silent reading could be relatively correct. 

View Map + Bookmark Entry

Selfiecity.net. Analysis and Visualization of Thousands of Selfie Photos. . . . February 25, 2014

On February 25, 2014 I received this email from "new media" theorist Lev Manovich via the Humanist Discussion Group, announcing the launch of a cutting edge website analyzing the "Selfie" phenomenon: 

 "Date: Sat, 22 Feb 2014 21:00:30 +0000
        From: Lev Manovich <manovich@softwarestudies.com>
        Subject: Introducing selfiecity.net - analysis and visualization of thousands of selfie photos from five global cities

"Welcome to Selfiecity!

I'm excited to announce the launch of our new research project selfiecity.net. The website presents analysis and interactive visualizations of 3,200 Instagram selfie photos, taken between December 4 and 12, 2013, in Bangkok, Berlin, Moscow, New York, and São Paulo.

The project explores how people represent themselves using mobile photography in social media by analyzing the subjects’ demographics, poses, and expressions.

Selfiecity investigates selfies using a mix of theoretic, artistic and quantitative methods:

* Rich media visualizations in the Imageplots section assemble thousands of photos to reveal interesting patterns.
* An interactive component of the website, a custom-made app Selfiexploratory invites visitors to filter and explore the photos themselves.
* Theory and Reflection section of the website contribute to the discussion of the findings of the research. The authors of the essays are art historians Alise Tifentale (The City University of New York, The Graduate Center) and Nadav Hochman (University of Pittsburgh) as well as media theorist Elizabeth Losh (University of California, San Diego).

The project is led by Dr. Lev Manovich, leading expert on digital art and culture; Professor of Computer Science, The Graduate Center, CUNY; Director, Software Studies Initiative."
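The Selfiexploratory component described above, which lets visitors filter the photo set by attributes, can be sketched as a simple attribute filter. The records and field names below are hypothetical stand-ins, not the project's actual data model:

```python
# A hedged sketch of Selfiexploratory-style filtering: narrow a set of photo
# records by matching attributes. All records and field names are invented.
selfies = [
    {"city": "Berlin", "gender": "female", "age": 24, "smile": 0.8},
    {"city": "Moscow", "gender": "female", "age": 31, "smile": 0.2},
    {"city": "Berlin", "gender": "male", "age": 28, "smile": 0.6},
]

def filter_selfies(photos, **criteria):
    """Keep only the photos whose fields equal every given criterion."""
    return [p for p in photos if all(p.get(k) == v for k, v in criteria.items())]

print(len(filter_selfies(selfies, city="Berlin")))                 # 2
print(len(filter_selfies(selfies, city="Berlin", gender="male")))  # 1
```

The real site applies such filters interactively in the browser over its 3,200 photos; the principle is the same.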

Considering the phenomenon that selfies had become, I was not surprised when two days later reference was made, also via the Humanist Discussion Group, to  "a very active Facebook group https://www.facebook.com/groups/664091916962292/ 'The Selfies Research Network'." When I looked at this page in February 2014 the group had 298 members, mostly from academia, but also including professionals in fields like social media, from many different countries.

View Map + Bookmark Entry

"Unlooting the Iraq Museum": A Summary February 25, 2014

Regarding objects looted from Iraq's National Museum, on February 25, 2014 I received the following email from Stewart Brand and the Long Now Foundation via the SALT (Seminars About Long-term Thinking) mailing list. The email, titled "Unlooting the Iraq Museum," reported on a lecture previously given in San Francisco by Col. Matthew Bogdanos, who was instrumental in recovering many of the looted objects. Because the letter presented a meaningful, concise summary of the context and extent of the looting and the extent of recovery, I decided to quote it in full: 
"Iraq’s National Museum in Baghdad had been closed to the public by Saddam Hussein for over two decades when his regime fell in April 2003.  Iraqis felt no connection to the world renowned cultural treasures inside.  Like every other government building, it was trashed and looted.
Marine Col. Matthew Bogdanos, then in Basra leading a counter-terrorism group, volunteered part of his team to attempt recovery of the lost artifacts.  He arrived at the museum with 14 people to protect its dozen buildings and 11 acres in a still-active battle zone.  Invited by the museum director, they took up residence and analyzed the place as a crime scene.
Missing were some of civilization’s most historic archeological treasures.  From 3200 BC, the Sacred Vase of Warka, the world’s oldest carved stone ritual vessel.  From 2600 BC, the solid gold bull’s head from the Golden Harp of Ur.  From 2250 BC, the copper Akkadian Bassetki Statue, the earliest known example of lost-wax casting.  From 3100 BC, the limestone Mask of Warka, the first naturalistic depiction of a human face.  From 800 BC, the Treasure of Nimrud— a fabulous hoard of hundreds of pieces of exquisite Assyrian gold jewelry and gems.  Plus thousands of other artifacts and antiquities, including Uruk inscribed cylinder seals from 2500 BC.
Bidding on the international antiquities black market went to $25,000 for Uruk cylinder seals, $40 million for the Vase of Warka.

Since the goal was recovery, not prosecution, Bogdanos instituted a total amnesty for return of stolen artifacts—no questions asked, and also no payment, just a cordial cup of tea for thanks.  Having learned from duty in Afghanistan to listen closely to the locals, Bogdanos and his team walked the streets, visited the mosques, played backgammon in the neighborhoods, and followed up on friendly tips (every one of which turned out to be genuine).  3,000 items had been taken from the museum by random looters.  Local Iraqis returned 95% of them.  
The prime pieces stolen by professional thieves took longer to track down.  Raids on smugglers’ trucks and hiding places turned up more items.  The Bassetki Statue was found hidden in a cesspool; the Mask of Warka had been buried in the ground.  Some pieces began turning up all over the world and were seized when identified. (Bogdanos noted that Geneva, Switzerland, is where that kind of contraband often rests in warehouses that law enforcement is not allowed to search.)
It turned out Saddam himself had looted the museum of the Treasure of Nimrud and the gold bull’s head back in 1990.  Tips led to a flooded underground vault in the bombed-out Central Bank of Iraq, and the priceless items were discovered.  
Everything found was returned to the Iraq National Museum, where the great antiquities are gradually being restored to public display.  Iraq, and the world, is retaking possession of its most ancient heritage.
Bogdanos quoted Sophocles: “Whoever neglects the arts… has lost the past and is dead to the future.”
—Stewart Brand (sb@longnow.org)
(This talk was neither recorded nor filmed, because material presented in it is part of a still on-going investigation.  You can get the full story from Bogdanos’ excellent book, Thieves of Baghdad.)"

PHEME: A Social Media Lie Detector February 27, 2014

On February 27, 2014 the following post came across Willard McCarty's Humanist Discussion Group. With its reference to cutting-edge social media research in the PHEME project, founded in January 2014, combined with the literary quotation on gossip from the Roman poet Ovid's Metamorphoses, this was one of McCarty's characteristically wise posts. It is quoted in full:

Date: Thu, 27 Feb 2014 06:38:05 +0000
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: a social media lie detector?

Two researchers from the Institute of Psychiatry, King's College London, are part of an EU project, PHEME, which aims automatically to detect four types of online rumours (speculation, controversy, misinformation, and disinformation) and to model their spread. "With partners from seven different countries, the project will combine big data analytics with advanced linguistic and visual methods. The results will be suitable for direct application in medical information systems and digital journalism." I note in particular the qualifying statement that,

> However, it is particularly difficult to assess whether a piece of
> information falls into one of these categories in the context of
> social media. The quality of the information here is highly dependent
> on its social context and, up to now, it has proven very challenging
> to identify and interpret this context automatically.

Indeed. Ovid would, I think, be amused:

> tota fremit vocesque refert iteratque quod audit;
> nulla quies intus nullaque silentia parte,
> nec tamen est clamor, sed parvae murmura vocis,
> qualia de pelagi, siquis procul audiat, undis
> esse solent, qualemve sonum, cum Iuppiter atras
> increpuit nubes, extrema tonitrua reddunt.
> atria turba tenet: veniunt, leve vulgus, euntque
> mixtaque cum veris passim commenta vagantur
> milia rumorum confusaque verba volutant;
> e quibus hi vacuas inplent sermonibus aures,
> hi narrata ferunt alio, mensuraque ficti
> crescit, et auditis aliquid novus adicit auctor.
> illic Credulitas, illic temerarius Error
> vanaque Laetitia est consternatique Timores
> Seditioque recens dubioque auctore Susurri;
> ipsa, quid in caelo rerum pelagoque geratur
> et tellure, videt totumque inquirit in orbem.
> The whole place is full of noises, repeats all words and doubles what
> it hears. There is no quiet, no silence anywhere within. And yet
> there is no loud clamour, but only the subdued murmur of voices, like
> the murmur of the waves of the sea if you listen afar off, or like
> the last rumblings of thunder when Jove has made the dark clouds
> crash together. Crowds fill the hall, shifting throngs come and go,
> and everywhere wander thousands of rumours, falsehoods mingled with
> the truth, and confused reports flit about. Some of these fill their
> idle ears with talk, and others go and tell elsewhere what they have
> heard; while the story grows in size, and each new teller makes
> contribution to what he has heard. Here is Credulity, here is
> heedless Error, unfounded Joy and panic Fear; here sudden Sedition
> and unauthentic Whisperings. Rumour herself beholds all that is done
> in heaven, on sea and land, and searches throughout the world for
> news.

Ovid, Met. 12.47-63 (Loeb edn)

See http://www.pheme.eu/ for more."


"The Web at 25 in the U.S." by the Pew Research Internet Project February 27, 2014

Coinciding with the 25th anniversary of Tim Berners-Lee's initial conception of the World Wide Web in March 1989, on February 27, 2014 the Pew Research Internet Project of Washington, D.C. released its report on the 25th anniversary of the World Wide Web: The Web at 25 in the U.S. 

Here are some of the general conclusions drawn in the report:

90 percent of Americans think that the Internet has been a good thing for them personally.

$75,000 is the income level at which Internet usage becomes almost ubiquitous. A full 99 percent of Americans who report this level of household income are on the Web.

28 percent of landline telephone owners would find it “very hard” to give up their phones. That is a big drop from 2006, when 48 percent of landline owners struggled with the idea of giving up their landline phones.

11 percent represents the gap between those who would find it “very hard” to give up the Internet (46 percent) and television (35 percent).

58 percent of Americans own a smartphone.

3-to-1: The ratio of Internet users who think that social media strengthens their relationships versus those who think it weakens them.

76 percent of Internet users say the people they witness or encounter online are “mostly kind” to each other.


In a Turnabout, A Digital Publisher Plans to Put Newsweek Magazine Back in Print March 2, 2014

On March 2, 2014 Leslie Kaufman reported in The New York Times that IBT Media planned to republish Newsweek magazine in print once again, after the print version had failed several times. This time the purpose of the print publication was, it seems, to promote more exposure for the web version:

"Steven Cohn, editor in chief of Media Industry Newsletter, said Newsweek’s decision to print the magazine made good business sense.

“ 'The print magazine is kind of a prop to give the web better exposure,' Mr. Cohn said. 'For Newsweek, having a cover can have its advantages. You can appear on ‘Meet the Press.’ Celebrities and politicians like being on actual covers on the newsstand. They have stripped the costs way down. So really, what do they have to lose?'

This decision, of course, represented a contrarian view, made by publishers who were successful and profitable in the online world.

"Newsweek’s print ambitions are modest. It plans to print 70,000 copies — at its peak two decades ago, circulation was 3.3 million — and sell them for $7.99 each, with the magazine’s content also available online for a more affordable price.

“ 'You would pay only if you don’t want to read anything on a backlit screen,' Mr. Uzac said. 'It is a luxury product.' "

During the past twenty years the misfortunes of Newsweek paralleled the decline of print media and the advance of digital:

"The Graham family, longtime newspaper publishers, gave up and sold it for a dollar. The media mogul Barry Diller spent tens of millions trying to revive it, only to throw in the towel. Even Mr. Diller’s star editor, Tina Brown, could not stop it from going out of print.

"But where giants failed, IBT Media, a small digital publishing company, sees a growth path for Newsweek, the struggling newsweekly magazine it bought for a pittance last summer.

"Etienne Uzac, 30, and Johnathan Davis, 31, founders of IBT Media, believed they could recreate Newsweek as a vibrant and profitable web-only magazine. But now, having tripled Newsweek’s online traffic, they plan to punctuate the magazine’s comeback by turning on the printing presses again. Hard copies are expected to hit newsstands on Friday."

Not all experts thought this decision was wise:

"Yet Newsweek’s reappearance in print comes at a fraught moment for the industry. Time Inc. — the parent of Newsweek’s longtime archrival, Time magazine, as well as Sports Illustrated and Fortune — recently laid off roughly 500 people to cut costs.

"Across the industry, newsstand sales of consumer magazines fell 11 percent in the second half of 2013 from the period a year earlier. Paid subscriptions dropped 1.2 percent, according to the Alliance for Audited Media’s most recent data."


UC Berkeley is the First American University to Hire a Wikipedian-in-Residence March 17, 2014

On March 17, 2014 the Contra Costa Times reported that UC Berkeley hired the first Wikipedian-in-residence:

"BERKELEY -- Citing Wikipedia in a research paper may still be a huge faux pas, but for a growing number of college students, the online encyclopedia is now the assignment.

"Enter 24-year-old Kevin Gorman, UC Berkeley's new Wikipedian-in-residence.

"In January, the campus hired the Wikipedia devotee (interests: wild mushrooms, women in philosophy) to coach students and advise professors on the deceptively complex task of editing articles for the user-generated encyclopedia that gets 500 million monthly visitors.

" 'The goal of cultural institutions is in large part to share knowledge, to make their information accessible to the general public,' Gorman says. 'I think it would be really, really cool to get that information online one way or another so its access will no longer be limited to people at Berkeley who have Berkeley credentials.'

"Rather than write term papers to be read by a professor and forgotten, students at UC Berkeley and elsewhere are being asked to make their mark on the site. More than 150 universities nationwide -- including the University of San Francisco and California Maritime Academy -- have classes producing content for the encyclopedia, according to Wiki Education, a foundation created in July to support such projects.

" 'Students are the fuel of Wikipedia,' said Frank Schulenburg, who directs the foundation.

"Cal is the first American university to create a position devoted to improving the site -- and getting its own rarefied scholarship out to the public. Some museums around the world have Wikipedians and Harvard's Houghton Museum last week advertised for one.

"Gorman has edited Wikipedia obsessively since his undergraduate days at Cal. But don't call him a 'WikiGnome,' as UC Berkeley did in a 2012 headline about the 6-foot-5 undergraduate geography major. 'I have no idea why someone chose to call me that a couple of years ago,' he said.

"Aside from interviews -- news about the position caught the attention of reporters in Germany and Spain, he said -- Gorman has spent his first weeks on the job training students, teaching assistants and professors how to produce and source Wiki articles, a more complicated task than it might seem.

"Changes need to be explained, and they often are discussed with other editors at length, in an article's 'talk' page. Subjective or weakly sourced entries may be deleted, something junior Katrina Anasco hopes doesn't happen to her group project on the Toxic Substances Control Act.

" 'It definitely opened my eyes to how much work it is to actually get an edit into a page,' said Anasco, a student in professor Dara O'Rourke's environmental justice class.

"Gorman and Schulenburg say college students bring needed racial and gender diversity to a site dominated by young white men, many of them computer programmers. While the site has more than 4.5 million entries, the information tends to be skewed to their topics and perspectives.

"Search for a battleship or sports car and the resulting article will likely be 'gorgeous' in its detail, Schulenburg said. But not a single female philosopher had been written about until a few years ago, Gorman added.

"Gorman has filled some of those gaps himself. And now, editing Wikipedia articles is part of the curricula in environmental justice and cultural studies courses taught by O'Rourke and Victoria Robinson."


DeepFace, Facial Verification Software Developed at Facebook, Approaches Human Ability March 17, 2014

On March 17, 2014 MIT Technology Review published an article by Tom Simonite on Facebook's facial recognition software, DeepFace, which I quote:

"Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.

"That’s a significant advance over previous face-matching software, and it demonstrates the power of a new approach to artificial intelligence known as deep learning, which Facebook and its competitors have bet heavily on in the past year (see 'Deep Learning'). This area of AI involves software that uses networks of simulated neurons to learn to recognize patterns in large amounts of data.

"'You normally don’t see that sort of improvement,' says Yaniv Taigman, a member of Facebook’s AI team, a research group created last year to explore how deep learning might help the company (see 'Facebook Launches Advanced AI Effort'). 'We closely approach human performance,' says Taigman of the new software. He notes that the error rate has been reduced by more than a quarter relative to earlier software that can take on the same task.

"Facebook’s new software, known as DeepFace, performs what researchers call facial verification (it recognizes that two images show the same face), not facial recognition (putting a name to a face). But some of the underlying techniques could be applied to that problem, says Taigman, and might therefore improve Facebook’s accuracy at suggesting whom users should tag in a newly uploaded photo.

"However, DeepFace remains purely a research project for now. Facebook released a research paper on the project last week, and the researchers will present the work at the IEEE Conference on Computer Vision and Pattern Recognition in June. 'We are publishing our results to get feedback from the research community,' says Taigman, who developed DeepFace along with Facebook colleagues Ming Yang and Marc’Aurelio Ranzato and Tel Aviv University professor Lior Wolf.

"DeepFace processes images of faces in two steps. First it corrects the angle of a face so that the person in the picture faces forward, using a 3-D model of an 'average' forward-looking face. Then the deep learning comes in as a simulated neural network works out a numerical description of the reoriented face. If DeepFace comes up with similar enough descriptions from two different images, it decides they must show the same face.

"The performance of the final software was tested against a standard data set that researchers use to benchmark face-processing software, which has also been used to measure how humans fare at matching faces.

"Neeraj Kumar, a researcher at the University of Washington who has worked on face verification and recognition, says that Facebook’s results show how finding enough data to feed into a large neural network can allow for significant improvements in machine-learning software. 'I’d bet that a lot of the gain here comes from what deep learning generally provides: being able to leverage huge amounts of outside data in a much higher-capacity learning model,' he says.

"The deep-learning part of DeepFace consists of nine layers of simple simulated neurons, with more than 120 million connections between them. To train that network, Facebook’s researchers tapped a tiny slice of data from their company’s hoard of user images—four million photos of faces belonging to almost 4,000 people. 'Since they have access to lots of data of this form, they can successfully train a high-capacity model,' says Kumar.
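The two-step pipeline the article describes (align the face, then have the network emit a numerical description and compare descriptions for similarity) can be sketched in miniature. This is not Facebook's actual DeepFace code; the toy descriptor vectors, the cosine-similarity measure, and the 0.8 threshold below are all illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_face(desc_a, desc_b, threshold=0.8):
    """Verification: are two numerical face descriptions
    'similar enough' to be declared the same person?"""
    return cosine_similarity(desc_a, desc_b) >= threshold

# Toy descriptors standing in for the network's output on aligned faces
face1 = [0.9, 0.1, 0.3]
face2 = [0.88, 0.12, 0.31]  # near-duplicate vector: same person
face3 = [0.1, 0.9, -0.4]    # dissimilar vector: different person

print(same_face(face1, face2))  # True
print(same_face(face1, face3))  # False
```

Note that verification in this sense never names anyone: it only answers "same person or not?" for a pair of descriptions, which is why the article distinguishes it from recognition.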


Facebook Acquires Oculus VR, Designer and Builder of Virtual Reality Headsets March 25, 2014

On March 25, 2014 Mark Zuckerberg, Founder of Facebook, announced in his blog:

"I'm excited to announce that we've agreed to acquire Oculus VR, the leader in virtual reality technology.

"Our mission is to make the world more open and connected. For the past few years, this has mostly meant building mobile apps that help you share with the people you care about. We have a lot more to do on mobile, but at this point we feel we're in a position where we can start focusing on what platforms will come next to enable even more useful, entertaining and personal experiences.

"This is where Oculus comes in. They build virtual reality technology, like the Oculus Rift headset. When you put it on, you enter a completely immersive computer-generated environment, like a game or a movie scene or a place far away. The incredible thing about the technology is that you feel like you're actually present in another place with other people. People who try it say it's different from anything they've ever experienced in their lives.

"Oculus's mission is to enable you to experience the impossible. Their technology opens up the possibility of completely new kinds of experiences.

"Immersive gaming will be the first, and Oculus already has big plans here that won't be changing and we hope to accelerate. The Rift is highly anticipated by the gaming community, and there's a lot of interest from developers in building for this platform. We're going to focus on helping Oculus build out their product and develop partnerships to support more games. Oculus will continue operating independently within Facebook to achieve this.

"But this is just the start. After games, we're going to make Oculus a platform for many other experiences. Imagine enjoying a court side seat at a game, studying in a classroom of students and teachers all over the world or consulting with a doctor face-to-face -- just by putting on goggles in your home.

"This is really a new communication platform. By feeling truly present, you can share unbounded spaces and experiences with the people in your life. Imagine sharing not just moments with your friends online, but entire experiences and adventures.

"These are just some of the potential uses. By working with developers and partners across the industry, together we can build many more. One day, we believe this kind of immersive, augmented reality will become a part of daily life for billions of people.

"Virtual reality was once the dream of science fiction. But the internet was also once a dream, and so were computers and smartphones. The future is coming and we have a chance to build it together. I can't wait to start working with the whole team at Oculus to bring this future to the world, and to unlock new worlds for all of us."

When I wrote this entry on April 6, 2014 Facebook's website stated that 195,678 Facebook users "liked" Zuckerberg's announcement.

News media stated that Facebook paid $2,000,000,000 for Oculus VR, headquartered in Irvine, California. 


Using Data-Mining of Location-Based Food and Drink Habits to Identify Cultural Boundaries April 2014

In April 2014 Thiago H Silva and colleagues, mainly from the Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, MG, Brazil, reported results of data-mining food and drink habits from the location-based social media site, Foursquare.
Prior to the application of data-mining to the problem, the World Values Survey, a global network of social scientists studying values and their impact on social and political life, conducted over 250,000 interviews in 87 societies between 1981 and 2008. Between 2010 and 2014 the World Values Survey conducted 80,000 interviews. However, that traditional approach was very time-consuming and expensive. 

Thiago H Silva, Pedro O S Vaz de Melo, Jussara Almeida, Mirco Musolesi, Antonio Loureiro, "You are What you Eat (and Drink): Identifying Cultural Boundaries by Analyzing Food & Drink Habits in Foursquare," http://arxiv.org/abs/1404.1009.
"Food and drink are two of the most basic needs of human beings. However, as society evolved, food and drink became also a strong cultural aspect, being able to describe strong differences among people. Traditional methods used to analyze cross-cultural differences are mainly based on surveys and, for this reason, they are very difficult to represent a significant statistical sample at a global scale. In this paper, we propose a new methodology to identify cultural boundaries and similarities across populations at different scales based on the analysis of Foursquare check-ins. This approach might be useful not only for economic purposes, but also to support existing and novel marketing and social applications. Our methodology consists of the following steps. First, we map food and drink related check-ins extracted from Foursquare into users' cultural preferences. Second, we identify particular individual preferences, such as the taste for a certain type of food or drink, e.g., pizza or sake, as well as temporal habits, such as the time and day of the week when an individual goes to a restaurant or a bar. Third, we show how to analyze this information to assess the cultural distance between two countries, cities or even areas of a city. Fourth, we apply a simple clustering technique, using this cultural distance measure, to draw cultural boundaries across countries, cities and regions."
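The four-step methodology in the abstract (map check-ins to preferences, extract individual tastes, compute a cultural distance, then cluster) can be sketched with toy data. The city names, category counts, Euclidean distance measure, and clustering threshold below are all hypothetical stand-ins for the paper's Foursquare-scale analysis:

```python
import math

# Hypothetical check-in counts per city over food/drink categories
checkins = {
    "CityA": {"pizza": 120, "sake": 5,  "coffee": 80},
    "CityB": {"pizza": 110, "sake": 8,  "coffee": 90},
    "CityC": {"pizza": 10,  "sake": 95, "coffee": 40},
}

CATEGORIES = ["pizza", "sake", "coffee"]

def preference_vector(counts):
    """Steps 1-2: normalize raw check-in counts into a preference vector."""
    total = sum(counts.get(c, 0) for c in CATEGORIES)
    return [counts.get(c, 0) / total for c in CATEGORIES]

def cultural_distance(u, v):
    """Step 3: Euclidean distance between two preference vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cluster(cities, threshold=0.3):
    """Step 4: naive single-link grouping -- cities closer than the
    threshold to any member of a group join that group."""
    vectors = {name: preference_vector(c) for name, c in cities.items()}
    clusters = []
    for name, vec in vectors.items():
        for group in clusters:
            if any(cultural_distance(vec, vectors[m]) < threshold for m in group):
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(cluster(checkins))  # [['CityA', 'CityB'], ['CityC']]
```

With these toy counts, the pizza-and-coffee cities group together while the sake-leaning city stands alone, which is the shape of the "cultural boundary" the paper draws at country, city, and neighborhood scales.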

"Book Traces" a Crowd-Sourced Web Project April 2014

In April 2014 Andrew Stauffer of the University of Virginia launched Book Traces, a crowd-sourced web project to identify unique copies of nineteenth and early twentieth-century books on library shelves. Its goal was to identify annotations and other unique customizations made by original owners in personal copies, primarily in the form of marginalia and inserts.

"Sponsored by NINES at the University of Virginia and led by Andrew Stauffer, Book Traces is meant to engage the question of the future of the print record in the wake of wide-scale digitization. The issue is particularly urgent for the materials from the long nineteenth century.  In most cases, pre-1800 books have been moved to special collections, and post-1923 materials remain in copyright and thus on the shelves for circulation.  But college and university libraries are now increasingly reconfiguring access to public-domain texts via repositories such as Google Books.  We see a prevalent assumption in library policy circles that copies of any given nineteenth-century edition are identical.  As a result, we are now anticipating the withdrawal of large portions of nineteenth-century print collections in favor of digital surrogates.

"We believe that, as libraries begin to manage down their print holdings, special attention should be paid to these historical materials, which in many cases came to university libraries from alumni donors and bear marks of use by their original nineteenth-century owners.  These books thus constitute a massive, distributed archive of the history of reading, hidden in plain sight in the circulating collections.  Marginalia, inscriptions, photos, original manuscripts, letters, drawings, and many other unique pieces of historical data can be found in individual copies, many of them associated with the history of the institution that collected the books in the first place. These unique attributes cannot be located by any electronic catalog. Each book has to be open and examined.

"That’s where you come in. Thousands of such books — marked or otherwise customized by nineteenth-century owners – are on library shelves.  Via Book Traces, NINES hopes to help devise a triage process for discovering them, cataloguing them more fully, and making better-informed decisions about print collections management. Further, we hope to model this process as part of a national effort for coordinating the management of nineteenth-century printed materials in college and university libraries."

My attention to Book Traces was drawn by an article by Alexis C. Madrigal published in The Atlantic on May 7, 2014, and entitled, "What is a Book? Not just a bag of words but a thing held by human hands."


Digital Humanities Quarterly to Publish Articles as Sets of Visualizations Rather than Articles in Verbal Form April 1, 2014

On April 1, 2014 I was surprised and intrigued to read this post on the Humanist Discussion Group, Vol. 27, No. 933 from Julia Flanders, Editor-in-Chief of Digital Humanities Quarterly, published at Brown University in Providence, R.I.:

"Subject: New publishing model for Digital Humanities Quarterly

"Dear all,

"DHQ is pleased to announce an experimental new publication initiative that may be of interest to members of the DH community. As of April 1, we will no longer publish scholarly articles in verbal form. Instead, articles will be processed through Voyant Tools and summarized as a set of visualizations which will be published as a surrogate for the article. The full text of the article will be archived and will be made available to researchers upon request, with a cooling-off period of 8 weeks. Working with a combination of word clouds, word frequency charts, topic modeling, and citation networks, readers will be able to gain an essential understanding of the content and significance of the article without having to read it in full. The results are now visible at DHQ’s site here:


"We’re excited about this initiative on several counts. First, it helps address a growing problem of inequity between scholars who have time to read and those whose jobs are more technical or managerial and don’t allow time to keep up with the growing literature in DH. By removing the full text of the article from view and providing a surrogate that can be easily scanned in a few minutes, we hope to rectify this imbalance, putting everyone on an equal footing. A second, related problem has to do with the radical insufficiency of reading cycles compared with the demand for reading and citation to drive journal impact factor. To the extent that readers are tempted to devote significant time to individual articles, they thereby neglect other (possibly equally deserving) articles and the rewards of scholarly attention are distributed unevenly, based on arbitrary factors such as position within the journal’s table of contents. DHQ’s reading interface will resort articles randomly at each new page view, and will display each article to a given reader for no more than 5 minutes, enforcing a more equitable distribution of scarce attention cycles.

"This initiative also addresses a deeper problem. At DHQ we no longer feel it is ethical to publish long-form articles under the pretense that anyone actually reads them. At the same time, it is clear that scholars feel a deep, almost primitive need to write in these modes and require a healthy outlet for these urges. As an online journal, we don’t face any physical restrictions that would normally limit articles to a manageable size, and informal attempts to meter authors by the word (for instance, by making words over a strict count limit only intermittently visible, or blocking them with advertising) have proven ineffectual. Despite hopes that Twitter and other short-form media would diminish the popularity of long-form sustained arguments, submissions of long-form articles remain at high levels. We hope that this new approach will balance the needs of both authors and readers, and create a more healthy environment for scholarship.

"Thanks for your support of DHQ and happy April 1!

"best wishes, Julia."

As far as I could tell on April 1, 2014, an example of the visualizations published by Digital Humanities Quarterly could be found at this link. With each article DHQ published the following statement:

"Read about DHQ’s new publishing model, and, if you must, view the article in its original verbal form." [Boldface is my addition.]

Exactly how the visualization provided would be an adequate substitute for the full text of the article, or even a verbal abstract, remained a mystery to me when I wrote this entry on April 1, 2014.


How the Large-Scale British Printing Industry is Adapting to the Digital Age April 12, 2014

On April 12, 2014 The New York Times published an article by Georgi Kantchev entitled "Leaner and More Efficient, British Printers Push Forward in Digital Age." From this I quote sections:

"PETERBOROUGH, ENGLAND — At a media conference a few years ago, the editor of The Guardian newspaper, contemplating the future of print, recalled his paper’s installation of its newest presses in 2005.

“ 'I had a feeling in my bones that they might be the last,' said the editor, Alan Rusbridger.

"The efforts of traditional print media executives to grope their way into the digital future have been well chronicled. But what about the executives even more tightly bound to the presses — the people who run big printing companies?

". . . . In many ways, printing itself has gone digital. Industrial-strength laser printers enable big printing plants to make quick and cost-effective small-batch runs on demand. Even Wyndeham’s big offset machines — which print from lithographic plates created from digital files — are so highly automated that a crew of just a dozen or so can put them through their paces.

“ 'This is almost a peopleless business now,' Mr. Kingston said as he walked through the huge but mostly deserted printing hall. 'At one point we had 350 people in this plant. Now we have 114. But the amount of work has more than doubled.'

"Back in the 1990s, Mr. Kingston said, the plant had three presses that could turn out about 20,000 copies of a 32-page publication in an hour. Now there are two machines that are capable of producing triple that amount.

"Driving the point home, Wyndeham’s plant was about to print an issue of The Economist with a cover that read “Rise of the robots.”

“ 'People are losing their jobs and there is no way to spin that,' Mr. Kingston said. 'Now you have to be lean, mean and clean to succeed in this business.'

"In 2001, the British printing industry had around 200,000 employees. There are now fewer than 125,000, according to data from the British Printing Industries Federation.

"Britain’s printing industry, though large, is not the biggest worldwide. It is ranked fifth by revenue behind the United States, China, Japan and Germany. Yet its challenges and opportunities are emblematic.

"Sales by British printers have been in steady decline in the last 20 years, according to data from the government’s Office for National Statistics, and there is no respite in sight. The industry’s revenue is projected to shrink to about 10 billion pounds, or approximately $17 billion, by 2017, down from more than £15 billion in the 1990s, according to Key Note, a market research company. . . .

"The global printing industry, with estimated revenue of $880 billion last year, will continue to grow by about 2 percent a year until 2018, driven mainly by emerging market countries, in the view of Smithers Pira, another research company. China will probably overtake the United States as the world’s biggest print market this year, Smithers Pira said, while India will slip ahead of Britain into the No. 5 spot by 2018. . . .

"The industry’s survivors are holding on, in part by moving beyond print media publishing into packaging and labeling — parts of the physical world where a digital equivalent cannot easily follow. 'We are all buying things in cardboard boxes or in tins that have some kind of label on them,' Mr. Picard said, 'so the printing industry is really expanding its work in that area.'

"Mr. Kingston considers packaging 'the biggest growth area of printing, without any shadow of a doubt.' He cited a Swedish study of supermarket shoppers that found price was often secondary to labeling in people’s purchasing decisions. 'Most people buy products they don’t even know about, based on the packaging, based on the thing that is hanging in front of them,' he said.

"Wyndeham, like other survivors, is no longer devoted solely to paper and ink. The company has developed a product it calls “emagine,” a form of production management software that media companies can use for print and online delivery. It has also set up a division to focus on online media applications, 'whether it is on mobile, smartphones or iPads,' said Paul Utting, chief executive of Wyndeham.

"Printers are also responding to the changing desires of advertisers, who have grown accustomed to tailoring their pitches to narrow niches thanks to online media.

"Digital printing — the professional version of desktop laser or inkjet printing — makes it feasible for retailers to print bespoke catalogs aimed at individual customers’ buying preferences.

"The same applies to books and magazines, Mr. Kingston said. 'We can now make a bespoke edition of any magazine; we can bind it in a different way and use special colors. We can personalize it and send it. There is much higher added value there.' "

View Map + Bookmark Entry

Matthew Gentzkow Receives Clark Medal for Study of Media Through Big Data Sets April 18, 2014

On April 18, 2014 the University of Chicago Booth School of Business reported that the American Economic Association had named Booth School Professor Matthew Gentzkow winner of the 2014 John Bates Clark Medal, awarded to an American economist under the age of 40 who is judged to have made the most significant contribution to economic thought and knowledge.

The Clark Medal, named after the American economist John Bates Clark, is considered one of the two most prestigious awards in the field of economics, along with the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. 

Gentzkow studies empirical industrial organization and political economy, with a specific focus on media industries, using large-scale data sets. His recent studies included a set of papers that looked at political bias in the news media; a second set that examined the impact of television on society from several perspectives; and a third set that explored questions of persuasion. A full list of those studies is available here.

"Mr. Gentzkow, 38, has used deeply researched, data-driven projects to examine what drives ideological biases in newspapers and how the Internet is remaking the traditional media landscape.

"He has also studied the societal impact of mass media, including how student test scores were affected by the introduction of television decades ago, and how the shift by media consumers to television ultimately reduced voter turnout.

“ 'Media has been a fun area to study because it combines rich economics with political and social aspects,' Mr. Gentzkow said in a telephone interview on Thursday. With the advent of the Internet and the ability to quickly analyze huge amounts of data, 'the set of questions that can be answered using economic methods has exploded,' he said.

"As automated text analysis became widely available, for example, it became possible to examine how news is presented by rapidly scanning newspaper articles for ideologically laden terms like estate tax versus death tax, or war on terror versus war in Iraq.

“ 'Economists had thought about this, but media had been a pretty small part of economics because the data weren’t as good,' Mr. Gentzkow said. 'This work would have been impossible 20 years ago.' . . . .

"In a 2010 paper, Mr. Gentzkow and Jesse M. Shapiro, a frequent collaborator and fellow professor at Chicago Booth, found that ideological slants in newspaper coverage typically resulted from what the audience wanted to read in the media they sought out, rather than from the newspaper owners’ biases.

"Research by Mr. Gentzkow and Mr. Shapiro from 2008 found that television viewing by preschool children did not hurt their test scores during adolescence. In fact, they found, there was actually a small benefit to watching television for students in homes where English was not the main language or the mother had less than a high school education" (http://www.nytimes.com/2014/04/18/business/media/university-of-chicago-economist-who-studies-media-receives-clark-medal.html?_r=0, accessed 04-18-2014).

View Map + Bookmark Entry

The OED is Predicted to Disappear from Print April 20, 2014 – 2034

According to an article by Paraic Flanagan published in Telegraph.co.uk on April 20, 2014, the next edition of the Oxford English Dictionary (OED), the first edition of which was published between 1884 and 1928, will probably not appear until 2034, and then perhaps exclusively in electronic form:

". . .  It’s all academic for now anyway, they say, because the third edition of the famous dictionary, estimated to fill 40 volumes, is running at least 20 years behind schedule.

"Michael Proffitt, the OED’s first new chief editor for 20 years, said the mammoth masterpiece is facing delays because 'information overload' from the internet is slowing his compilers.

"His team of 70 philologists, including lexicographers, etymologists and pronunciation experts, has been working on the latest version, known as OED3, for the past 20 years.

"Michael Proffitt revealed to Country Life magazine that the next edition will not be completed until 2034, and likely only to be offered in an online form because of its gargantuan size.

“A lot of the first principles of the OED stand firm, but how it manifests has to change, and how it reaches people has to change,” said the 48-year-old Edinburgh-born editor.

"Work on the new version, currently numbering 800,000 words, has been going on since 1994. The first edition, mooted in 1858 with completion expected in 10 years, took 70 years.

“Although the internet has made access easier,” said Mr Proffitt, “it’s also created the dilemma of information overload.

“ 'In 1989, we looked for five years’ recorded usage before a word entered the dictionary. Now, it’s 10 years because there is so much more material to sift through.

“ 'We look not only for frequency and longevity, but also breadth of use because, once a word enters the OED, it doesn’t come out. It’s a permanent record of language. I don’t think of it as a purely linguistic document, but as a part of social history.'

"He said his team working on the definition of new entries has a target of 50 to 60 words a month, slower than in the past because the world wide web has created so much more source material.

"Mr Proffitt said: 'I averaged about 80 when I started because, in 1989, we didn’t have computers on our desks, so there was a limit to how much you could research. The library was our primary resource.'

"The challenge facing his team was highlighted by associate editor Peter Gilliver, who once spent nine months revising definitions for the word 'run', currently the longest single entry in the OED.

“ 'We can hear everything that’s going on in the world of English for the last 500 years, and it’s deafening,' he told the New York Times.

"If the new dictionary is printed – and publishers Oxford University Press say a print version will only appear if there is sufficient demand at the time - it will comprise 40 volumes, double the length of the second edition in 1989.

"Almost one third of a million entries were contained in the 21,730 pages of the second version of the OED, which sells for £750 and had been online since 2000, where it receives more than two million hits a month.

"The latest electronic edition of the OED acknowledges the difficulties of producing commercially-viable print versions, saying: 'The English language is far too large and diverse to be fully recordable in a dictionary, even one the size of the OED.'

"Mr Proffitt said the internet represented a lifeline for giant reference works like the OED. 'Strong works of reference have great future on the internet.

“ 'The idea is to link from the context in which people are working directly into the OED: providing information at the point at which it’s needed.'

"Other planned innovations include linking the OED to a Historical Thesaurus developed over the past 30 years by a team in Glasgow, he said" (http://www.telegraph.co.uk/culture/culturenews/10777079/RIP-for-OED-as-worlds-finest-dictionary-goes-out-of-print.html, accessed 04-21-2014.)

View Map + Bookmark Entry

The Lely Astronaut A4 Robotic Cow Milking System April 22, 2014

On April 22, 2014 The New York Times published an article by Jesse McKinley entitled, "With Farm Robotics, the Cows Decide When It's Milking Time," reporting on a robotic installation on a dairy farm in Easton, New York. More than any article I had previously read, this brought home the reality that robots had moved from the factory floor to lines of work that I would not previously have imagined. From it I quote:

"Desperate for reliable labor and buoyed by soaring prices, dairy operations across the state are charging into a brave new world of udder care: robotic milkers, which feed and milk cow after cow without the help of a single farmhand.

"Scores of the machines have popped up across New York’s dairy belt and in other states in recent years, changing age-old patterns of daily farm life and reinvigorating the allure of agriculture for a younger, tech-savvy — and manure-averse — generation.

“ 'We’re used to computers and stuff, and it’s more in line with that,' said Mike Borden, 29, a seventh-generation dairyman, whose farm upgraded to robots, as others did, when disco-era milking parlors — the big, mechanized turntables that farmers use to milk many cows at once — started showing their age.

“ 'And,' Mr. Borden added, 'it’s a lot more fun than doing manual labor.'

View Map + Bookmark Entry

Amazon May Control "More than a Third" of the U.S. Book Trade May 8, 2014

On May 8, 2014, in an article regarding a dispute between the publisher Hachette and Amazon.com published in The New York Times, David Streitfeld wrote that Amazon "controls more than a third of the book trade in the United States."

Less than a year earlier, on July 5, 2013, Streitfeld had written that Amazon sold:

"about one in four new books, and the vast number of independent sellers on its site increases its market share even more. It owns as a separate entity the largest secondhand book network, Abebooks. And of course it has a majority of the e-book market." 

View Map + Bookmark Entry

Visionary Plans for a New City Library in Baghdad May 8, 2014

On May 8, 2014 Ibraaz.org published a dramatically illustrated article by London-based AMBS Architects on the philosophy and design of a new city library in Baghdad, Iraq: "Designing the Future. What Does It Mean to be Building a Library in Iraq?" From this I quote:

"The idea of building a new library in Iraq has been met with equal measures of impassioned hope as much as worn cynicism. AMBS Architects were commissioned to design a new library for Baghdad by the Ministry of Youth and Sport in November 2011. The supposedly simple brief of 'a modern library' for the 'youth' of Baghdad, as presented by the Ministry of Youth and Sport, required an exercise in re-learning and questioning the conventional model of a library. Our investigation into what a new library in Iraq could be was primary. The idea of 'youth' as an audience is especially significant given 63 per cent of Iraq's population are under 24, with nearly 12.8 million (43 per cent) of these under the age of 15, and a further 15 per cent between the ages of 25 and 35. Thus, we posed the question – how might people, especially young people, engage with learning in the future?

"Iraq's youth have been brought up surrounded by violence and instability. For the past decade educational services have rapidly deteriorated and opportunities for work and personal development have declined. When AMBS's founding director Ali Mousawi returned to Iraq in 2003, what he saw and still sees today is that the Iraqi youth are in many ways lost. Before 2003, Iraq had almost collapsed after a 13-year embargo and eight years of war. This kept the country isolated from the world and from modern technology.

"From the beginning, the question of what such a library can pose for the future of Iraq has been a reoccurring one. The recent turbulent history of over a decade of conflict during the occupation has left Iraq relatively isolated from the rest of the world. The neglect of Iraq's cultural resources has meant much of Iraq's literary history has been lost. Today there is a strong will to rebuild in Iraq and this project brings the hope of the re-ascendancy of intellectual life in Baghdad. As such, conceptualizing the library has demanded an understanding of the complex context and significance of raising a new library.

"The Director of the Iraq National Library and Archives (INLA), Dr Saad Eskander who has played a valuable role in preserving Iraq's literary heritage wrote about the new library:

"It is imperative for the new Iraq to consolidate its young democracy and good governance through knowledge. New libraries have a notable role to play by promoting unconditional access to information, freedom of expression, cultural diversity and transparency. By responding to the needs of Iraq's next generation, the new library, we hope, will play an important role in the future of our country. . . .

View Map + Bookmark Entry

The National September 11 Memorial Museum Opens May 15, 2014

On May 15, 2014 the National September 11 Memorial Museum opened at Ground Zero in New York City. One day earlier The New York Times published a review of the museum by Holland Cotter entitled "The 9/11 Story Told at Bedrock, Powerful as a Punch to the Gut. Sept. 11 Memorial Museum at Ground Zero Prepares for Opening."

From the standpoint of media history, I found the interactive feature, also published in The New York Times on May 14, an example of how news coverage had been changed by the Internet. The feature by Leslye Davis, Alicia Desantis, Graham Roberts and Matt Ruby was entitled, "A New Story Told at Ground Zero. The National September 11 Memorial Museum." In a magazine format the feature took you from the surface down underground and through the new museum. As you read the narrative and viewed the striking images, you passed narrated videos that played automatically, allowing designers of the museum to explain the purposes and design of its different features in their own words. It was a remarkably effective introduction to a museum recording the September 11, 2001 events, which unfolded, it was estimated, before two billion people on television, and whose repercussions continued to be felt worldwide more than a decade later.

Prior to the Internet The New York Times would have covered the story in print with more text and captioned still images. Statements by designers of the museum would have been quoted. In "A New Story Told at Ground Zero" the museum designers were given the opportunity to tell the story in their own words and in the individual sound of their own voice.

Also on May 14, 2014 The New York Times published "9/11 Artifacts and the Stories They Tell" by Stephen Farrell, which consisted of five narrated videos. Of special interest to me was the first video, The Keepers of 9/11, in which the chief curator of the September 11 Memorial Museum and officials at other New York institutions described how they selected the objects "by which future generations will remember September 11."

View Map + Bookmark Entry

The First Braille Cell Phone May 16, 2014

On May 16, 2014 BBC.com announced that the London-based firm OwnFone of Islington had released what it called the first Braille cell phone. The front and back of the phone were constructed using 3D printing techniques and could be customized. Other companies had designed Braille phones in the past, but OwnFone said that its device was the first to go on sale. Initially the phone was available only in the UK, at a retail price of £60. According to its inventor, Tom Sunderland, 3D printing the front and back of the device helped to keep costs down.

"The company, founded on the principles of simplicity, ease, andaffordability within the mobile phone market, gained notoriety back in 2012 when they introduced the first customizable handset which partially used 3D printing technology. A year later they introduced the 1stFone, which was targeted towards children ages 9-12. The 1stFone gave parents the ability to customize the device with buttons to call important people.

"This week OwnFone introduced the next device to their personalized phone lineup. This device, simply called the OwnFone Braille is specifically created for the vision impaired, and is the very first Braille phone available to consumers. Those interested, simply can go to the OwnFone website and customize the device. Once there, the customer has the option of choosing which names and numbers they would like programmed onto the main screen of the phone. The online system automatically converts English into braille. The customer can also customize the color for the face of the phone, or even add customized pictures if they choose, for a small additional £5 fee. Once created, the phone’s front and back, including the raised braille are 3D printed with stereolithography based technology. Tom Sunderland, the founder of OwnFone decided to use 3D printing because it was the cheapest method for creating hundreds of phones, all which have a different form to them" (http://3dprint.com/3930/ownfone-braille-3d-printed/, accessed 05-18-2014).

View Map + Bookmark Entry

The GDELT Project: The Largest Open-Access Database on Worldwide News Media May 29, 2014

On May 29, 2014 Kalev H. Leetaru announced in the Google Cloud Platform Blog that the entire quarter-billion-record GDELT Event Database (Global Data on Events, Location and Tone) was available as a public dataset in Google BigQuery. The database contained records beginning in 1979. It monitored worldwide news media in over 100 languages.

He wrote:

"BigQuery is Google’s powerful cloud-based analytical database service, designed for the largest datasets on the planet. It allows users to run fast, SQL-like queries against multi-terabyte datasets in seconds. Scalable and easy to use, BigQuery gives you real-time insights about your data. With the availability of GDELT in BigQuery, you can now access realtime insights about global human society and the planet itself!

"You can take it for a spin here. (If it's your first time, you'll have to sign-up to create a Google project, but no credit card or commitment is needed).

"The GDELT Project pushes the boundaries of “big data,” weighing in at over a quarter-billion rows with 59 fields for each record, spanning the geography of the entire planet, and covering a time horizon of more than 35 years. The GDELT Project is the largest open-access database on human society in existence. Its archives contain nearly 400M latitude/longitude geographic coordinates spanning over 12,900 days, making it one of the largest open-access spatio-temporal datasets as well.

"From the very beginning, one of the greatest challenges in working with GDELT has been in how to interact with a dataset of this magnitude. Few traditional relational database servers offer realtime querying or analytics on data of this complexity, and even simple queries would normally require enormous attention to data access patterns and intricate multi-column indexing to make them possible. Traditional database servers require the creation of indexes over the most-accessed columns to speed queries, meaning one has to anticipate apriori how users are going to interact with a dataset. 

"One of the things we’ve learned from working with GDELT users is just how differently each of you needs to query and analyze GDELT. The sheer variety of access patterns and the number of permutations of fields that are collected together into queries makes the traditional model of creating a small set of indexes impossible. One of the most exciting aspects of having GDELT available in BigQuery is that it doesn’t have the concept of creating explicit indexes over specific columns – instead you can bring together any ad-hoc combination of columns and query complexity and it still returns in just a few seconds. This means that no matter how you access GDELT, what columns you look across, what kinds of operators you use, or the complexity of your query, you will still see results pretty much in near-realtime. 

"For us, the most groundbreaking part of having GDELT in BigQuery is that it opens the door not only to fast complex querying and extracting of data, but also allows for the first time real-world analyses to be run entirely in the database. Imagine computing the most significant conflict interaction in the world by month over the past 35 years, or performing cross-tabbed correlation over different classes of relationships between a set of countries. Such queries can be run entirely inside of BigQuery and return in just a handful of seconds. This enables you to try out “what if” hypotheses on global-scale trends in near-real time.

"On the technical side, BigQuery is completely turnkey: you just hand it your data and start querying that data – that’s all there is to it. While you could spin up a whole cluster of virtual machines somewhere in the cloud to run your own distributed clustered database service, you would end up spending a good deal of your time being a systems administrator to keep the cluster working and it wouldn’t support BigQuery’s unique capabilities. BigQuery eliminates all of this so all you have to do is focus on using your data, not spending your days running computer servers. 

"We automatically update the public dataset copy of GDELT in BigQuery every morning by 5AM ET, so you don’t even have to worry about updates – the BigQuery copy always has the latest global events. In a few weeks when GDELT unveils its move from daily updates to updating every 15 minutes, we’ll be taking advantage of BigQuery’s new stream updating capability to ensure the data reflects the state of the world moment-by-moment.

"Check out the GDELT blog for future posts where we will showcase how to harness some of BigQuery’s power to perform some pretty incredible analyses, all of them running entirely in the database system itself. For example, we’re particularly excited about the ability to use features like BigQuery’s new Pearson correlation support to be able to search for patterns across the entire quarter-billion-record dataset in just seconds. And we can’t wait to see what you do with it. . . ." 

Regarding GDELT, in April 2013 Leetaru and co-developer of the project, Philip A. Schrodt, presented an illustrated paper at the International Studies Association meetings held in San Francisco: GDELT: Global Data on Events, Location and Tone, 1979-2012.

View Map + Bookmark Entry

The Enigma Database for Deciphering Difficult to Read Words in Medieval Latin Manuscripts June 2014

Enigma, a database developed by the Digital Humanities program of the CIHAM - UMR 5648 research center at CNRS - Université Lyon 2 - EHESS - ENS de Lyon - Université d'Avignon et des Pays de Vaucluse - Université Lyon 3, was designed to help scholars decipher difficult-to-read words in medieval Latin manuscripts.

"If you type the letters you can read and add wildcards, Enigma will list the possible Latin forms, drawing from its database of more than 400,000 forms. 
Nota bene: Enigma does NOT solve abbreviations. To do so, you can resort to A. Cappelli's famous dictionary, available online (ed. Milan, 1912, and ed. Leipzig, 1928). If you cannot resolve an abbreviation, replace it by a wildcard in your Enigma query. . . ." 
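The wildcard lookup the quotation describes can be sketched against a toy word list. The eight forms and the `fnmatch`-style patterns below (`?` for one unreadable letter, `*` for any run of them) are illustrative assumptions, not Enigma's actual 400,000-form database or its query syntax:

```python
import fnmatch

# Toy stand-in for Enigma's database of Latin forms.
LATIN_FORMS = ["dominus", "domini", "domino", "dominum",
               "domus", "donum", "bonus", "annus"]

def lookup(pattern, forms=LATIN_FORMS):
    """Return the forms matching the pattern, where '?' stands for one
    unreadable letter and '*' for any number of unreadable letters."""
    return sorted(f for f in forms if fnmatch.fnmatch(f, pattern))

print(lookup("do?in?s"))  # → ['dominus']
print(lookup("do*us"))    # → ['dominus', 'domus']
```

As the nota bene says, a scribal abbreviation would first have to be replaced by a wildcard by hand before a query like this could help.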

View Map + Bookmark Entry

Thieves of Gutenberg Bible are Sentenced in Russia June 6, 2014

On June 6, 2014 BBC.com reported that a Russian court had sentenced three agents of Russia's Federal Security Service (FSB) for attempting to sell a Gutenberg Bible stolen from Lomonosov Moscow State University. Colonel Sergei Vedischev was sentenced to more than three years in a penal colony; his two accomplices, who had tried to find a buyer, received lighter sentences. The thieves had offered the Bible to a collector for under $1.15 million, perhaps one-twentieth of its value in 2014.

Regarding the theft, Eric White commented on the Ex-Libris newsgroup on June 8, 2014:

"This is the paper copy that had been in Leipzig until the Soviet army took it in 1945; it was held secretly until just a few years ago.  No one I know has actually been able to see it."

On June 5, 2014 police-russia.info reported on the theft in Russian, and in more detail, at this link.

View Map + Bookmark Entry

A Robot Writes Out a Torah Scroll at Berlin's Jewish Museum July 10, 2014

On July 10, 2014 Phys.org reported on the demonstration of a Torah-writing robot at the Jewish Museum Berlin, Germany (Jüdisches Museum Berlin) by the German artists group robotlab.

". . . . While it takes the machine about three months to complete  the 80-meter (260-foot) -long scroll, a rabbi or a sofer—a Jewish scribe—needs nearly a year. But unlike the rabbi's work, the robot's Torah can't be used in a synagogue.

" 'In order for the Torah to be holy, it has to be written with a goose feather on parchment, the process has to be filled with meaning and I'm saying prayers while I'm writing it,' said Rabbi Reuven Yaacobov. The Berlin rabbi curiously eyed the orange-painted robot as it ceaselessly wrote down the first book of Moses. Yaacobov then showed visitors the traditional way of writing the Torah the way it's been done for thousands of years.

"Matthias Gommel from robotlab said the robot initially wrote down the Christian Bible in German, Spanish and Portuguese before it was reprogrammed with the help of an Israeli graphic designer."

View Map + Bookmark Entry

Sotheby's Officially Teams with eBay for Online Auctions July 14, 2014

On July 14, 2014 The New York Times published an article entitled, "A Warhol With Your Moosehead? Sotheby's Teams with eBay" by Carol Vogel and Mike Isaac, from which I quote:

"Convinced that consumers are finally ready to shop online for Picassos and choice Persian rugs in addition to car parts and Pez dispensers, Sotheby’s, the blue-chip auction house, and eBay, the Internet shopping giant, plan to announce Monday that they have formed a partnership to stream Sotheby’s sales worldwide.

"Starting this fall, most of Sotheby’s New York auctions will be broadcast live on a new section of eBay’s website. Eventually the auction house expects to extend the partnership, adding online-only sales and streamed auctions taking place anywhere from Hong Kong to Paris to London. The pairing would upend the rarefied world of art and antiques, giving eBay’s 145 million customers instant bidding access to a vast array of what Sotheby’s sells, from fine wines to watercolors by Cézanne.

"This isn’t the first time the two companies have teamed up; a 2002 collaboration fizzled after only a year. But officials say the market has matured in recent years, making the moment right for a new collaboration.

"The announcement comes just months after the activist shareholder Daniel S. Loeb criticized Sotheby’s for its antiquated business practices, likening the company to “an old painting in desperate need of restoration” and calling for directors there to beef up its online sales strategy. It also signals a new phase in Sotheby’s age-old rivalry with Christie’s. After years of running neck and neck, Sotheby’s has recently been losing business to its main competitor — and Christie’s is planning its own bold move to capture more online business, a $50 million investment that will include more Internet-only auctions and a redesigning of its website scheduled for October.

"Online auctions are not new to either auction house. Registered bidders can compete in certain sales in real time with the click of a mouse. What is new is the way Sotheby’s is trying to reach beyond its traditional customers to an enormous affluent global audience for whom online buying has become second nature. Luxury shopping websites like Gilt and 1st Dibs, with their broad mix of décor, designer fashion and antiques, have shown that shoppers are willing to spend many thousands of dollars on everything from handbags to sconces without inspecting them in person. And while the auction houses are seeing their online bidding grow — Sotheby’s, for example, says its sales on its website increased 36 percent in 2013 over the previous year — they believe the full potential of online sales has yet to be tapped."

View Map + Bookmark Entry

The Impact of Social Media on Journalism July 27, 2014

On July 27, 2014 media journalist David Carr published a column in The New York Times, entitled "At Front Lines, Bearing Witness in Real Time." The column was of special interest for its historical perspective on the rapidly evolving influence of social media on journalism. From it I quote:

"Geopolitics and the ubiquity of social media have made the world a smaller, gorier place. If Vietnam brought war into the living room, the last few weeks have put it at our fingertips. On our phones, news alerts full of body counts bubble into our inbox, Facebook feeds are populated by appeals for help or action on behalf of victims, while Twitter boils with up-to-the-second reporting, some by professionals and some by citizens, from scenes of disaster and chaos.

"For most of recorded history, we have witnessed war in the rearview mirror. It took weeks and sometimes months for Mathew Brady and his associates’ photos of the bloody consequences of Antietam to reach the public. And while the invention of the telegraph might have let the public know what side was in ascent, images that brought a remote war home frequently lagged.

"Then came radio reports in World War II, with the sounds of bombs in the background, closing the distance between men who fought wars and those for whom they were fighting. Vietnam was the first war to leak into many American living rooms, albeit delayed by the limits of television technology at the time. CNN put all viewers on a kind of war footing, with its live broadcasts from the first gulf war in 1991.

"But in the current news ecosystem, we don’t have to wait for the stentorian anchor to arrive and set up shop. Even as some legacy media outfits have pulled back, new players like  Vice and BuzzFeed have stepped in to sometimes remarkable effect. Citizen reports from the scene are quickly augmented by journalists. And those journalistic boots on the ground begin writing about what they see, often via Twitter, before consulting with headquarters about what it all means. . . ."

"Bearing witness is the oldest and perhaps most valuable tool in the journalist’s arsenal, but it becomes something different delivered in the crucible of real time, without pause for reflection. It is unedited, distributed rapidly and globally, and immediately responded to by the people formerly known as the audience.

 "It has made for a more visceral, more emotional approach to reporting. War correspondents arriving in a hot zone now provide an on-the-spot moral and physical inventory that seems different from times past. That emotional content, so noticeable when Anderson Cooper was reporting from the Gulf Coast during Hurricane Katrina in 2005, has now become routine, part of the real-time picture all over the web.

 "The absence of the conventional layers of journalism — correspondents filing reports that are then edited for taste and accuracy — has gotten several journalists in trouble, mostly for responding in the moment to what they saw in front of them."

View Map + Bookmark Entry

Amazon States its Economic Position on the Sale of E-Books in the "Amazon/Hachette Business Interruption" July 29, 2014

On July 29, 2014 Amazon.com, which probably controlled more than one-third of the book trade in the United States, stated its position in its Kindle forum blog on the very public dispute between Amazon and Hachette publishers over the terms of e-book sales. Amazon believed that $9.99 was the optimal price at which sales of most e-book titles would be maximized. It also believed that 35% of the revenue gained from e-book sales should go to the author, 35% to the publisher, and 30% to Amazon. This global proposal for e-book distribution and income sharing represented a sea change in the traditional economics of printed book publishing, under which publishers set retail prices on a title-by-title basis, authors rarely received more than 10% of revenue received from printed books, and trade discounts were negotiated between the publisher and distributors and bookstores. The Amazon Books team wrote:

"A key objective is lower e-book prices. Many e-books are being released at $14.99 and even $19.99. That is unjustifiably high for an e-book. With an e-book, there's no printing, no over-printing, no need to forecast, no returns, no lost sales due to out-of-stock, no warehousing costs, no transportation costs, and there is no secondary market -- e-books cannot be resold as used books. E-books can be and should be less expensive.

"It's also important to understand that e-books are highly price-elastic. This means that when the price goes up, customers buy much less. We've quantified the price elasticity of e-books from repeated measurements across many titles. For every copy an e-book would sell at $14.99, it would sell 1.74 copies if priced at $9.99. So, for example, if customers would buy 100,000 copies of a particular e-book at $14.99, then customers would buy 174,000 copies of that same e-book at $9.99. Total revenue at $14.99 would be $1,499,000. Total revenue at $9.99 is $1,738,000.

"The important thing to note here is that at the lower price, total revenue increases 16%. This is good for all the parties involved:

* The customer is paying 33% less.

* The author is getting a royalty check 16% larger and being read by an audience that's 74% larger. And that 74% increase in copies sold makes it much more likely that the title will make it onto the national bestseller lists. (Any author who's trying to get on one of the national bestseller lists should insist to their publisher that their e-book be priced at $9.99 or lower.)

* Likewise, the higher total revenue generated at $9.99 is also good for the publisher and the retailer. At $9.99, even though the customer is paying less, the total pie is bigger and there is more to share amongst the parties.

"Keep in mind that books don't just compete against books. Books compete against mobile games, television, movies, Facebook, blogs, free news sites and more. If we want a healthy reading culture, we have to work hard to be sure books actually are competitive against these other media types, and a big part of that is working hard to make books less expensive.

"So, at $9.99, the total pie is bigger - how does Amazon propose to share that revenue pie? We believe 35% should go to the author, 35% to the publisher and 30% to Amazon. Is 30% reasonable? Yes. In fact, the 30% share of total revenue is what Hachette forced us to take in 2010 when they illegally colluded with their competitors to raise e-book prices. We had no problem with the 30% -- we did have a big problem with the price increases.

"Is it Amazon's position that all e-books should be $9.99 or less? No, we accept that there will be legitimate reasons for a small number of specialized titles to be above $9.99. 

"One more note on our proposal for how the total revenue should be shared. While we believe 35% should go to the author and 35% to Hachette, the way this would actually work is that we would send 70% of the total revenue to Hachette, and they would decide how much to share with the author. We believe Hachette is sharing too small a portion with the author today, but ultimately that is not our call."

View Map + Bookmark Entry

As E-Books Gain Market Share Traditional Roles of Publisher and Bookseller Change July 30, 2014

An article by Jason Abbruzzese and Katie Nelson entitled "How Amazon Brought Publishing to its Knees — and Why Authors Might be Next" published on Mashable.com on July 30, 2014 stated that, according to the Codex Group, by this date Amazon controlled more than two-thirds of the U.S. online book market. It controlled 67% of the sale of e-books, 41% of all new book purchases (print and digital) and 65% of online book purchases (print and digital). From it I quote:

"As other media industries like music and magazine/newspaper publishing suffered from declines, e-books took hold quickly as a revenue source, particularly after Amazon introduced the Kindle in 2007.

"Other media segments were not so lucky. The music industry suffered a revenue decline of more than 50% from a high of $14.6 billion in 1999 to $6.3 billion in 2009. Book publishing has not had to endure any such contraction.

" 'We have to give a tremendous amount of credit to Amazon and Jeff Bezos and his team for the investment that they were willing to make in those years,' Entrekin said. 'They did it in an orderly manner, in a way you could trust, and it's helped us.'

"There was a time when e-books were just a small part of the overall market, but now e-books are reaching parity with print. In 2013, some 457 million e-books were sold vs. 557 million hardcovers, according to the Association of American Publishers and the Book Industry Study Group. (Paperbacks were not included in that estimate.)

"The sales growth magnifies publishers' unease with the with the $9.99 price point that CEO Jeff Bezos had decided on — a number that had no basis in economics but rather in psychological pricing, according to Brad Stone's defining book on Amazon, The Everything Store. Amazon recently defended that price in a blog post, claiming it is better for consumers, publishers and authors.

"The $9.99 e-book introduction came after publishers had already seen the prices of books fall as chain stores like Barnes & Noble and Borders drove out independent sellers through lower pricing.

"Publishers accepted that, said David Vandagriff, an attorney who has spent decades representing both authors and publishers, but they never quite cottoned to the $9.99 e-book. That price point continues to cause problems and is believed to be the primary sticking point between Amazon and Hachette.

" 'The publishers, they had to resign themselves to Barnes & Noble, but they didn't go through that process quite as well or quite as thoroughly with Amazon," he said. 'They always thought Amazon was underpricing.' "

The article went on to present a chart predicting that e-books would surpass the sale of printed books in 2017.  It also pointed to the increasing trend of authors self-publishing their books so that they effectively received 70% of the proceeds with Amazon taking 30%. This left out the traditional roles of publishers and booksellers altogether. 

View Map + Bookmark Entry

The First Production-Scale Neuromorphic Computing Chip August 8, 2014

On August 8, 2014 scientists from IBM and Cornell University, including Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, and Dharmendra S. Modha, reported in the journal Science the first production-scale neuromorphic computing chip—a significant landmark in the development of cognitive computing. The chip, named TrueNorth, attempted to mimic the way brains recognize patterns, relying on densely interconnected webs of transistors similar to neural networks in the brain. It employed an efficient, scalable, and flexible non–von Neumann architecture. Von Neumann architecture, in which memory and processing were separated, and information flowed back and forth between the two components, remained the standard computer architecture from the design of the earliest electronic computers to 2014, so the new neuromorphic chip design represented a radical departure. 

"The chip contains 5.4 billion transistors, yet draws just 70 milliwatts of power. By contrast, modern Intel processors in today’s personal computers and data centers may have 1.4 billion transistors and consume far more power — 35 to 140 watts.

"Today’s conventional microprocessors and graphics processors are capable of performing billions of mathematical operations a second, yet the new chip system clock makes its calculations barely a thousand times a second. But because of the vast number of circuits working in parallel, it is still capable of performing 46 billion operations a second per watt of energy consumed, according to IBM researchers.

"The TrueNorth has one million 'neurons,' about as complex as the brain of a bee.

“ 'It is a remarkable achievement in terms of scalability and low power consumption,' said Horst Simon, deputy director of the Lawrence Berkeley National Laboratory.

"He compared the new design to the advent of parallel supercomputers in the 1980s, which he recalled was like moving from a two-lane road to a superhighway.

"The new approach to design, referred to variously as neuromorphic or cognitive computing, is still in its infancy, and the IBM chips are not yet commercially available. Yet the design has touched off a vigorous debate over the best approach to speeding up the neural networks increasingly used in computing.

"The idea that neural networks might be useful in processing information occurred to engineers in the 1940s, before the invention of modern computers. Only recently, as computing has grown enormously in memory capacity and processing speed, have they proved to be powerful computing tools" (John Markoff, "IBM Designs a New Chip that Functions Like A Brain," The New York Times, August 7, 2014).

Merolla et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science 345, no. 6197 (August 8, 2014): 668-673.

"Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts" (Abstract).

View Map + Bookmark Entry

Media Companies Spin Off Newspapers to Create Stand-Alone "Print Companies" August 10, 2014

On August 10, 2014, which was, incidentally, my mother Rachel's 96th birthday, media columnist David Carr of The New York Times published a column entitled "Print is Down, and Now Out. Media Companies Spin Off Newspapers to Uncertain Futures." The primary issue that Carr raised was that newspapers—virtually all of which were still published in both print and online editions—could not grow advertising revenue fast enough to satisfy the growth demands of Wall Street investors. For this reason media companies, which derived most of their income from television, spun off their newspaper divisions. From his column I quote selections:

"A year ago last week, it seemed as if print newspapers might be on the verge of a comeback, or at least on the brink of, well, survival.

"Jeff Bezos, an avatar of digital innovation as the founder of Amazon, came out of nowhere and plunked down $250 million for The Washington Post. His vote of confidence in the future of print and serious news was seen by some — including me — as a sign that an era of “optimism or potential” for the industry was getting underway.

"Turns out, not so much — quite the opposite, really. The Washington Post seems fine, but recently, in just over a week, three of the biggest players in American newspapers — GannettTribune Company and E. W. Scripps, companies built on print franchises that expanded into television — dumped those properties like yesterday’s news in a series of spinoffs. . . .

"The persistent financial demands of Wall Street have trumped the informational needs of Main Street. For decades, investors wanted newspaper companies to become bigger and diversify, so they bought more newspapers and developed television divisions. Now print is too much of a drag on earnings, so media companies are dividing back up and print is being kicked to the curb.

"Setting aside the brave rhetoric — as one should — about the opportunity for a “renewed focus on print,” those stand-alone print companies are sailing into very tall waves. Even strong national newspapers like The Wall Street Journal and The New York Times are struggling to meet Wall Street’s demands for growth; the regional newspapers that make up most of the now-independent publishing divisions have a much grimmer outlook.

"As it turns out, the journalism moment we are living in is more about running for your life than it is about optimism. Newspaperscontinue to generate cash and solid earnings, but those results are not enough to satisfy investors.

"Even the most robust evangelism is belied by the current data. Robert Thomson, chief executive of News Corporationespoused the “power of print” on Thursday even as he announced that advertising revenue at the company plunged 9 percent in the most recent quarter.

"And remember that it was Mr. Thomson’s boss, Rupert Murdoch, who started the wave of print divestitures when his company divorced its newspapers last year, although it did pay out $2 billion in alimony, which gave the publications, including The Journal, a bit of a cash cushion. (News Corporation’s tepid earnings report came two days after Mr. Murdoch, who has swashed more buckles and cut more deals than almost anyone, was forced by the market to let go of his latest prey, Time Warner.)

"The people at the magazine business Time Inc. were not so lucky, burdened with $1.3 billion in debt when Time Warner threw them from the boat. Swim for your life, executives at the company seemed to be saying, and by the by, here’s an anchor to help you on your way.

View Map + Bookmark Entry

What Would George Orwell Think About the Amazon versus Hachette Dispute? August 14, 2014

On August 14, 2014 The New Yorker published an insightful commentary by American journalist, novelist and playwright George Packer on the Amazon.com dispute with Hachette publishers over e-books. The article, entitled "Amazon vs. Hachette: What Would Orwell Think?", framed the dispute in the context of George Orwell's ideas about changes in the publishing industry during the 1930s. From it I quote the first few paragraphs:

“ 'Review of Penguin Books' might qualify as the single most obscure thing that George Orwell ever wrote. It was published in the March 5, 1936, issue of New English Weekly, when the writer was thirty-two years old. Like other struggling novelists, Orwell was doing a lot of reviewing to get his name in print, and, in this case, he’d undertaken the thankless task of reviewing a batch of ten Penguin paperbacks, sold at sixpence apiece, including such immortal titles as 'The Owls’ House,' by Crosbie Garstin, and 'Dr. Serocold,' by Helen Ashton. A few years ago, when I was compiling a two-volume edition of Orwell’s essays, 'Review of Penguin Books' would not have made the long list even if I’d remembered that it existed. All the stranger, then, that this eight-decade-old trifle surfaced last weekend in the business dispute between Amazon and Hachette and, from there, moved onto Twitter and into the Times.

 "For the past four months, Amazon has been making it hard for customers to buy Hachette books while the two companies fight over e-book terms. A group of nine hundred writers signed an open letter that appeared in Sunday’s [New York] Times, calling on Amazon to leave authors out of the contract dispute. The Amazon Books Team preëmptively replied by creating a Web site called readersunited.com. (In politics, this is known as “grasstops”—a fake-grassroots campaign created by special interests.) The Web site provided Kindle customers with talking points to help convince the C.E.O. of Hachette to 'stop working so hard to overcharge for ebooks.'

"Members of the Amazon Books Team—they aren’t stupid—unearthed Orwell’s 'Review of Penguin Books' and quoted its first sentence: 'The Penguin Books are splendid value for sixpence, so splendid that if the other publishers had any sense they would combine against them and suppress them.' In 1936, mass-market paperbacks were a new technological innovation, as e-books are now. Orwell thought that cheaper books would not lead to higher revenues, because buyers would spend less money on books and more 'on seats at the ‘movies.’ ' Readers might benefit—publishers, booksellers, and authors wouldn’t. Amazon chided Orwell for his shortsightedness: '[W]hen a thing has been done a certain way for a long time, resisting change can be a reflexive instinct, and the powerful interests of the status quo are hard to move. It was never in George Orwell’s interest to suppress paperback books—he was wrong about that.' No one could miss the analogy to the e-book revolution and its critics.

"The Times jumped in to mock Amazon for missing the note of irony in Orwell’s words. He wasn’t actually urging the formation of a publishing cartel, just pointing out the danger that cheap paperbacks posed for “the publisher, the compositor, the author and the bookseller.” In fact, the advent of paperbacks left Orwell ambivalent. “In my capacity as reader I applaud the Penguin Books; in my capacity as writer I pronounce them anathema,” he wrote. Orwell didn’t have much of a head for business, and his powers of prediction sometimes failed him. In another 1936 piece, “Bookshop Memories,” he wrote, “The combines can never squeeze the small independent bookseller out of existence as they have squeezed the grocer and the milkman.” Around the same time, he argued that the Nazi threat wasn’t as dangerous as a war of the “imperialist powers.” (He changed his mind about that.) So maybe Orwell was wrong about paperbacks, too. But then, so is Amazon."

Reading Packer's complete article is highly recommended.

Ironically, in 2009 Amazon had a previous run-in with Orwellian concepts when it sent Orwell's e-books down the "memory hole." 

View Map + Bookmark Entry

The Entirely Digital Main Library at Florida Polytechnic University Begins Operation August 16, 2014

On August 16, 2014 the entirely new Florida Polytechnic University opened for classes at its new campus in Lakeland, Florida, designed by Santiago Calatrava. Its new library housed no physical books, but instead opened with a virtual collection of around 135,000 ebooks. As far as I knew at the time, this was the first main library at a university to open with no physical books. Previously, in 2005 the main library at the University of California, Merced, opened with only a small collection of physical books.

According to an article published in The Guardian on August 29, 2014 by Alison Flood, the director of libraries at Florida Polytechnic, Kathryn Miller said, "We have access to print books through the state university system's interlibrary loan program. However, we strongly encourage our students to read and work with information digitally."

Alison Flood continued: 

"A budget of $60,000 (£36,000) has also been set aside for students to read ebooks that the library doesn't already own. Once a book has been viewed twice on this system, it will be automatically purchased. The set-up, said Miller, 'allows for many more books to be available for the students, and the university only has to pay when the student or faculty member uses the book', allowing students 'to make direct choices regarding the books they want to read and have available in the library'.

"The new university offers courses exclusively in science, technology, engineering and mathematics, and Miller said that one of its objectives was to 'prepare students for the high-tech workforce by giving them hands-on experience with advanced technology'. 

" 'The ability to read, absorb, manage and search digital documents and conduct digital research are skills of growing importance in industry,' she said, with the new digital-only library 'designed to help students become better technology users and learners' ".

View Map + Bookmark Entry

Putting the Footnotes in a Physical Book on the Author's Website August 18, 2014

On August 18, 2014 Stacey Patton of the Chronicle of Higher Education published an article entitled "Wait, Your Footnotes Are in Cyberspace?" It described the decision of social historian Rick Perlstein to put the extensive footnotes for his book, The Invisible Bridge: The Fall of Nixon and the Rise of Reagan, on his website, together with links to the original source documents, rather than have them published at the back of the physical edition of his book.

Patton wrote:

“ 'We’re all discussing the invisible bridge between the arguments made in Perlstein’s book and the citations living elsewhere online,' said Mick Gusinde-Duffy, editor in chief at the University of Georgia Press.

"The mammoth political biography checks in at 880 pages—and retails for $37.50—but it contains no bibliography and not a single endnote or footnote. That’s because Perlstein and his publisher, Simon & Schuster, kept the citations out of the book. Instead he posted a full list on his personal website, rickperlstein.net.

“ 'Putting linkable notes online was an innovation,' said Perlstein in a phone interview last week. 'I wanted my scholarship to be as transparent as possible. I wanted the experience to be fun for readers and useful for teachers of history.'

"Perlstein said his decision was based on the lack of conversation among readers about the sources he used in his first two hit books, Nixonland and Before the Storm. This time, he said, he wants to foster more interaction with those readers.

“ 'I was frustrated that not enough people were engaging with my notes,' he said. 'I began to feel like this chunk of 150 pages of notes was deadweight, and that they’ve stopped serving any kind of scholarly, civic, or literary function. They’re useless for most readers.'

"The idea was not without precedent. Perlstein said he was inspired by Tony Judt, whose book Postwar: A History of Europe Since 1945, promised to omit print citations in favor of digital ones. Postwar was a finalist for the Pulitzer Prize. (The Pulitzer board felt that Judt’s work was outstanding, according to David M. Kennedy, a historian at Stanford University who sat on the committee, but it ultimately decided that it could not award a prize to a book without citations.)

“ 'I knew I’d be taking a risk and starting a conversation,' Perlstein said. 'And it’s a debate that I welcome.' "

View Map + Bookmark Entry

IBM Launches "Watson Discovery Advisor" to Hasten Breakthroughs in Scientific and Medical Research August 27, 2014

On August 27, 2014 IBM launched Watson Discovery Advisor, a computer system that could quickly identify patterns in massive amounts of data, with the expectation that this system would hasten breakthroughs in science and medical research. The computer system, which IBM made available through the cloud, understood chemical compound interaction and human language, and could visually map out connections in data. The system used a number of computational techniques to deliver its results, including natural language processing, machine learning and hypothesis generation, in which a hypothesis is created and evaluated by several different analysis techniques. Baylor College of Medicine used the service to analyze 23 million abstracts of medical papers on the p53 tumor protein, in search of information on how to turn it on or off. From these results, Baylor researchers identified six potential proteins to target for new research. Using traditional methods it typically took researchers about a year to find a single potentially useful target protein, IBM said.

According to an article by Reuters published in The New York Times,

"Some researchers and scientists have already been using Watson Discovery Advisor to sift through the sludge of scientific papers published daily.

"Johnson & Johnson is teaching the system to read and understand trial outcomes published in journals to speed up studies of effectiveness of drugs.

"Sanofi, a French pharmaceutical company is working with Watson to identify alternate uses for existing drugs.

" 'On average, a scientist might read between one and five research papers on a good day,' said Dr. Olivier Lichtarge, investigator and professor of molecular and human genetics, biochemistry and molecular biology at Baylor College of Medicine.

"He used Watson to automatically analyze 70,000 articles on a particular protein, a process which could have taken him nearly 38 years.

“ 'Watson has demonstrated the potential to accelerate the rate and the quality of breakthrough discoveries,' he said."

View Map + Bookmark Entry

Indexing and Sharing 2.6 Million Images from eBooks in the Internet Archive August 29, 2014

On August 29, 2014 the Internet Archive announced that data mining and visualization expert Kalev Leetaru, Yahoo Fellow at Georgetown University, extracted over 14 million images from two million Internet Archive public domain eBooks spanning over 500 years of content. Of the 14 million images, 2.6 million were uploaded to Flickr, the image-sharing site owned by Yahoo, with a plan to upload more in the near future. 

Also on August 29, 2014 BBC.com carried a story entitled "Millions of historic images posted to Flickr," by Leo Kelion, Technology desk editor, from which I quote:

"Mr Leetaru said digitisation projects had so far focused on words and ignored pictures.

" 'For all these years all the libraries have been digitising their books, but they have been putting them up as PDFs or text searchable works,' he told the BBC.

"They have been focusing on the books as a collection of words. This inverts that. . . .

"To achieve his goal, Mr Leetaru wrote his own software to work around the way the books had originally been digitised.

"The Internet Archive had used an optical character recognition (OCR) program to analyse each of its 600 million scanned pages in order to convert the image of each word into searchable text.

"As part of the process, the software recognised which parts of a page were pictures in order to discard them.

"Mr Leetaru's code used this information to go back to the original scans, extract the regions the OCR program had ignored, and then save each one as a separate file in the Jpeg picture format.

"The software also copied the caption for each image and the text from the paragraphs immediately preceding and following it in the book.

"Each Jpeg and its associated text was then posted to a new Flickr page, allowing the public to hunt through the vast catalogue using the site's search tool. . . ."

View Map + Bookmark Entry

A New Threat to the Besieged Libraries and Cultural Heritage of Iraq September 8, 2014

In September 2014 I learned through the mailing list of the Middle East Librarians Association of a report dated September 8, 2014 by Iraqi archaeologist and Iraq Heritage Senior Fellow Abdulameer al-Hamdani, entitled Iraq's Heritage is Facing a New Wave of Destruction. This report I quote in full:

"Since early June, extremist armed groups, including ISIS, have controlled most of north-west of Iraq, from Mosul downward to Falouja on the Euphrates and Tikrit on the Tigris. According to ISIS law, archaeological sites, museums and artifacts, shrines and tombs, non-Islamic, and even non-Sunni worship places, modern statues and monuments, and libraries should not be existed and must be demolished.

"More than 4000 archaeological sites that are located in areas that have been controlled by ISIS are facing a serious threatening either by looting or destruction. The staff, as well as the archaeological sites' guards, of the antiquities' inspectorate of Ninawa province and other districts can't do their daily work in visiting and observing sites because both the security issues and the lack of fuel and vehicles to be used.

"The well-preserved fascinating Assyrian capitals of Nineveh, Nimrud, Khorsabad, and Ashur, as well as hundreds of ancient Mesopotamian sites, are targets that are going to be stolen and destroyed by ISIS. ISIS wants to diversify and expand its financial resources to include the lucrative trade of antiquities. For instance, on July 12th, a group of armed looters attacked the ancient city of Nimrud and stole a unique bas-relief from its palace that dates back to the Neo-Assyrian Empire in the eighth century BCE. Hatra, a Hellenistic city from the second century BCE, is isolated in the desert south west of Mosul, in an area that has been used by ISIS to train its fighters. Mosul museum, the second large museum in Iraq, has been occupied by ISIS and its staff cannot inter to check its valuable collections. ISIS plans to put statues from the museum on trial with plans to smash some statues and sell some. ISIS evacuated the houses around the Hadba-Leaning minaret that date to the twelfth century. It is not certain if the minaret is intended or the shrine of Ali al-Hadi who is buried in the Nidhamia School, which dates back to the same date of the minaret.

"Not only has ancient Mesopotamian heritage been destroyed; the Christian and Yazidi heritage, and that of other religious and ethnic minorities in Mosul and the Nineveh plain, has also been targeted. Churches and monasteries have been either burned or occupied, with ISIS stealing their contents and raising its flag over them. St. Behnam monastery, south-east of Mosul, was occupied by ISIS and converted into its headquarters in the region. The Virgin Mary Church in Mosul was blown up; the circulating image of the Mary statue being torn down from its top is an older archive image.

"Even Islamic heritage has not been spared from destruction. Apparently, after the fundamentalists destroyed all the Shia mosques in Mosul and the other towns, they turned to the Sunni shrines. The Sunni shrines were destroyed with explosives and bulldozers; these include the shrines of Sheikh Fathi, Ibn al-Atheer, and Sultan Abdullah Bin Asim, the grandson of Caliph Omer. Before that, ISIS had blown up and demolished Shia and Sufi shrines and places of worship in Mosul, Telafer, and Kirkuk. Among these was the iconic domed shrine of Yahya Bin al-Qasim in Mosul, which dates back to the 13th century.

"The shrines of the prophets Daniel (Nabi Danial), Shayth (Nabi Sheeth), and Jarjis (Nabi Jarjees) have also been destroyed. But nothing has affected and harmed Iraqis like the demolition of Prophet Jonah's shrine, well known as Nabi Yunis, which is respected by all Iraqis of different religions and ethnicities. The shrine's iconic minaret dated from 1924, when it replaced the Ottoman one that had collapsed. The greater fear, however, is for what lies underneath the shrine: the Assyrian palace, whose unusual winged bulls were uncovered in the 1990s, some of them still visible. The city of Mosul has about two hundred heritage buildings, many of the Ottoman period, some still used as government buildings. A number have already gone, destroyed by ISIS: the Sarai, which was the police headquarters, and the Ottoman hospital, which served as the intelligence headquarters and was razed to the ground.

"Modern monuments and statues in Mosul have been smashed or removed. Among them were the statue of Abu Tammam, an Abbasid poet who died in Mosul in 845 AD, and the statue of Mulla Uthman al-Mosuly, a singer, musician and poet born in Mosul in 1845. ISIS has also taken over public libraries in Ninawa and Diyala provinces. At Mosul University, ISIS met with some of the academics and informed them that the College of Arts will be closed, some departments at the College of Archaeology will be closed, and the entire curriculum will be changed.

"The international community should support Iraq in protecting its rich and diverse cultural heritage. According to international law and United Nations agreements, the international community must fulfill its legal, humanitarian and cultural responsibilities to protect the cultural heritage of countries at risk, as Iraq is today. Iraq's neighbouring countries should not allow stolen artefacts from Iraq to be smuggled abroad. Protecting Iraq's cultural heritage is a global task, for it is the memory of humankind."

View Map + Bookmark Entry

An Anthology to be Published on Paper in 2114 September 13, 2014

In September 2014 Scottish artist Katie Paterson announced on Futurelibrary.no that 

"A forest has been planted in Norway, which will supply paper for a special anthology of books to be printed in one hundred years time. Between now and then, one writer every year will contribute a text, with the writings held in trust, unpublished, until 2114.

"The texts will be held in a specially designed room in the New Public Deichmanske Library, Oslo. Tending the forest and ensuring its preservation for the 100-year duration of the artwork finds a conceptual counterpoint in the invitation extended to each writer: to conceive and produce a work in the hopes of finding a receptive reader in an unknown future." 


In response to Paterson's announcement, The New York Times published the following editorial:

"The hope that creative work survives its creator is usually empty. Shakespeare boasted that his sonnets would outlast monuments and the memory of princes, and they have. But it’s rare for an artist to keep audiences interested over generations. Most creative endeavors are ravaged by what Shakespeare called “sluttish time.” Even the list of Nobel laureates in literature is filled with now-unfamiliar names.

"Yet a Scottish artist, Katie Paterson, has found a clever way around this humbling problem. “A forest has been planted in Norway,” Ms. Paterson explains on the Future Library site, “which will supply paper for a special anthology of books to be printed in one hundred years’ time. Between now and then, one writer every year will contribute a text, with the writings held in trust, unpublished, until 2114.”

"Contributors would give up present-day acclaim — or feedback of any kind. In exchange, they would secure the attention of 22nd-century readers.

"Ms. Paterson has already chosen a time capsule for this unusual experiment: The Deichmanske public library in Oslo. And she already has her first contributor: the Canadian author Margaret Atwood, who is known for her speculative fiction.

"Although there is no guarantee that anyone will read Ms. Atwood’s contribution (or anyone else’s) in the next century, Ms. Paterson has increased the odds by laying the groundwork for a media event. Embargoes, like scarcity, can breed fascination.

"Mark Twain’s 100-year embargo on his autobiography (he said it shouldn’t be published until long after his death so he could speak his “whole frank mind”) did wonders for his 21st-century publisher. When the University of California Press released the first volume in 2010, it shot up the Amazon sales rankings and generated so much coverage that The Onion weighed in with a parody in which Twain shows his prescience about YouTube and the Afghan war.

"The project coordinators seem to have thought of everything, going so far as to equip the Deichmanske library with a printing press. If humanity loses the ability to print books, that’s covered. Of course, if humanity should lose the ability to read, that’s another story."

View Map + Bookmark Entry

"The Hidden Costs of E-Books at University Libraries" September 29, 2014

On September 29, 2014 Peter C. Herman, of the Department of English and Comparative Literature at San Diego State University, published an article entitled "The Hidden Costs of E-books at University Libraries" in TimesofSanDiego.com. It was widely understood at the time that economic forces were driving libraries in the California State University and University of California systems to replace physical books with e-books (also called ebooks or digital books) as budgets for university libraries were being cut. It was also understood that downloading an e-book was more convenient and more efficient than borrowing a physical book from a physical library. What was not widely understood was that substituting e-books for physical books involved other costs, both financial and qualitative, with respect to the reading and educational experience:

"For the past few years, both the California State University and the University of California libraries have been experimenting with packages that replace paper books with e-books. The advantages are obvious. With e-books, you no longer have to schlep to a library to take out a book. You just log on from whatever device connects you to the web, at whatever time and in whatever state of dress, and voila! the book appears on your screen.

"But the real attraction is price. Library budgets, along with university budgets, have been slashed, and such companies as Pearson and Elsevier offer e-book packages that make it possible to gain access (I’ll explain the awkward syntax in a moment) to lots of books at what seems like a minimal cost. The savings are multiplied when the package serves the entire system. So instead of each campus buying a paper book, all 23 CSU’s, for instance, share a single e-book. That’s the theory, at least. The reality is very different.

"In ancient days of yore, a library bought a book from either the publisher or a vendor, and then did with it whatever it wanted. Patrons could borrow the book, read it at leisure, renew it, or copy excerpts. Libraries shared books they didn’t own through interlibrary loan. But that’s not how e-books operate.

"Instead, a library pays to access a data file by one of two routes: “PDA,” or “Patron-Driven Acquisition,” in which a vendor makes available a variety of e-books, and a certain number of “uses” (the definition varies) triggers a purchase, or a subscription to an e-library that does not involve any mechanism for buying the e-book. Both avenues come loaded with all sorts of problems.

"First, reading an e-book is a different, and lesser, experience than reading a paper book, just as watching a movie at home differs from watching one in a theatre.

"There’s a huge difference between casual and college reading, and recent studies prove beyond doubt that while e-books are perfectly fine for the latest John Grisham or Fifty Shades of Grey, they actively discourage intense reading and deep learning.

"For example, a 2007 study concluded that “screen-based reading can dull comprehension because it is more mentally taxing and even physically tiring than reading on paper.” And a 2005 study by a professor at San Jose State University proved that online reading encourages skimming while discouraging in-depth or concentrated reading.

"The solution might be to print out the chapters you want to read. But e-book packages intentionally make that as difficult as possible.

"Paper books have no limitations since the library owns the book. But as Clifford Lynch recently put it, “nobody buys an e-book: one licenses it under typically very complex terms that constrain what you are allowed to do with it.” For example, at UCSD, Ebrary (now owned by Proquest), limits e-books to one user at a time, allows users to save a maximum of 30 percent of a book, “though some publishers have set more restrictive limits,” and allows you to copy only 15 percent of a book, text only, no illustrations.

"At SDSU, Ebrary also limits the number of pages you can download. The amount varies by publisher. One book allows up to 89 pages, but with another, Victoria Kahn’s The Crisis of Political Obligation in England, an especially complex work with very long chapters, you get only 19 pages, and the printout comes defaced with a code plastered across the page.  There’s also a limit to how many pages you can download per session, and the total is not large. I downloaded less than 20 pages before I exceeded my quota.

"E-books also do not circulate beyond the institution, which effectively kills interlibrary loan. As for one book serving the entire CSU or UC systems, many come with one-user restrictions, which means that only one user at a time in the CSU or the UC can read the book. Of course, Ebrary might say that the publisher imposes these restrictions. And that's the point: publishers do not impose restrictions on paper books. E-book packages also compromise the stability of the library's collection, since the vendor can remove a book at its discretion, without notice. So one day you can access a book; the next day, it has disappeared.

"E-books prevent deep reading, their use is highly restricted, and they can vanish without notice, so why are the CSU and the UC libraries experimenting with replacing paper with computer files? Is the e-book phenomenon yet another example of university administrators chasing after the latest e-fad? Like MOOCs (which even Sebastian Thrun of Udacity called “a lousy product”), e-books trade something that works for something that doesn’t, and even worse, threaten to destroy the very notion of a library. What’s the attraction?

"The answer is that e-books seem like a cheap way to access hundreds, if not thousands, of expensive books essential for research and teaching. Right now, the subscription packages Proquest and Ebsco offer may sound like they cost a lot (between $500,000 and $800,000 a year), but the price is “extremely low relative to the number of books acquired,” to quote the CSU report on the e-book pilot project. The average cost per book for Ebrary’s package is between $5 and $9, a spectacular savings given that the average price for a hardcover scholarly book in the humanities is around $100, and many are much more expensive.

"Then again, payday loans also seem like a cheap way to deal with, shall we say, a period of financial embarrassment. But the long-term costs of these loans can be ruinous, and the same goes for e-journal article packages. In the beginning they too were priced “extremely low relative to the number” of journals acquired. But they did not stay “extremely low” for long. Today, the exorbitant amounts such companies as Elsevier and Springer charge eat up a greater and greater percentage of library budgets, and their contracts usually last for three to five years with built-in increases of 6 percent per year, well above inflation.

"Lured by the initial low price and the promise of convenience, university libraries are now trapped, since they cannot risk losing access to all the major journals.  As prices rise and budgets either stay the same or drop, a greater and greater percentage goes toward servicing the package journal subscription, less and less toward staffing, hours, and the like.

"The same thing will happen with e-book packages. In the past, once the library purchased the book, that was the end of the transaction. The library didn’t have to keep sending the publisher money to keep the book in circulation. No matter what happened, no matter how great the budget cut, the book stayed in the library, because the library owned it.

"But that is not the case with an e-book subscription. Right now, prices seem entirely reasonable, but once a library or a library system gets hooked, then they must continually pay the rising subscription fee or else a huge number of books will just disappear. With a traditional book, the costs end once the purchase is complete. But with e-book packages, the costs never end. They just keep rising.

"Even worse, by replacing paper books with e-book packages, university libraries will have outsourced the collection of knowledge to multinational, private corporations whose primary goal is not advancing knowledge, but profits. E-book packages are another step in transforming libraries from centers of scholarship, teaching and research into cash cows for Proquest’s bottom line.

"Why would libraries even consider such a Faustian deal? Simple: they are trying to make the best of a very bad situation. University budgets have in no way recovered from the financial crash, which reduced funding by two billion dollars. True, some money has been restored, but the CSU’s budget now matches what we had in 2007, and we have to teach 90,000 more students. If e-book packages sound like a poor idea, then the answer is to restore higher education funding to a level where we don’t have to make such terrible decisions." 

View Map + Bookmark Entry

"How Edward Snowden Changed Journalism" October 21, 2014

On October 21, 2014 Steve Coll, dean of the Tow Center for Digital Journalism at Columbia University's Graduate School of Journalism, published an article entitled "How Edward Snowden Changed Journalism" in The New Yorker, from which I quote:

". . . . one of the least remarked upon aspects of the [Edward] Snowden matter is that he has influenced journalistic practice for the better by his example as a source. Famously, when Snowden first contacted [Glenn] Greenwald, he insisted that the columnist communicate only through encrypted channels. Greenwald couldn’t be bothered. Only later, when [Laura] Poitras told Greenwald that he should take the trouble, did Snowden take him on as an interlocutor.

"It had been evident for some time before Snowden surfaced that best practices in investigative reporting and source protection needed to change—in large part, because of the migration of journalism (and so many other aspects of life) into digital channels. The third reporter Snowden supplied with National Security Agency files, Barton Gellman, of the Washington Post, was well known in his newsroom as an early adopter of encryption. But it has been a difficult evolution, for a number of reasons.

"Reporters communicate copiously; encryption makes that habit more cumbersome. Most reporters don’t have the technical skills to make decisions on their own about what practices are effective and efficient. Training is improving (the Tow Center for Digital Journalism, at Columbia Journalism School, where I serve as dean, offers a useful place to start), but the same digital revolution that gave rise to surveillance and sources like Snowden also disrupted incumbent newspapers and undermined their business models. Training budgets shrank. In such an unstable economic and audience environment, source protection and the integrity of independent reporting fell on some newsrooms’ priority lists.

"Snowden has now provided a highly visible example of how, in a very high-stakes situation, encryption can, at a minimum, create time and space for independent journalistic decision-making about what to publish and why. Snowden did not ask to have his identity protected for more than a few days—he seemed to think it wouldn’t work for longer than that, and he also seemed to want to reveal himself to the public. Yet the steps he took to protect his data and his communications with journalists made it possible for the Guardian and the Post to publish their initial stories and bring Snowden to global attention.

"It took an inside expert with his life and liberty at stake to prove how much encryption and related security measures matter. 'There was no risk of compromise,' Snowden told the Guardian, referring to how he managed his source relationship with Poitras and the others before their meeting in Hong Kong. 'I could have been screwed,' but his encryption and other data-security practices insured that it 'wasn’t possible at all' to intercept his transmissions to journalists 'unless the journalist intentionally passed this to the government.' "
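Coll's point that encryption can "create time and space" for journalism rests on a basic property of ciphers: without the key, intercepted traffic tells an eavesdropper nothing. A toy one-time-pad sketch illustrates the principle; it is an illustration only, not the PGP-style tools Snowden and the journalists actually used.

```python
import secrets

# Toy one-time pad: XOR each message byte with a random key byte.
# Without the shared key, the ciphertext is indistinguishable from noise.
def encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet in hong kong"
key = secrets.token_bytes(len(message))   # shared secretly in advance
ciphertext = encrypt(message, key)

assert ciphertext != message                 # interceptor sees only noise
assert decrypt(ciphertext, key) == message   # the journalist recovers it
```

Real tools such as PGP solve the additional problem the one-time pad ignores, namely how two strangers agree on a key in the first place, which is why Snowden could contact reporters he had never met.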

View Map + Bookmark Entry

ISIS Burns the Manuscripts and Paintings at Mar Behnam Monastery October 21, 2014

On October 21, 2014 I learned from aleteia.org that Father Charbel Issa, head of the ancient Mar Behnam Monastery, also known as the Monastery of the Martyrs Saint Behnam and his Sister Sarah near Bakhdida in northern Iraq, reported that members of ISIS ripped down the monastery's crosses and burned its paintings and manuscripts. According to BBC.com, on July 21, 2014 ISIS seized control of the monastery and threatened the monks with execution. They were expelled with nothing but the clothes on their backs.

"Issa added in a statement to the website Ankawa.com that he had contacted the former mayor of al-Basatliyah, where the monastery is located, in order to ascertain conditions at the monastery. The mayor confirmed that members of ISIS had taken the crosses off the monastery’s roof and burned important manuscripts. Furthermore, ISIS wrote the words “Property of ISIS” on a large portion of its exterior wall."  

View Map + Bookmark Entry

"Political Polarization and Media Habits. . . How Liberals and Conservatives Keep Up With Politics" October 21, 2014

On October 21, 2014 the Pew Research Center, a non-partisan think tank based in Washington, D.C., issued a report entitled Political Polarization & Media Habits. From Fox News to Facebook, How Liberals and Conservatives Keep Up with Politics.

From the Overview of this report I quote:

When it comes to getting news about politics and government, liberals and conservatives inhabit different worlds. There is little overlap in the news sources they turn to and trust. And whether discussing politics online or with friends, they are more likely than others to interact with like-minded individuals, according to a new Pew Research Center study.

The project – part of a year-long effort to shed light on political polarization in America – looks at the ways people get information about government and politics in three different settings: the news media, social media and the way people talk about politics with friends and family. In all three areas, the study finds that those with the most consistent ideological views on the left and right have information streams that are distinct from those of individuals with more mixed political views – and very distinct from each other.

These cleavages can be overstated. The study also suggests that in America today, it is virtually impossible to live in an ideological bubble. Most Americans rely on an array of outlets – with varying audience profiles – for political news. And many consistent conservatives and liberals hear dissenting political views in their everyday lives.

Yet as our major report on political polarization found, those at both the left and right ends of the spectrum, who together comprise about 20% of the public overall, have a greater impact on the political process than do those with more mixed ideological views. They are the most likely to vote, donate to campaigns and participate directly in politics. The five ideological groups in this analysis (consistent liberals, mostly liberals, mixed, mostly conservatives and consistent conservatives) are based on responses to 10 questions about a range of political values. That those who express consistently conservative or consistently liberal opinions have different ways of informing themselves about politics and government is not surprising. But the depth of these divisions – and the differences between those who have strong ideological views and those who do not – are striking.

Overall, the study finds that consistent conservatives:

 Are tightly clustered around a single news source, far more than any other group in the survey, with 47% citing Fox News as their main source for news about government and politics.

 Express greater distrust than trust of 24 of the 36 news sources measured in the survey. At the same time, fully 88% of consistent conservatives trust Fox News.

 Are, when on Facebook, more likely than those in other ideological groups to hear political opinions that are in line with their own views.

 Are more likely to have friends who share their own political views. Two-thirds (66%) say most of their close friends share their views on government and politics.

By contrast, those with consistently liberal views:

 Are less unified in their media loyalty; they rely on a greater range of news outlets, including some – like NPR and the New York Times – that others use far less.

 Express more trust than distrust of 28 of the 36 news outlets in the survey. NPR, PBS and the BBC are the most trusted news sources for consistent liberals.

 Are more likely than those in other ideological groups to block or “defriend” someone on a social network – as well as to end a personal friendship – because of politics.

 Are more likely to follow issue-based groups, rather than political parties or candidates, in their Facebook feeds.

Those with down-the-line conservative and liberal views do share some common ground; they are much more likely than others to closely follow government and political news. This carries over to their discussions of politics and government. Nearly four-in-ten consistent conservatives (39%) and 30% of consistent liberals tend to drive political discussions – that is, they talk about politics often, say others tend to turn to them for information rather than the reverse, and describe themselves as leaders rather than listeners in these kinds of conversations. Among those with mixed ideological views, just 12% play a similar role.

It is important to note, though, that those at either end of the ideological spectrum are not isolated from dissenting views about politics. Nearly half (47%) of across-the-board conservatives – and 59% of across-the-board liberals – say they at least sometimes disagree with one of their closest political discussion partners.

View Map + Bookmark Entry

ISIS Closes Mosul University Departments and Bans Text Books October 25, 2014

On October 25, 2014 The Times of London reported that ISIS closed several academic departments at the University of Mosul, severely restricting the subjects that could be taught. Earlier it was reported that ISIS had closed the School of Library and Information Studies at the university. Mosul University was the second largest university in Iraq and one of the largest educational and research centers in the Middle East.

Here is the text of the article by Tom Coghlan:

"Isis bans text books in sharia campus clampdown

"Published at 12:01AM, October 25 2014

"People living under Islamic State rule have been banned from owning academic books as the jihadists launch a crackdown on learning that diverges from their world view.

"To herald the start of the academic year this week, Islamic State (Isis) closed university departments of law, political science, fine art, archaeology, sports education, philosophy, tourism and hotel management.

"Activists in Mosul in Iraq and Raqqa in Syria, both controlled by Isis, also halted further teaching of 'democracy, cultural education, human rights and law (general courses)' for what it called 'the public good'.

"Teachers were told that they must have training in Sharia, as interpreted by Isis, and that exams should avoid certain subjects.

"There is a ban on 'forged historical events' — a term for the teaching of evolution and Darwinian principles — and on exam questions on patriotism, education and ethnicity 'which do not conform to Sharia law'. Isis’s rejection of the idea of nation states is reflected in a ban on “un-Islamic geographic decisions”. Teaching staff were also warned of punishments if 'legitimate standards of isolating males from females' were not followed.

"The radically altered curriculum, reminiscent of Pol Pot’s Year Zero edict in Cambodia in the 1970s, follows the disbandment by Isis of the ministry of higher education and imposition of its own 'chamber of education'.

"An activist inside Raqqa, contacted via the internet, said that many educated families were trying to avoid the bans by using paid private tuition in their homes.

"The source said that this was also under scrutiny from Isis, with a death sentence threatened if men were found to be teaching women. Searches were carried out for illegal books.

" 'I have books of philosophy and history,' said the source, who uses the nickname Abu Wart. He said that another family member had philosophy books, including works by Socrates. 'They are hidden,' he said. Books were removed and families warned if they were caught.

"Isis has sought to attract jihadists with technical qualifications to its flag, acknowledging that it needs skilled professionals and bureaucrats to run its self-styled caliphate. Andre Poulin, a Canadian jihadist killed in Syria last year, issued a video exhortation to potential recruits, promising a 'high station in the next life' for those with professional rather than fighting skills who joined Isis.

" 'We need engineers, we need doctors, we need professionals, every person can contribute something,' he said.

"Students from the University of Mosul were last week allowed to travel outside Isis-controlled areas to take final year exams in Iraqi Kurdistan in subjects deemed legitimate."

View Map + Bookmark Entry

Facebook's "News Feed" Drives Up to 20% of Traffic to News Sites October 26, 2014

On October 26, 2014 Ravi Somaiya published an article in The New York Times entitled "How Facebook is Changing the Way its Users Consume Journalism."  A caption to an image in the article stated that "30 percent of adults in America get news from the social network." From the article I quote the first third:

"MENLO PARK, Calif. — Many of the people who read this article will do so because Greg Marra, 26, a Facebook engineer, calculated that it was the kind of thing they might enjoy.

"Mr. Marra’s team designs the code that drives Facebook’s News Feed — the stream of updates, photographs, videos and stories that users see. He is also fast becoming one of the most influential people in the news business.

"Facebook now has a fifth of the world — about 1.3 billion people — logging on at least monthly. It drives up to 20 percent of traffic to news sites, according to figures from the analytics company SimpleReach. On mobile devices, the fastest-growing source of readers, the percentage is even higher, SimpleReach says, and continues to increase.

View Map + Bookmark Entry

Three Breakthroughs that Finally Unleashed AI on the World October 27, 2014

In "The Three Breakthroughs That Have Finally Unleashed AI on the World", Wired Magazine, October 27, 2014, writer Kevin Kelly of Pacifica, California, explained how breakthroughs in cheap parallel computation, big data, and better algorithms were enabling new AI-based services that were previously the domain of sci-fi and academic white papers. Within the near future AI would play greater and greater roles in aspects of everyday life, in products like Watson developed by IBM, and products from Google, Facebook and other companies. More significant than these observations were Kelly's views about the impact that these developments would have on our lives and how we may understand the difference between machine and human intelligence:

"If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers. Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists.

"In fact, this won't really be intelligence, at least not as we've come to think of it. Indeed, intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.

"What we want instead of intelligence is artificial smartness. Unlike general intelligence, smartness is focused, measurable, specific. It also can think in ways completely different from human cognition. A cute example of this nonhuman thinking is a cool stunt that was performed at the South by Southwest festival in Austin, Texas, in March of this year. IBM researchers overlaid Watson with a culinary database comprising online recipes, USDA nutritional facts, and flavor research on what makes compounds taste pleasant. From this pile of data, Watson dreamed up novel dishes based on flavor profiles and patterns from existing dishes, and willing human chefs cooked them. One crowd favorite generated from Watson's mind was a tasty version of fish and chips using ceviche and fried plantains. For lunch at the IBM labs in Yorktown Heights I slurped down that one and another tasty Watson invention: Swiss/Thai asparagus quiche. Not bad! It's unlikely that either one would ever have occurred to humans.

"Nonhuman intelligence is not a bug, it's a feature. The chief virtue of AIs will be their alien intelligence. An AI will think about food differently than any chef, allowing us to think about food differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science and art. The alienness of artificial intelligence will become more valuable to us than its speed or power. . . .

View Map + Bookmark Entry

Google Develops A Neural Image Caption Generator to Translate Images into Words November 17, 2014

Google had previously transformed the machine translation process with algorithms based on vector space mathematics. In November 2014 Oriol Vinyals and colleagues at Google in Mountain View applied a similar approach, developing a neural image caption generator to translate images into words. Google's machine translation approach is:

"essentially to count how often words appear next to, or close to, other words and then define them in an abstract vector space in relation to each other. This allows every word to be represented by a vector in this space and sentences to be represented by combinations of vectors.

"Google goes on to make an important assumption. This is that specific words have the same relationship to each other regardless of the language. For example, the vector “king - man + woman = queen” should hold true in all languages. . . .

"Now Oriol Vinyals and pals at Google are using a similar approach to translate images into words. Their technique is to use a neural network to study a dataset of 100,000 images and their captions and so learn how to classify the content of images.

"But instead of producing a set of words that describe the image, their algorithm produces a vector that represents the relationship between the words. This vector can then be plugged into Google’s existing translation algorithm to produce a caption in English, or indeed in any other language. In effect, Google’s machine learning approach has learnt to “translate” images into words.

"To test the efficacy of this approach, they used human evaluators recruited from Amazon’s Mechanical Turk to rate captions generated automatically in this way along with those generated by other automated approaches and by humans.

"The results show that the new system, which Google calls Neural Image Caption, fares well. Using a well known dataset of images called PASCAL, Neural Image Caption clearly outperformed other automated approaches. “NIC yielded a BLEU score of 59, to be compared to the current state-of-the-art of 25, while human performance reaches 69,” says Vinyals and co" (http://www.technologyreview.com/view/532886/how-google-translates-pictures-into-words-using-vector-space-mathematics/, accessed 01-14-2015).

Vinyals et al., "Show and Tell: A Neural Image Caption Generator" (2014) http://arxiv.org/pdf/1411.4555v1.pdf

"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU score improvements on Flickr30k, from 55 to 66, and on SBU, from 19 to 27" (Abstract).
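The word-vector arithmetic quoted above (“king - man + woman = queen”) can be illustrated with a toy sketch. Everything here is invented for illustration: the three-dimensional vectors are hand-made so the analogy holds exactly, whereas real systems learn vectors with hundreds of dimensions from word co-occurrence statistics.

```python
import math

# Hand-made 3-d "word vectors" chosen so the analogy holds exactly;
# real systems learn hundreds of dimensions from co-occurrence counts.
vec = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.3, 0.0],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# king - man + woman lands next to queen in the vector space
target = [k - m + w for k, m, w in zip(vec["king"], vec["man"], vec["woman"])]
best = max((word for word in vec if word not in {"king", "man", "woman"}),
           key=lambda word: cosine(target, vec[word]))
print(best)  # queen
```

The caption generator's trick, as described above, is to make an image produce a vector in this same kind of space, so that the existing translation machinery can turn it into words.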

View Map + Bookmark Entry

eBooks May Track Reading Habits December 10, 2014

Historically solitude was an essential component of reading, with many children becoming readers in part to enjoy the privacy it offers. By 2014, when we had become accustomed to the possibility that emails and Facebook posts could be surveilled by government security agencies, reading physical books continued to maintain that privacy. However, it became evident that reading eBooks was a less private experience, at least from the marketing point of view. On December 10, 2014 Alison Flood published an article in The Guardian entitled "Ebooks can tell which novels you didn't finish," in which she reported that ebook retailers could tell which books were finished or not finished, how fast they were read, and precisely where readers stopped reading a particular ebook and moved on to something else. According to her article, only 44.4 percent of British readers who used a Kobo eReader made it all the way through Donna Tartt’s international bestselling Pulitzer Prize-winning novel The Goldfinch, while a mere 28.2 percent reached the end of Solomon Northup’s Twelve Years a Slave (1853), from which the Oscar-winning film was adapted. Yet both these books appeared—and remained for some time—on the British bestseller lists.

 “After collecting data between January and November 2014 from more than 21m[illion] users, in countries including Canada, the US, the UK, France, Italy and the Netherlands, Kobo found that its most completed book of 2014 in the UK was not a Man Booker or Baileys prize winner. Instead, readers were most keen to finish Casey Kelleher’s self-published thriller Rotten to the Core, which doesn’t even feature on the overall bestseller list…Kobo also revealed that the people of Britain were most likely to finish a romance novel, with 62% completion, followed by crime and thrillers (61%) and fantasy (60%). Italians were also most engaged by romance (74% completion), while the French preferred mysteries, with 70% completion.”

“A book’s position on the bestseller list may indicate it’s bought, but that isn’t the same as it being read or finished,” said Michael Tamblyn, president and chief content officer at Kobo. “A lot of readers have multiple novels on the go at any given time, which means they may not always read one book from start to finish before jumping into the next great story. People may wait days, months, or even until the following year to finish certain titles. And many exercise that inalienable reader’s right to set down a book if it doesn’t hold their interest.”

View Map + Bookmark Entry

Skype Previews Skype Translator Real Time Translation Program December 15, 2014

On December 15, 2014 Skype, a division of Microsoft, announced the first phase of the Skype Translator preview program. In the first phase the program was available only for English and Spanish. The technology, the result of 15 years of work at Microsoft Research in deep learning, transfer learning, speech recognition, machine translation, and speech synthesis, had been demonstrated by Microsoft at the Code Conference, with videos of the demonstration posted on its blog on May 27, 2014.

View Map + Bookmark Entry

Amazon's Research in the Reading Process & its Introduction of the Ability with its E-Readers to Switch Back and Forth Between the Written and Audible Versions of a Book December 17, 2014

On December 17, 2014 The Verge published "The Everything Book: Reading in the Age of Amazon" by Casey Newton. This excellent article discussed Amazon.com's product development in e-readers and audiobooks.

Among the take-away ideas from the article, one particularly caught my attention: it is likely that, at its Lab126 in Sunnyvale, CA, Amazon has spent more time studying the physical act of reading than any company before it:

" 'When you're reading, you want to fall down the rabbit hole' . . . Amazon has actually built a rabbit hole, of sorts: a reading room somewhere at Lab126, stuffed with comfortable chairs, where pinhole cameras study the way people really read. (Because test subjects are in there using prototype devices, I am not allowed inside.)

"It’s in this room that Amazon learned people switch hands on a book roughly every two minutes, even though in surveys they claimed not to. (This is why the Voyage has identical page-turn buttons on both left and right.) The Voyage’s page-forward button is much bigger than page-back, because Amazon’s data showed 80 percent of all page flips are forward. As Green describes research like this, it seems likely that Amazon has spent more time studying the physical act of reading than any company before it."

Amazon's Audible division, headquartered in Newark, NJ, had a catalogue of more than 180,000 audiobooks in December 2014, with countless more in rapid development. In 2012 Amazon introduced a feature that let you switch back and forth easily between the written and audio versions of a book. In December 2014 there were 55,000 books which could be read on an Amazon Kindle in this way.

View Map + Bookmark Entry

The First "Professional" Film Festival Film Shot on an iPhone January 2015

"So how do you make a Sundance movie for iPhone? You need four things. First, of course, the iPhone (Baker and his team used three). Second, an $8 app called Filmic Pro that allowed the filmmakers fine-grained control over the focus, aperture, and color temperature. Third, a Steadicam. 'These phones, because they’re so light, and they’re so small, a human hand — no matter how stable you are — it will shake. And it won’t look good,' says Baker. 'So you needed the Steadicam rig to stabilize it.'

"The final ingredient was a set of anamorphic adapter lenses that attach to the iPhone. The lenses were prototypes from Moondog Labs, and Baker said they were essential to making Tangerine look like it belonged on a big screen. 'To tell you the truth, I wouldn’t have even made the movie without it,' Baker says. 'It truly elevated it to a cinematic level.'

"Like any conventional film, Tangerine underwent post-production. 'With a lot of these social realist films, the first thing you do is drain the color,' Baker says." 

View Map + Bookmark Entry

A Computer Masters Heads-up Limit Texas Hold 'em Poker January 8, 2015

A breakthrough in artificial intelligence published in January 2015 allowed a computer to master the simplest two-person version of the poker game known as Texas Hold 'em, working through every possible variation of play to make the perfect move every time. Played without mistakes, the game, like the childhood game tic-tac-toe, cannot be lost. In this case the player is Cepheus, an algorithm designed by Canadian researchers.

“We have a strategy that can guarantee a player won’t lose,” said Michael Bowling, a computer scientist from the University of Alberta, who led a team working on the program. “It’s going to be a break-even game. It’s only when someone makes a mistake that they could end up losing.”

Michael Bowling, Neil Burch, Michael Johanson, Oskari Tammelin, "Heads-up limit hold'em poker is solved," Science 347, no. 6218 (2015) 145-149 

"Poker is a family of games that exhibit imperfect information, where players do not have full knowledge of past events. Whereas many perfect-information games have been solved (e.g., Connect Four and checkers), no nontrivial imperfect-information game played competitively by humans has previously been solved. Here, we announce that heads-up limit Texas hold’em is now essentially weakly solved. Furthermore, this computation formally proves the common wisdom that the dealer in the game holds a substantial advantage. This result was enabled by a new algorithm, CFR+, which is capable of solving extensive-form games orders of magnitude larger than previously possible" (Abstract).

See also: http://news.sciencemag.org/math/2015/01/texas-hold-em-poker-solved-computer, accessed 01-14-2015.
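Cepheus belongs to the counterfactual regret minimization family of algorithms, whose self-play core is regret matching: play each action with probability proportional to its accumulated positive regret. The sketch below applies plain regret matching to rock-paper-scissors rather than poker, so it is only an illustration of the principle, not the authors' CFR+ implementation; in self-play the average strategy approaches the game's equilibrium (here, one third for each action).

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy(regret):
    """Regret matching: probabilities proportional to positive regret."""
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

random.seed(0)
regret = [[0.0] * ACTIONS for _ in range(2)]
avg = [[0.0] * ACTIONS for _ in range(2)]
ITERS = 20000
for _ in range(ITERS):
    strats = [strategy(regret[p]) for p in range(2)]
    acts = [random.choices(range(ACTIONS), weights=strats[p])[0] for p in range(2)]
    for p in range(2):
        me, opp = acts[p], acts[1 - p]
        for a in range(ACTIONS):
            # regret: what action a would have earned minus what we actually earned
            regret[p][a] += payoff(a, opp) - payoff(me, opp)
        for a in range(ACTIONS):
            avg[p][a] += strats[p][a]

print([round(v / ITERS, 2) for v in avg[0]])  # close to [0.33, 0.33, 0.33]
```

The real computation applied this regret-minimization idea across the enormous game tree of heads-up limit hold 'em rather than a 3x3 payoff matrix.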

View Map + Bookmark Entry

The Role of Technology in the Increased Violence Against Journalists January 9, 2015

On January 9, 2015 The Los Angeles Times published an op-ed piece by Joel Simon, executive director of the Committee to Protect Journalists, entitled "Technology's role in the increased violence against journalists." This I quote in full. As is often the case, the links are my additions:

"The murderous attack on the office of the satirical weekly Charlie Hebdo in Paris last week can be seen in the context of modern French society: its challenges assimilating immigrants, its ongoing efforts to preserve its liberal and secular political culture, and even its national affinity for a kind of scathing and irreverent cartooning rooted in a deep distrust of institutions.

"But the attack has a global dimension as it also can be seen as the latest skirmish in a war over freedom of expression. This war has led to a record number of journalists being killed and imprisoned around the world. The last three years have been the most deadly and dangerous ever documented by the Committee to Protect Journalists, which has been keeping detailed data since 1992.

"The advent of the Internet has completely transformed the way news is gathered and disseminated to the global audience. This new system has tremendous positive advantages, allowing news to flow more easily across borders and making it more difficult for repressive governments to censor and control it. But there are also profound implications for the safety of journalists on the front lines of these information battles.

"One way to think about the change is to consider that not that long ago journalists venturing into conflict zones often chose to identify themselves, painting the word “press” on their cars or flak jackets. Journalists were safer because they collectively exercised an information monopoly and this made them useful to the warring parties who needed the media to communicate with the world.

"But today many violent groups, such as the Islamic State militants in Syria and the drug cartels in Mexico, rely on the Internet and social media to achieve the same ends and, of course, they are better able to control the message. Journalists are seen as dispensable — more useful as hostages or props in elaborately staged execution videos. In this context, identifying yourself as a journalist makes you a target.

"Moreover, because of new enabling technology and cutbacks in the news industry, a growing portion of frontline news gathering today is accomplished by local journalists and freelancers, who inform their own countries and the world. These journalists are more vulnerable because they often work without institutional support.

"In fact, the vast majority of journalists killed around the world do not die covering combat. They are deliberately targeted in their own countries because of the stories they cover or the ideas they express. In this context, the attack on Charlie Hebdo is typical of the risk that journalists face everywhere. What made it shocking was that it took place not in Mexico or Pakistan, but in France.

"Technology has also changed the global media environment by opening every corner of the world to myriad ideas and information. This too has its consequences.

"In 1948, when the Universal Declaration of Human Rights was adopted by the United Nations, it guaranteed the right to seek and receive information “regardless of frontiers.” That phrase — regardless of frontiers — is unique in international human rights law because it makes freedom of expression explicitly transnational. When the language was ratified, the concept was purely notional. Today, the Internet has made it real.

"In other words, the Internet has brought liberal, Western ideas of freedom of expression into direct conflict with 19th century notions of sovereignty and more traditional societies that place enormous value on personal honor and the sanctity of religious symbols.

"For example, the Chinese leadership, while embracing connectivity for its citizens, views the Internet as a Trojan horse that can be used to channel dangerous ideas from outside the country, ideas that erode the power of the Communist Party. Turkey's President Recep Tayyip Erdogan recently told a delegation from the Committee to Protect Journalists that he “is increasingly against the Internet every day.” Russia also has been cracking down on online speech. All these governments distrust the Internet and are seeking to exercise greater control over electronic communication within their borders.

"Leaders of some Muslim countries have a different but related argument. They are deeply concerned by images they deem to be blasphemous or shocking to religious sensibilities, and which are being imposed on them by a global information system that serves the interests of Western governments and international technology companies.

"The attack on Charlie Hebdo responds to this dynamic. While the magazine has sought to shock and offend in a French context, its cartoons traveled around the world, angering religious Muslims in many more conservative societies and providing a rallying cry for Al Qaeda, which put the paper's editors at the top of its hit list.

"One can acknowledge the anger and upset of those who see their fundamental religious beliefs mocked while also affirming that we must redouble our efforts to defend freedom of expression around the world. Freedom of expression is not only a fundamental human right; in the Internet era, information is a shared global resource that must be available equally to all.

"A global battle for freedom of expression is upon us, and the casualties are mounting. The attack on the journalists of Charlie Hebdo shows us there is no safe haven."

View Map + Bookmark Entry

"Iraqi Libraries Ransacked by Islamic State in Mosul" January 31, 2015

This I quote from The Associated Press online publication dated January 31, 2015. The link at the end of the quotation is my addition:

Iraqi libraries ransacked by Islamic State group in Mosul


"BAGHDAD (AP) -- When Islamic State group militants invaded the Central Library of Mosul earlier this month, they were on a mission to destroy a familiar enemy: other people's ideas.

"Residents say the extremists smashed the locks that had protected the biggest repository of learning in the northern Iraq town, and loaded around 2,000 books - including children's stories, poetry, philosophy and tomes on sports, health, culture and science - into six pickup trucks. They left only Islamic texts.

"The rest?

"These books promote infidelity and call for disobeying Allah. So they will be burned," a bearded militant in traditional Afghani two-piece clothing told residents, according to one man living nearby who spoke to The Associated Press. The man, who spoke on condition of anonymity because he feared retaliation, said the Islamic State group official made his impromptu address as others stuffed books into empty flour bags.

"Since the Islamic State group seized a third of Iraq and neighboring Syria, they have sought to purge society of everything that doesn't conform to their violent interpretation of Islam. They already have destroyed many archaeological relics, deeming them pagan, and even Islamic sites considered idolatrous. Increasingly books are in the firing line.

"Mosul, the biggest city in the Islamic State group's self-declared caliphate, boasts a relatively educated, diverse population that seeks to preserve its heritage sites and libraries. In the chaos that followed the U.S.-led invasion of 2003 that toppled Saddam Hussein, residents near the Central Library hid some of its centuries-old manuscripts in their own homes to prevent their theft or destruction by looters.

"But this time, the Islamic State group has made the penalty for such actions death. Presumed destroyed are the Central Library's collection of Iraqi newspapers dating to the early 20th century, maps and books from the Ottoman Empire and book collections contributed by around 100 of Mosul's establishment families.

"Days after the Central Library's ransacking, militants broke into University of Mosul's library. They made a bonfire out of hundreds of books on science and culture, destroying them in front of students.

"A University of Mosul history professor, who spoke on condition he not be named because of his fear of the Islamic State group, said the extremists started wrecking the collections of other public libraries last month. He reported particularly heavy damage to the archives of a Sunni Muslim library, the library of the 265-year-old Latin Church and Monastery of the Dominican Fathers and the Mosul Museum Library with works dating back to 5000 BC.

"Citing reports by the locals who live near these libraries, the professor added that the militants used to come during the night and carry the materials in refrigerated trucks with Syria-registered license plates. The fate of these old materials is still unknown, though the professor suggested some could be sold on the black market. In September, Iraqi and Syrian officials told the AP that the militants profited from the sale of ancient artifacts.

"The professor said Islamic State group militants appeared determined to 'change the face of this city ... by erasing its iconic buildings and history.'

"Since routing government forces and seizing Mosul last summer, the Islamic State group has destroyed dozens of historic sites, including the centuries-old Islamic mosque shrines of the prophets Seth, Jirjis and Jonah.

"An Iraqi lawmaker, Hakim al-Zamili, said the Islamic State group 'considers culture, civilization and science as their fierce enemies.'

"Al-Zamili, who leads the parliament's Security and Defense Committee, compared the Islamic State group to raiding medieval Mongols, who in 1258 ransacked Baghdad. Libraries' ancient collections of works on history, medicine and astronomy were dumped into the Tigris River, purportedly turning the waters black from running ink."

View Map + Bookmark Entry

The FCC Rules in Favor of Net Neutrality in the United States February 26 – June 12, 2015

On February 26, 2015, the U.S. Federal Communications Commission (FCC) ruled in favor of net neutrality in the United States by reclassifying broadband access as a telecommunications service and thus applying Title II (common carrier) of the Communications Act of 1934 to Internet service providers. Net neutrality (also called network neutrality, Internet neutrality, or net equality) means that Internet service providers (ISPs) and governments should treat all data on the Internet the same, and should not discriminate or charge differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. The term was coined by Columbia University media law professor Tim Wu in 2003, as an extension of the longstanding concept of a common carrier. The FCC released the specific details of its new net neutrality rule on March 12, 2015. One month later, on April 13, 2015, the FCC published the final rule on its new regulations; the rule took effect on June 12, 2015.

Prior to this ruling there was extensive debate over whether net neutrality should be required by law in the United States. Advocates of net neutrality raised concerns about the capability of broadband providers to use their last-mile infrastructure to block Internet applications and content (i.e., websites, services, and protocols), and even to block out competitors. Opponents claimed net neutrality regulations would deter investment in improving broadband infrastructure.

View Map + Bookmark Entry

A Machine Vision Algorithm Learns to Attribute Paintings to Specific Artists May 2015

In May 2015 Babak Saleh and Ahmed Elgammal of the Department of Computer Science, Rutgers University, described an algorithm that could recognize the style, genre, and artist of a painting.

"Saleh and Elgammal begin with a database of images of more than 80,000 paintings by more than 1,000 artists spanning 15 centuries. These paintings cover 27 different styles, each with more than 1,500 examples. The researchers also classify the works by genre, such as interior, cityscape, landscape, and so on.

"They then take a subset of the images and use them to train various kinds of state-of-the-art machine-learning algorithms to pick out certain features. These include general, low-level features such as the overall color, as well as more advanced features that describe the objects in the image, such as a horse and a cross. The end result is a vector-like description of each painting that contains 400 different dimensions.

"The researchers then test the algorithm on a set of paintings it has not yet seen. And the results are impressive. Their new approach can accurately identify the artist in over 60 percent of the paintings it sees and identify the style in 45 percent of them.

"But crucially, the machine-learning approach provides an insight into the nature of fine art that is otherwise hard even for humans to develop. This comes from analyzing the paintings that the algorithm finds difficult to classify.

"For example, Saleh and Elgammal say their new approach finds it hard to distinguish between works painted by Camille Pissarro and Claude Monet. But a little research on these artists quickly reveals both were active in France in the late 19th and early 20th centuries and that both attended the Académie Suisse in Paris. An expert might also know that Pissarro and Monet were good friends and shared many experiences that informed their art. So the fact that their work is similar is no surprise.

"As another example, the new approach confuses works by Claude Monet and the American impressionist Childe Hassam, who, it turns out, was strongly influenced by the French impressionists and Monet in particular.  These are links that might take a human some time to discover" (MIT Technology Review May 11, 2015).

Saleh, Babak and Elgammal, Ahmed, "Large-scale Classification of Fine-Art Paintings: Learning the Right Metric on the Right Feature" (http://arxiv.org/pdf/1505.00855v1.pdf, 5 May 2015).
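The pipeline described above reduces each painting to a feature vector and then classifies in that space. The toy sketch below shows only the attribution step, using a simple nearest-centroid rule; the paper's 400-dimensional features and learned metric are far richer, and every number and artist grouping here is invented for illustration.

```python
import math

# Each painting is reduced to a feature vector; a new work is attributed to
# the artist whose known works lie nearest in feature space. All vectors
# below are hand-made for illustration (the paper used 400 dimensions).
works = {
    "Monet":    [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]],
    "Pissarro": [[0.7, 0.3, 0.5], [0.6, 0.4, 0.6]],
    "Vermeer":  [[0.1, 0.9, 0.1], [0.2, 0.8, 0.2]],
}

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def attribute(features):
    """Attribute a painting to the artist with the nearest centroid."""
    return min(works, key=lambda artist: dist(features, centroid(works[artist])))

print(attribute([0.85, 0.15, 0.45]))  # Monet
```

Note that the Monet and Pissarro centroids sit close together, so borderline vectors flip between them; that is the toy analogue of the Monet/Pissarro confusion the algorithm exhibited.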

View Map + Bookmark Entry

The New York Times Has Over One Million Paid Digital Subscribers August 6, 2015

On August 6, 2015 the New York Times Company announced that as of July 30 it had passed the one million paid digital-only subscriber mark, less than four-and-a-half years after launching its pay model. This number was in addition to its 1.1 million print-and-digital subscribers.

"In making the announcement, Mark Thompson, president and chief executive officer of The New York Times Company, said, 'This is a major milestone for our digital consumer business, which we launched in 2011 and has continued a strong and steady growth trajectory. It puts us in a unique position among global news providers. We believe that no other news organization has achieved digital subscriber numbers like ours or comparable digital subscription revenue. It’s a tribute to the hard work and innovation of our marketing, product and technology teams and the continued excellence of our journalism.' "

View Map + Bookmark Entry

OCLC Prints the Last Library Catalogue Cards October 1, 2015

On October 1, 2015 OCLC, headquartered in Dublin, Ohio, announced in a news release on its website that it printed its last library catalogue cards. Frankly, I was surprised that OCLC waited this long to discontinue what had been, for all intents and purposes, an obsolete practice for at least ten or twenty years. From their press release:

"OCLC began automated catalog card production in 1971, when the shared cataloging system first went online. Card production increased to its peak in 1985, when OCLC printed 131 million. At peak production, OCLC routinely shipped 8 tons of cards each week, or some 4,000 packages. Card production steadily decreased since then as more and more libraries began replacing their printed cards with electronic catalogs. OCLC has printed more than 1.9 billion catalog cards since 1971.

"Today, most libraries use online public access catalogs (OPACs) as part of an integrated library system, or a cloud-based library management system like OCLC's WorldShare Management Services, where the library catalog and services are hosted and maintained outside the library, in the cloud.

" 'We've already jumped into the new world,' said Nevine Haider, Head of Technical Services, Concordia College Library, in Bronxville, New York, whose catalog cards were among the last printed today. 'We’ve had online public access to our collection for years. The print card catalog has served as our back-up. So we’re ready to move on.'

"WorldCat represents a 'collective collection' of the world’s libraries. WorldCat connects library users to hundreds of millions of electronic resources, including e-books, licensed databases, online periodicals and collections of digital items. As the needs of libraries and their users expand, OCLC works with libraries to collect, manage and share new types of library data to ensure libraries are meeting the expectations of users.

" 'The vast majority of libraries discontinued their use of the printed library catalog card many years ago,' said Prichard. 'The printing of the last cards today is largely symbolic. But it is worth noting that these cards served libraries and their patrons well for generations, and they provided an important step in the continuing evolution of libraries and information science.' "  

View Map + Bookmark Entry

Amazon.com Opens its First Physical Bookshop, "Amazon Books", in Seattle November 2, 2015

On November 2, 2015 Amazon.com announced that it had opened a physical bookstore in Seattle.

"Amazon Books is a physical extension of Amazon.com. We’ve applied 20 years of online bookselling experience to build a store that integrates the benefits of offline and online book shopping. The books in our store are selected based on Amazon.com customer ratings, pre-orders, sales, popularity on Goodreads, and our curators’ assessments. These are fantastic books! Most have been rated 4 stars or above, and many are award winners.

"To give you more information as you browse, our books are face-out, and under each one is a review card with the Amazon.com customer rating and a review. You can read the opinions and assessments of Amazon.com’s book-loving customers to help you find great books.

"Prices at Amazon Books are the same as prices offered by Amazon.com, so you’ll never need to compare our online and in-store prices. Nevertheless, our mobile app is a great way to read additional customer reviews, get more detailed information about a product, or even to buy products online.

"Amazon Books is a store without walls – there are thousands of books available in store and millions more available at Amazon.com. Walk out of the store with a book; lighten your load and buy it online (Prime customers, of course, won’t pay for shipping); buy an eBook for your Kindle; or add a product to your Amazon Wish List, so someone else can buy it.

"At Amazon Books, you can also test drive Amazon’s devices. Products across our Kindle, Echo, Fire TV, and Fire Tablet series are available for you to explore, and Amazon device experts will be on hand to answer questions and to show the products in action.

"Tomorrow is literally Day One for Amazon Books, and we hope you will visit and share your ideas and feedback by dropping a card in our suggestion box or clicking the link at the bottom of the page.

"We are located at 4601 26th Ave. NE in University Village, next to Banana Republic and across from JOEY Kitchen. We are open Monday through Saturday from 9:30 a.m. to 9:00 p.m. and on Sundays from 11:00 a.m. to 6:00 p.m."

View Map + Bookmark Entry

The New York Times Introduces Virtual Reality into its Reporting November 5, 2015

On November 5, 2015 The New York Times introduced its NYT VR smartphone app. Using this app and a free or very inexpensive virtual reality viewer, readers could experience in virtual reality "three portraits of children driven from their homes by war and persecution — an 11-year-old boy from eastern Ukraine named Oleg, a 12-year-old Syrian girl named Hana and a 9-year-old South Sudanese boy named Chuol."

"You can use the app on its own. But the experience is even better with a special virtual reality viewer. Thanks to a partnership with Google, we will be sending free Google Cardboard VR viewers to all domestic New York Times home delivery subscribers who receive the Sunday edition. You should receive your Google Cardboard with your Sunday newspaper by November 8, 2015.

"Times Insider subscribers who have chosen to receive marketing emails will also receive promotional codes via email that can be redeemed for free Cardboard viewers."

View Map + Bookmark Entry

Based on a Single Example, A. I. Surpasses Human Capabilities in Reading and Copying Written Characters December 11, 2015

On December 11, 2015 Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum, from MIT and New York University, reported advances in artificial intelligence that surpassed human capabilities in reading and copying written characters. The key advance was that their algorithm outperformed humans in identifying written characters based on a single example. Until this time, machine learning algorithms typically required tens or hundreds of examples to perform with similar accuracy.

Lake, Salakhutdinov, Tenenbaum, "Human-level concept learning through probabilistic program induction," Science, 11 December 2015, 350, no. 6266, 1332-1338. On December 14, 2015 the entire text of this extraordinary paper was freely available online. I quote the first three paragraphs:

"Despite remarkable advances in artificial intelligence and machine learning, two aspects of human conceptual knowledge have eluded machine systems. First, for most interesting kinds of natural and man-made categories, people can learn a new concept from just one or a handful of examples, whereas standard algorithms in machine learning require tens or hundreds of examples to perform similarly. For instance, people may only need to see one example of a novel two-wheeled vehicle (Fig. 1A) in order to grasp the boundaries of the new concept, and even children can make meaningful generalizations via “one-shot learning” (1–3). In contrast, many of the leading approaches in machine learning are also the most data-hungry, especially “deep learning” models that have achieved new levels of performance on object and speech recognition benchmarks (4–9). Second, people learn richer representations than machines do, even for simple concepts (Fig. 1B), using them for a wider range of functions, including (Fig. 1, ii) creating new exemplars (10), (Fig. 1, iii) parsing objects into parts and relations (11), and (Fig. 1, iv) creating new abstract categories of objects based on existing categories (12, 13). In contrast, the best machine classifiers do not perform these additional functions, which are rarely studied and usually require specialized algorithms. A central challenge is to explain these two aspects of human-level concept learning: How do people learn new concepts from just one or a few examples? And how do people learn such abstract, rich, and flexible representations? An even greater challenge arises when putting them together: How can learning succeed from such sparse data yet also produce such rich representations? For any theory of learning (4, 14–16), fitting a more complicated model requires more data, not less, in order to achieve some measure of good generalization, usually the difference in performance between new and old examples. Nonetheless, people seem to navigate this trade-off with remarkable agility, learning rich concepts that generalize well from sparse data.

"This paper introduces the Bayesian program learning (BPL) framework, capable of learning a large class of visual concepts from just a single example and generalizing in ways that are mostly indistinguishable from people. Concepts are represented as simple probabilistic programs—that is, probabilistic generative models expressed as structured procedures in an abstract description language (17, 18). Our framework brings together three key ideas—compositionality, causality, and learning to learn—that have been separately influential in cognitive science and machine learning over the past several decades (19–22). As programs, rich concepts can be built “compositionally” from simpler primitives. Their probabilistic semantics handle noise and support creative generalizations in a procedural form that (unlike other probabilistic models) naturally captures the abstract “causal” structure of the real-world processes that produce examples of a category. Learning proceeds by constructing programs that best explain the observations under a Bayesian criterion, and the model “learns to learn” (23, 24) by developing hierarchical priors that allow previous experience with related concepts to ease learning of new concepts (25, 26). These priors represent a learned inductive bias (27) that abstracts the key regularities and dimensions of variation holding across both types of concepts and across instances (or tokens) of a concept in a given domain. In short, BPL can construct new programs by reusing the pieces of existing ones, capturing the causal and compositional properties of real-world generative processes operating on multiple scales.

"In addition to developing the approach sketched above, we directly compared people, BPL, and other computational approaches on a set of five challenging concept learning tasks (Fig. 1B). The tasks use simple visual concepts from Omniglot, a data set we collected of multiple examples of 1623 handwritten characters from 50 writing systems (Fig. 2) (see acknowledgments). Both images and pen strokes were collected (see below) as detailed in section S1 of the online supplementary materials. Handwritten characters are well suited for comparing human and machine learning on a relatively even footing: They are both cognitively natural and often used as a benchmark for comparing learning algorithms. Whereas machine learning algorithms are typically evaluated after hundreds or thousands of training examples per class (5), we evaluated the tasks of classification, parsing (Fig. 1B, iii), and generation (Fig. 1B, ii) of new examples in their most challenging form: after just one example of a new concept. We also investigated more creative tasks that asked people and computational models to generate new concepts (Fig. 1B, iv). BPL was compared with three deep learning models, a classic pattern recognition algorithm, and various lesioned versions of the model—a breadth of comparisons that serve to isolate the role of each modeling ingredient (see section S4 for descriptions of alternative models). We compare with two varieties of deep convolutional networks (28), representative of the current leading approaches to object recognition (7), and a hierarchical deep (HD) model (29), a probabilistic model needed for our more generative tasks and specialized for one-shot learning."
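The "one-shot classification" setting the paper evaluates can be illustrated with a deliberately minimal sketch. This is not the authors' BPL model (which represents characters as probabilistic programs over pen strokes); it is a toy Bayesian classifier in which the single training example of each class serves as the mean of an isotropic Gaussian generative model, and a new item is assigned to the class under whose model it is most probable:

```python
import math

def log_likelihood(x, mu, sigma=1.0):
    """Log-probability of vector x under an isotropic Gaussian centered at mu."""
    return sum(-0.5 * ((xi - mi) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for xi, mi in zip(x, mu))

def one_shot_classify(item, examples):
    """One-shot classification: examples maps each class label to a SINGLE
    example vector. With a uniform prior over classes, the posterior is
    maximized by the class with the highest likelihood for the item."""
    return max(examples, key=lambda label: log_likelihood(item, examples[label]))

# One training example per class -- the "one-shot" regime the paper studies.
examples = {"A": (0.0, 0.0), "B": (5.0, 5.0)}
print(one_shot_classify((0.5, -0.2), examples))  # -> A
print(one_shot_classify((4.8, 5.3), examples))   # -> B
```

The gap between this sketch and BPL is the point of the paper: BPL replaces the fixed Gaussian with learned compositional, causal generative programs, so a single example constrains the model far more richly than a distance to a prototype.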


View Map + Bookmark Entry