In Italy Roberto Busa began his experimentation with computerized indexing of the text of Thomas Aquinas using IBM punch-card tabulators in 1949-51. The first significant product of computerized indexing in the humanities in the United States, and one of the earliest large examples of humanities computing or digital humanities anywhere, was the first computerized concordance of the Bible: Nelson's Complete Concordance to the Revised Standard Version Bible edited by J. W. Ellison and published in New York and Nashville, Tennessee in 1957. The book consists of 2157 large quarto pages printed in two columns in small type.
The Revised Standard Version of the Bible was completed in 1952, when the UNIVAC was little-known. UNIVAC I, serial one, was not actually delivered to the U.S. Census Bureau until 1953, and the first UNIVAC delivered to a commercial customer was serial 8 in 1954. Using the UNIVAC to compile a concordance was highly innovative, and, of course, it substantially reduced compilation time, as Ellison wrote in his preface dated 1956. Though Ellison offered to make the program available, he did not provide data concerning the actual time spent in inputting the data on punched cards and running the program:
"An exhaustive concordance of the Bible, such as that of James Strong, takes about a quarter of a century of careful, tedious work to guarantee accuracy. Few students would want to wait a generation for a CONCORDANCE of the REVISED STANDARD VERSION of the HOLY BIBLE. To distribute the work among a group of scholars would be to run the risk of fluctuating standards of accuracy and completeness. The use of mechanical or electronic assistance was feasible and at hand. The Univac I computer at the offices of Remington Rand, Inc. was selected for the task. Every means possible, both human and mechanical, was used to guarantee accuracy in the work.
"The use of a computer imposed certain limitations upon the Concordance. Although it could be 'exhaustive,' it could not be 'analytical'; the context and location of each and every word could be listed, but not the Hebrew and Greek words from which they were translated. For students requiring that information, the concordance of the Holy Bible in its original tongues or the analytical concordances of the King James Version must be consulted. . . .
"The problem of length of context was arbitrarily solved. A computer, at least in the present stage of engineering, can perform only the operations specified for it, but it will precisely and almost unerringly perform them. In previous concordances, each context was made up on the basis of a human judgment which took in untold familiarity with the text and almost unconscious decisions in g rouping words into familiar phrases. This kind of human judgement could not be performed by the computer; it required a set of definite invariable rules for its operation. The details of the program are available for those whose interest prompts them to ask for them."
The March 1956 issue of Publishers' Weekly, pp. 1274-78, in an article entitled "Editing at the Speed of Light," reported that Ellison's concordance deliberately omitted 132 frequent words: articles, most conjunctions, adverbs, prepositions, and common verbs.
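The procedure Ellison describes — fixed, rule-based contexts rather than human judgment, plus a list of omitted common words — is essentially what a modern keyword-in-context (KWIC) concordance program does. A minimal sketch in Python, where the stopword list and context window are illustrative assumptions, not Ellison's actual 132-word list or rules:

```python
# Minimal keyword-in-context (KWIC) concordance sketch.
# STOPWORDS and WINDOW are illustrative stand-ins, not the 132-word
# list or the fixed context rules actually used for the 1957 volume.
from collections import defaultdict

STOPWORDS = {"the", "and", "of", "a", "in", "to", "was"}
WINDOW = 3  # words of context on each side, fixed by rule

def build_concordance(verses):
    """Map each non-stopword to a list of (reference, context) entries."""
    concordance = defaultdict(list)
    for ref, text in verses:
        words = text.lower().replace(".", "").replace(",", "").split()
        for i, word in enumerate(words):
            if word in STOPWORDS:
                continue  # rule-based omission of frequent words
            context = " ".join(words[max(0, i - WINDOW): i + WINDOW + 1])
            concordance[word].append((ref, context))
    return dict(concordance)

verses = [
    ("Gen 1:1", "In the beginning God created the heavens and the earth."),
    ("Jn 1:1", "In the beginning was the Word."),
]
conc = build_concordance(verses)
for ref, ctx in conc["beginning"]:
    print(f"{ref:8} {ctx}")
```

The fixed window is the mechanical substitute for the "almost unconscious" phrase-grouping decisions Ellison says a human editor makes: every entry gets the same rule-determined context, exactly as the computer requires.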
"From an account in the periodical Systems it appears that the text of the Bible was transferred direct to magnetic tape, using a keyboard device called the Unityper (McCulley 1956). This work took nine months (800,000 words). The accuracy of the tapes was checked by punching the text a second time, on punched cards, then transferring this material to magenetic tape using a card-to-tape converter. The two sets of tapes were then compared for divergences by the computer and discrepancies eliminated. The computer putput medium was also magnetic tape and this operated a Uniprinter which produced the manuscrpt sheets ready for typesetting" (Hymes ed., The Use of Computers in Anthropology [1965] 225).