Academic journal article Journal of Biblical Literature

Computerized Source Criticism of Biblical Texts

Article excerpt

"We instruct the computer to ignore what we call grammatical words: articles, prepositions, pronouns, modal verbs, which have a high frequency rating in all discourse. Then we get to the real nitty-gritty, what we call the lexical words, the words that carry a distinctive semantic content. Words like love or dark or heart or God. Let's see." So he taps away on the keyboard and instantly my favourite word appears on the screen.

- David Lodge, Small World (1984)

In this article, we introduce a novel computerized method for source analysis of biblical texts. The matter of the Pentateuch's composition has been the subject of some controversy in modern times. From the late nineteenth century until recent years, the Documentary Hypothesis was the most prevalent model among Bible scholars. Since then, scholars have increasingly called into question the existence of some or all of the postulated documents. Many prefer a supplementary model to a documentary one, while others believe the text to be an amalgam of numerous fragments. The closest thing to a consensus today (and it too has its detractors) is that there exists a certain meaningful dichotomy between Priestly (P) and non-Priestly texts.1

The various source analyses that have been proposed to date are based on a combination of literary, historical, and linguistic evidence. Our research is a first attempt to put source analysis on as empirical a footing as possible by marshaling the most recent methods in computational linguistics. The strength of this approach lies in its "robotic" objectivity and rigor. Its weakness is that it is limited to certain linguistic features and does not take into account any literary or historical considerations.

Though this work does not address the question of editorial model, we do hope it might contribute to the fundamental issue of literary origins. For cases in which scholars have an idea how many primary components are present, our new algorithmic method can disentangle the text with a high degree of confidence.

The method is a variation on one traditionally employed by biblical scholars, namely, word preference. Synonym choice can be useful in identifying schools of authors, as well as individuals. Occurrences in the text of any one word from a set of synonyms are, however, relatively infrequent. Therefore, synonyms are useful for teasing out some textual units, but not all. Accordingly, we use a two-stage process. We first find a reliable partial source division based on synonym usage. (Only a preference of one term over its alternative is registered; the context in which it is used is ignored.) In the second stage, we analyze this initial division for more general lexical preferences and extrapolate from these to obtain a more complete and fine-grained source division.
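The two-stage procedure described above can be sketched in outline. The following is a minimal illustration, not the authors' actual implementation: the synonym pair, the segmentation of the text into word-list "units," and the use of cosine similarity over raw word counts as the measure of lexical preference are all assumptions made here for the sake of the example.

```python
from collections import Counter
from math import sqrt

# Hypothetical synonym pair (two Hebrew terms for "staff"), used only to
# illustrate stage 1; the real method draws on many synonym sets.
SYNONYM_PAIR = ("makel", "matteh")

def stage1_seed(units):
    """Stage 1: partial source division. Label only those units that show
    a clear preference for one synonym over its alternative; context of
    use is ignored, just as in the article's description."""
    seeds = {}
    for i, words in enumerate(units):
        a = words.count(SYNONYM_PAIR[0])
        b = words.count(SYNONYM_PAIR[1])
        if a > b:
            seeds[i] = 0
        elif b > a:
            seeds[i] = 1
    return seeds  # units with no preference remain unlabeled

def cosine(c1, c2):
    """Cosine similarity between two word-frequency Counters."""
    num = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    den = (sqrt(sum(v * v for v in c1.values()))
           * sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

def stage2_extend(units, seeds):
    """Stage 2: build a lexical profile from each seeded class, then
    assign every unit (labeled or not) to the nearer profile."""
    profiles = [Counter(), Counter()]
    for i, label in seeds.items():
        profiles[label].update(units[i])
    assignment = {}
    for i, words in enumerate(units):
        vec = Counter(words)
        assignment[i] = 0 if cosine(vec, profiles[0]) >= cosine(vec, profiles[1]) else 1
    return assignment
```

On a toy corpus of three units, the first two are seeded by synonym preference in stage 1, and the third, which uses neither synonym, is pulled toward the class whose general vocabulary it shares in stage 2.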

As noted, the advantage of a numerical lexical approach is its fundamentally objective nature. While potentially valuable, literary observations and historical reconstructions are prone to controversy. For instance, a repetition may be viewed by one scholar as a telltale sign of multiple sources and by another as an intentional literary device. While our computerized method is objective and powerful, the narrow focus on lexical analysis can occasionally lead to anomalous assignments of provenance that elementary nonlexical considerations (ideology, narrative consistency, repetitions, continuity, etc.) would have precluded.

The algorithm is generic in that it can be applied to any collection of biblical texts (or, for that matter, to other corpora). Apart from the consonantal text, the only information used is Strong's Concordance, for the purpose of sense disambiguation and synonymy.2 No prior knowledge regarding authorship is required. Thus, we will confirm the overall effectiveness of our method by testing it on an artificial book, Jer-iel, constructed by randomly interweaving the books of Jeremiah and Ezekiel. …
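The "Jer-iel" validation text can be constructed mechanically. The sketch below shows one way to interweave two books at random while preserving each book's internal order; the unit granularity (verse, pericope, etc.) and the uniform coin-flip merge are assumptions of this illustration, not details specified in the excerpt.

```python
import random

def interweave(book_a, book_b, seed=0):
    """Randomly merge two ordered lists of textual units into one
    composite 'book', keeping each source's units in their original
    order and recording the true provenance of every unit."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    merged, labels = [], []
    i = j = 0
    while i < len(book_a) or j < len(book_b):
        # Flip a coin unless one book is exhausted.
        take_a = j >= len(book_b) or (i < len(book_a) and rng.random() < 0.5)
        if take_a:
            merged.append(book_a[i]); labels.append("Jer"); i += 1
        else:
            merged.append(book_b[j]); labels.append("Ezek"); j += 1
    return merged, labels
```

The returned labels serve as ground truth: running the source-division algorithm on the merged text and comparing its output against them measures how well the method recovers the known components.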
