A Mathematical Model For The Process Of New Testament Scribal Copying

Introduction

The process of New Testament Textual Criticism rests upon a number of canons, or a priori assumptions about the past behavior of New Testament copyists. The value of extant manuscripts and the correct method of blending them are then debated in the utmost depth: but the value of those discussions is predicated upon the veracity of the underlying canons. Whilst there has been some work done challenging some of these core assumptions[1], most of the challenges have either been unconvincing or have striven to replace the core assumptions with alternative ones which are equally unsatisfactory.

In a recently published internet book[2] Andrew Wilson again seeks to challenge some of the existing canons based upon research he has done using the concept of the singleton: those places where only one manuscript differs from all the others. Further, Wilson attempts to formalize and quantify the effect of various other canons to produce a value equation so that each variant can be scientifically weighed before other internal evidence is considered.

It is my belief, and the contention of this paper, that both the traditional canons and the recent challenges are overly simplified. Further, the attempts at statistical validation made so far have used small samples to reach sizeable conclusions, and the mathematics of small-sample theory has therefore worked against the researchers. By today's standards the New Testament is an incredibly small corpus: we have ample computing resources available to study every variant at the micro level without needing to involve sample theory.

Additionally I believe we need to construct a process model of the scribe, or scribes, that have edited the documents over the years. Then, using the facts that we can glean from the underlying statistical patterns within the texts, we should be able to evolve our model into an accurate picture of how errors have entered and why. As the model becomes more accurate it will be easier to reverse the process and derive the underlying autographs with maximum probability.

This paper is being written by an experienced mathematician and process theorist who is dabbling in textual criticism; further, it is being written as part of an MDiv program. There is thus an obvious danger that it will skip enough simple mathematics to confuse the layman, skip enough complex mathematics to annoy a mathematician, and show enough naivety in the field of textual criticism to draw the ire of the theologian. Notwithstanding, I have not seen another comprehensive attempt to construct a mathematical process framework for scribal behavior, so I hope it will at least provoke those that are able to do better to do so.

Counting Errors

The whole of lower textual criticism is focused upon the detection and correction of errors in the scribal copying process. It is theologically interesting that God chose not to impart to the early church the necessary knowledge to make manuscript copying an error-free process. In fact the copying methodologies of the Hebrew scribes included many safeguards[3] that would have prevented the wide variance that we see in the New Testament manuscripts. Yet we find that the New Testament has a large number of manuscripts which differ from each other quite significantly. It will offend some people even to hear that said: nonetheless it is true, and given that our God is omnipotent and omniscient we must assume the current situation is deliberate.

A reasonable start in the process of textual criticism would therefore be an attempt to ascertain which of the manuscripts contain the most errors. This would not necessarily tell us how to correct those errors, but at least we would have a metric for comparing the incoming evidence. This is generally considered to be an intractable problem: how can you tell which manuscripts have errors until you have decided what the text should say? And you can't tell what the text should say without having already evaluated the relative veracity of the incoming manuscripts.

Wilson[4] proposes a potential solution to this problem: the singular reading. Singular readings are those places where all the major manuscripts are agreed on a particular reading except for one. As the chances of that one being right and the others being wrong are negligible it can be assumed that the singular reading is an error. Wilson therefore proposes that a count of the singletons in each manuscript gives an approximation to the number of errors in the manuscript and that it is therefore an indication of its quality.

To understand the strength and weakness of this argument we will need to resort to a little nomenclature. Let A0 be generation zero, the original autographs. Then let Ci(A0) be a copy made of A0 by the ith scribe. Let E() denote the total number of errors in a given manuscript. Then assuming inerrant inspiration of the original autographs we have E(A0) = 0. We assume that E(Ci(A0)) > 0. If we had a collection of n copies of A0 and wanted to find the best one then we would want to compute:

min_i E(Ci(A0))

That is to say we would want to know the i (the scribe) who copied in such a way that the minimum number of errors went into his copy of the original autographs. Now let us define the number of singletons in a document as S(). As we are defining every singleton to be an error, and yet we believe there are errors other than singletons, we know that for all x: S(x) <= E(x). Now if we assume that the ratio of singletons to errors is the same for every document then we may assert: S(x) = k.E(x) for some k. We still don't know k but at least we can go back to our minimization equation and note:

min_i E(Ci(A0)) = min_i S(Ci(A0)) / k

Thus we can simply assert that the best texts are those with the fewest singletons. In fact, we can also make a reasonable approximation to k. If we know the number of singletons per hundred words, and we know the number of manuscripts we have, and if we assume that errors in a document are random events in the underlying text, then we can compute the chances of two manuscripts having acquired the same error by chance. If the number of singletons per hundred words is low, and if the number of manuscripts is relatively low, then the chance of an error having been hidden is approximately[5]:

n * S(Ci(A0)) / 100

One minus this quantity therefore gives us our approximation to k, and we finally have a mathematical basis upon which to work.
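To make the singleton count concrete, here is a minimal sketch of S() in Python. It assumes the witnesses have already been collated into aligned readings at each variation unit (itself a substantial task), and the manuscript names and readings are invented purely for illustration.

```python
from collections import Counter

# Each manuscript is a list of readings, one per variation unit, already
# aligned across all witnesses (a large simplification of real collation).
manuscripts = {
    "MS1": ["a", "b", "c", "d", "e"],
    "MS2": ["a", "b", "c", "d", "x"],   # one singular reading, at the fifth unit
    "MS3": ["a", "y", "c", "d", "e"],   # one singular reading, at the second unit
}

def singleton_counts(mss):
    """Count, for each manuscript, the variation units where it stands alone."""
    counts = Counter({name: 0 for name in mss})
    n_units = len(next(iter(mss.values())))
    for unit in range(n_units):
        readings = Counter(ms[unit] for ms in mss.values())
        for name, ms in mss.items():
            if readings[ms[unit]] == 1:     # no other witness shares this reading
                counts[name] += 1
    return counts

print(singleton_counts(manuscripts))        # MS2 and MS3 each show one singleton
```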

Generations of Error

Unfortunately the mathematics of the above model is based upon too many assumptions to be valid. The key assumption that underpins all of the above, and that is blatantly false, is that all of the manuscripts we have are copies of A0. The reality is that the manuscripts we have are almost certainly copies of copies, probably many generations removed from the originals.

A third-generation copy should properly be written Ck(Cj(Ci(A0))) but I am going to write that informally as Ci^3. Now each copying act of a scribe will add new errors, and if we assume that each scribe in a given generational tree is equally accurate then we would expect[6]:

E(Ci^x) = x.E(Ci)

However, the singleton counts will not accumulate the same way. If we have Ci^1, Ci^2, Ci^3 then we will actually find: S(Ci^1) = 0, S(Ci^2) = 0, S(Ci^3) = k.E(Ci)[7]

The reason is that subsequent generations copy the parent's errors. When the parent committed an error it was unique and thus a singleton; once it has been copied it loses its singleton status. Thus any manuscript that has been fully or partially copied into another manuscript will have an artificially low singleton count. Further, manuscripts towards the end of the copying trees will have singleton counts that accurately reflect the precision of the final copying scribe; however they will have many extra accumulated errors which the singletons miss. Specifically, to a first approximation we will find:

S(Ci^x) = k.E(Ci^x)/x

This shows that a straightforward singleton count is going to be significantly biased towards later generations of documents if significant numbers of ancestor or cousin documents exist. It is difficult to measure the bias in terms of years without knowing the average number of years between generations of a document. Suppose we assume fifty years between generations. Then a 350 AD document such as Vaticanus B would be approximately 5th generation. Vaticanus 354 (S), dated 949 AD, would be approximately 17th generation. Therefore the singleton count for S would be roughly three times lower than that for B if they contain the same number of actual errors.
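The flattening of the singleton count along a copying chain can be illustrated with a small simulation under the essay's own simplifying assumptions: a constant ten fresh errors per copying act, no two scribes ever producing the same error by chance, and every generation of the chain surviving to be counted. The figures are synthetic; only the shape of the result matters.

```python
import itertools

ERRORS_PER_COPY = 10        # the assumed constant scribal error rate
GENERATIONS = 5

# Represent each manuscript simply as the set of error identifiers it carries.
# A0 carries none; each copy inherits its parent's errors and adds fresh ones.
fresh = itertools.count()
chain = [set()]                                        # generation 0 is A0
for _ in range(GENERATIONS):
    parent = chain[-1]
    chain.append(parent | {next(fresh) for _ in range(ERRORS_PER_COPY)})

def singletons(ms, all_mss):
    """Errors carried by ms and by no other extant manuscript."""
    others = set().union(*(m for m in all_mss if m is not ms))
    return len(ms - others)

for gen, ms in enumerate(chain):
    print(f"generation {gen}: E = {len(ms)}, S = {singletons(ms, chain)}")
# E grows by ten per generation, but every manuscript that was itself copied
# shows S = 0; only the last copy in the chain retains any singular readings.
```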

Family Features

Whilst singletons are probably not an accurate indicator of the overall error count in a document they may well be extremely strong indicators of something else: document divergence. Going back to our original concept, consider two copies made of the original autographs, C1(A0) and C2(A0). These two copies will each contain a number of errors, say ten each. Now assume the first copy was copied again, making C3(C1(A0)). Then two copies were made of C2, forming C4(C2(A0)) and C5(C2(A0)), and A0 and the first two copies were destroyed, leaving a total of three copies. Then let the two copies just made of C2 each be copied, destroying the originals and forming C6(C4(C2(A0))) and C7(C5(C2(A0))). Finally let C3(C1(A0)) be copied twice and then destroyed, forming C8(C3(C1(A0))) and C9(C3(C1(A0))). We now have four copies, with error counts and singleton counts as shown:

  i. C8(C3(C1(A0))) : E() = 30, S() = 10
  ii. C9(C3(C1(A0))) : E() = 30, S() = 10
  iii. C6(C4(C2(A0))) : E() = 30, S() = 20
  iv. C7(C5(C2(A0))) : E() = 30, S() = 20

These, admittedly contrived, numbers suggest that the singleton count really represents the amount of scribal modification that has occurred since a particular document diverged from the other documents represented in the sample. A high degree of modification may be due to one particularly sloppy scribe but could equally represent a long chain of relatively careful copying.

But the above documents actually present a more intriguing possibility: my somewhat convoluted tale of scribal copying could actually be reconstructed from the four remaining documents. To see how, we need to introduce the concept of the doubleton. A doubleton is any variant contained in exactly two documents. Considering the four documents above, we will see that i and ii share twenty doubletons[8]: the ten errors introduced by C1 and the ten errors introduced by C3. i will not share any doubletons with iii or iv, and neither will ii. iii will share ten doubletons with iv, namely the errors introduced by C2, and will not share any with either i or ii.

The fact that the doubletons are shared between i and ii and also between iii and iv tells us that we have two families formed by the original two copies C1 and C2. That i and ii have twice as many doubletons as singletons tells us[9] that the copying chain between the point their family diverged and the point they diverged from each other is twice as long as the chain from the point they diverged from each other to the extant copies. Similarly, the fact that iii and iv have twice as many singletons as doubletons suggests that twice as many generations have occurred since those documents diverged from each other as occurred between the divergence of the family line and the divergence of the two extant documents.
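These singleton and doubleton counts can be checked with a short sketch. The error labels below merely record which copying act introduced each error; they stand in for real variant data and carry no other meaning.

```python
from collections import Counter
from itertools import combinations

def errors(*copyists, per_copy=10):
    """The errors a manuscript carries: ten labelled errors per copying act."""
    return {f"{c}-{n}" for c in copyists for n in range(per_copy)}

# The four surviving documents from the contrived copying history above.
docs = {
    "i":   errors("C1", "C3", "C8"),
    "ii":  errors("C1", "C3", "C9"),
    "iii": errors("C2", "C4", "C6"),
    "iv":  errors("C2", "C5", "C7"),
}

# How many of the extant documents carry each individual error?
support = Counter(e for d in docs.values() for e in d)

for name, d in docs.items():
    print(name, "E =", len(d), "S =", sum(1 for e in d if support[e] == 1))

for a, b in combinations(docs, 2):
    doubletons = sum(1 for e in docs[a] & docs[b] if support[e] == 2)
    print(a, "&", b, "doubletons =", doubletons)
# i & ii share twenty doubletons (C1's and C3's errors), iii & iv share ten
# (C2's errors), and the cross-family pairs share none, exactly as claimed.
```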

Family Tree Reconstruction

The division of New Testament manuscripts into families is a task that has been pursued via manual methods for at least two hundred years[10]. A significant body of research already exists in the field of automatic family tree reconstruction based upon the similarities, or otherwise, of the different New Testament manuscripts[11]. Oddly for a mathematician, I believe that in this particular context we need to strip away the mathematics to see the defects in the techniques currently employed.

These trees, or more accurately dendrograms, may not accurately represent the true family tree of the extant manuscripts but they can at least show graphically some very key information about the manuscripts and the relationships between them. The statistical technique behind them is called clustering, and it is well understood even within the field of textual criticism.

Whilst the algorithms may be complex the concept is very simple. Imagine a graph where every point on the graph represents a single manuscript. The distance between the points represents the amount of difference between the two manuscripts. Two highly different manuscripts would be far apart; those that are similar would be close. A clustering algorithm then[12] says: "Join every point to its closest neighbor". This is the first pass of the algorithm, and those manuscripts joined together now form a collection of short joined trees. In the second pass each cluster (of two points) is joined to the nearest cluster. We now have half the number of trees and each tree now has three generations. This clustering continues until all of the points are eventually joined together.

At least theoretically each internal node on the tree could represent a 'missing' manuscript that was copied twice and then destroyed. As each cluster is produced, the manuscripts in that cluster approximately define the manuscript that was the parent of the cluster. The hope is that the parent of the entire graph, once the clustering is complete, is the original autograph A0.
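As a concrete illustration of the pass just described, here is a minimal single-linkage sketch over an invented distance matrix. Real work would use a library routine (for example the hierarchical clustering in SciPy) on genuine collation data; the explicit loop is shown only to make the "join the nearest pair" idea visible.

```python
from itertools import combinations

# Invented pairwise distances (number of disagreements between manuscripts):
# A and B are close, as are C and D, while the two pairs lie far apart.
distance = {
    frozenset(pair): d for pair, d in [
        (("A", "B"), 2), (("A", "C"), 9), (("A", "D"), 10),
        (("B", "C"), 9), (("B", "D"), 10), (("C", "D"), 3),
    ]
}

def dist(x, y):
    """Single-linkage distance between two clusters (tuples of manuscript names)."""
    return min(distance[frozenset((a, b))] for a in x for b in y)

clusters = [("A",), ("B",), ("C",), ("D",)]
while len(clusters) > 1:
    # Join the closest pair of clusters; the new node stands for a lost ancestor.
    x, y = min(combinations(clusters, 2), key=lambda pair: dist(*pair))
    clusters = [c for c in clusters if c not in (x, y)] + [x + y]
    print("joined", x, "and", y)
# The last surviving cluster contains every manuscript and is the hoped-for A0.
```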

Whilst these techniques do produce fairly credible trees, it is also found that very few of the documents fall quite as cleanly into a tree pattern as would be expected. Documents that generally appear to fall into one family will occasionally agree with others that are generally from a different line. There are some occasions when there are genuine reasons for this, and these should lead us to build a more complex copying model. However I also believe that we have grossly oversimplified the metric currently used to measure the distance between two manuscripts, and that refining it will lead to significantly better results.

Agreements and Disagreements

In the section Family Features we made some statements regarding the number of doubletons shared between manuscripts i and ii, and also iii and iv. However the statements made were not technically correct. We counted the number of doubletons formed on that particular family line; however the fact is that an error down the other family line can cause the first family line to appear to share an agreement even if the documents have already diverged. In particular consider the copy C3. It will introduce doubletons between i and ii quite correctly. However it also introduces doubletons into iii and iv even though their text does not change and they have already diverged by that point. The reality is that the i and ii doubletons are genuine doubletons of agreement; the iii and iv ones are really not agreements: they are coincidences, or arguably votes of no opinion.

In a case as small as my sample, where half of the manuscripts are affected, it is impossible to distinguish these coincidences from genuine agreements. However, in the general problem, where we usually have at least 15 manuscripts to compare, we probably can. Most errors introduced later in the tree will not produce evenly split groups of difference. Instead two or three manuscripts will have an error introduced while the remaining twelve are unaffected. My claim is that the two or three documents have genuinely agreed; the remaining twelve have not really agreed upon their text - they merely share it by the coincidence of not having had the error introduced.

Whilst this may seem an arcane technical argument it actually makes a significant difference to where the dots are placed on our clustering graph. Presently critics tend to use a similarity matrix. This is really just a grid showing the number of places in the text where a variation occurs and two documents have the same reading. For the reasons just discussed this matrix will often give falsely high correlations to groups of documents that are not actually agreeing but merely lie off the path of some other source that is diverging heavily.

Consider the following list of variants[13]:

  i. V1 ABCFGH // V2 IJ // V3 DE
  ii. V1 ABCDEFGH // V2 IJ
  iii. V1 ABCDE // V2 FGHIJ
  iv. V1 ABCDEFGH // V2 IJ
  v. V1 ABCDEFGH // V2 IJ
  vi. V1 ABCDE // V2 FGHIJ
  vii. V1 ABCDEFGH // V2 IJ
Please note that this website now has a tool for computing similarity matrices.

The similarity matrix is simply a count of agreements divided by the number of places where both manuscripts have an opinion, and would look like this:

      A    B    C    D    E    F    G    H    I    J
A   7/7  7/7  7/7  6/7  6/7  5/7  5/7  5/7  0/7  0/7
B   7/7  7/7  7/7  6/7  6/7  5/7  5/7  5/7  0/7  0/7
C   7/7  7/7  7/7  6/7  6/7  5/7  5/7  5/7  0/7  0/7
D   6/7  6/7  6/7  7/7  7/7  4/7  4/7  4/7  0/7  0/7
E   6/7  6/7  6/7  7/7  7/7  4/7  4/7  4/7  0/7  0/7
F   5/7  5/7  5/7  4/7  4/7  7/7  7/7  7/7  2/7  2/7
G   5/7  5/7  5/7  4/7  4/7  7/7  7/7  7/7  2/7  2/7
H   5/7  5/7  5/7  4/7  4/7  7/7  7/7  7/7  2/7  2/7
I   0/7  0/7  0/7  0/7  0/7  2/7  2/7  2/7  7/7  7/7
J   0/7  0/7  0/7  0/7  0/7  2/7  2/7  2/7  7/7  7/7

To see how misleading this can be, look at the column for H. What are the closest manuscripts to it? F and G. But now what are the next closest? If your answer was anything but I and J then the matrix has misled you. What almost certainly happened with the above is that two groups formed, ABCDE and FGHIJ, with two main points of disagreement at units iii and vi. Then I and J were fairly heavily mutilated, which has led to a high degree of apparent agreement between the other manuscripts - agreement which is actually entirely false.

I believe these traditional matrices can be improved by attempting to compute the amount of non-coincidental agreement. This should really be done by modelling the probability of two documents turning up in the same variant by chance, which involves the statistics of hypothesis testing within a binomial distribution[14]. However I propose that a simple first-order approximation is given by counting the degree of agreement as (1 - (proportion of manuscripts sharing the reading)^2). Thus when there are ten manuscripts and two are agreeing against eight, the amount of agreement between the two is (1 - (0.2 * 0.2)) = 0.96. The amount of agreement between the eight is (1 - (0.8 * 0.8)) = 0.36. Thus each false agreement counts as roughly one third of a genuine one. Noisy manuscripts still manage to damage the matrices but the effect is significantly reduced[15].
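A sketch of both the raw similarity count and the proposed weighting, applied to the ten-witness example above, follows. Encoding each variation unit as the groups of witnesses sharing a reading is my own convenience rather than an established format, and the weighting is exactly the 1 - p^2 approximation proposed here rather than the fuller binomial treatment.

```python
from collections import defaultdict
from itertools import combinations

witnesses = "ABCDEFGHIJ"
# The seven variation units above, each unit listed as its groups of witnesses.
variants = [
    ["ABCFGH", "IJ", "DE"],
    ["ABCDEFGH", "IJ"],
    ["ABCDE", "FGHIJ"],
    ["ABCDEFGH", "IJ"],
    ["ABCDEFGH", "IJ"],
    ["ABCDE", "FGHIJ"],
    ["ABCDEFGH", "IJ"],
]

raw = defaultdict(int)          # plain count of agreements
weighted = defaultdict(float)   # agreements discounted for probable coincidence

for unit in variants:
    for group in unit:
        p = len(group) / len(witnesses)
        weight = 1 - p * p      # small groups score near 1, large groups far less
        for a, b in combinations(group, 2):
            raw[frozenset((a, b))] += 1
            weighted[frozenset((a, b))] += weight

for other in "FIA":
    pair = frozenset(("H", other))
    print(f"H-{other}: raw {raw[pair]}, weighted {weighted[pair]:.2f}")
# The weighting lifts H's genuine partners (F, and to a lesser degree I and J)
# relative to the merely coincidental ones, though, as noted, the distortion
# caused by the heavily mutilated I and J is reduced rather than removed.
```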

Divergence Tracing

Even with the altered similarity matrix the trees produced by clustering still have internal nodes that are poor approximations to the actual divergence points of the underlying manuscripts. This is because the clustering is performed on the document distance, and therefore manuscripts that stayed close to the original are more likely to end up paired than those that actually stayed closer to each other. We can see this using mathematical notation:

  i. C1(A0)
  ii. C2(A0)
  iii. C7(C6(C5(C2(A0))))

Assuming as before that ten errors are introduced with each copy, we would find that i and ii are textually closer, at twenty deviations, than ii and iii, which are thirty deviations apart. A traditional clustering technique would thus link i to ii first and then join iii to the result. However ii and iii will share ten doubletons[16] which are really a fingerprint of the copying process C2. I suggest that if a distance metric is constructed based primarily upon these rare co-occurrences then the intermediate nodes are more likely to represent actual points of divergence on the copying tree[17]. The simplest way to do this, which should be attempted first, would be to link those that share the most doubletons, moving to tripletons in the event of a tie. This then progresses outwards so that the documents are linked on the basis of those that have agreed most.

If we apply this to our tabulated data we see that first I and J pair with five doubletons. Then D and E pair with one doubleton. Then F, G, H and the IJ pair group together on a four-way match[18], as do A, B and C with the DE pair. This accurately replicates the probable copying pattern. Of course the example is simplistic and somewhat contrived, but I consider the technique to be worth attempting.
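A sketch of the doubleton-first linking on the same ten-witness data follows; it simply tallies how often each pair of witnesses stands alone together, which is the quantity the first linking pass would consume. The encoding is the same invented one used above.

```python
from collections import defaultdict

# The seven variation units, encoded as before.
variants = [
    ["ABCFGH", "IJ", "DE"],
    ["ABCDEFGH", "IJ"],
    ["ABCDE", "FGHIJ"],
    ["ABCDEFGH", "IJ"],
    ["ABCDEFGH", "IJ"],
    ["ABCDE", "FGHIJ"],
    ["ABCDEFGH", "IJ"],
]

def shared_groups_of_size(units, size):
    """Count how often each group of exactly `size` witnesses stands alone together."""
    counts = defaultdict(int)
    for unit in units:
        for group in unit:
            if len(group) == size:
                counts[frozenset(group)] += 1
    return counts

doubletons = shared_groups_of_size(variants, 2)
for pair, n in sorted(doubletons.items(), key=lambda kv: -kv[1]):
    print("".join(sorted(pair)), n)
# I and J stand alone together five times and D and E once, so IJ link first
# and DE second; larger shared groups then drive the later links, as described.
```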

Manuscript Divisions

Another source of error in clustering New Testament manuscripts is the assumption that each manuscript is a cohesive unit. This assumption is all the more astounding given we have absolute conclusive proof that it is wrong. Even our most celebrated early manuscripts show that they are compilations from what are generally considered divergent families. Specifically:

Codex Sinaiticus: Alexandrian with strains of Western

Codex Alexandrinus: Byzantine Gospels, Alexandrian remainder

Codex Ephraemi Rescriptus: Pieces of everything with a slight bias to Byzantine

This tells us that as early as the fourth century the model of taking a single manuscript and copying it whole was already inaccurate. Each copy produced was a compilation of copies of different manuscripts. This may be as simple as some books being taken from one manuscript and other books from others. It could also mean that scribes were already performing an ad-hoc merge process as a kind of early lower textual criticism.

The consequence mathematically is that a portion of text in a heavily Alexandrian manuscript (such as Sinaiticus) may have been clustered with another Alexandrian document even though the portion itself is a close affiliate of some other family. This dilutes the statistical strength of the distance metrics. Worse, it means that when document families are being considered in later stages of criticism the text in hand may be treated as belonging to one family when in fact that particular piece comes from another.

I propose that we therefore have to divide each manuscript up for the purposes of clustering. The most obvious division would be upon book boundaries. We know that most of the books of the New Testament were written individually and it is not too unreasonable to suppose that books were a logical division in which scribes would work and in which manuscripts would be transmitted. Dividing the manuscripts in this way and then clustering each fragment we may also get some indication as to just how cohesive these manuscripts are. If we find the fragments of a given manuscript all cluster the same way then we may assume it is cohesive. If the fragments end up a significant distance apart then we probably have a compilation.

An alternative method of separating the manuscripts would be to attempt to spot copying boundaries. It is possible that this can be done through the singleton or error distribution in the different pieces of text. Wilson has done some research that suggests that the singleton count per hundred words varies greatly within some manuscripts whilst remaining relatively static within others. These shifts in error rate could reflect differing qualities of underlying source document, or at least different underlying source documents that are related to differing degrees to the other sources in the sample.
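Spotting such shifts could begin with something as simple as a windowed singleton rate. The sketch below assumes the word positions of a manuscript's singular readings are already known; the figures are invented to show a manuscript whose later portion is noticeably noisier, as might happen if the scribe switched to a poorer exemplar partway through.

```python
def singleton_rate_profile(word_count, singleton_positions, window=100):
    """Singletons per hundred words in consecutive windows of the manuscript."""
    rates = []
    for start in range(0, word_count, window):
        in_window = sum(start <= p < start + window for p in singleton_positions)
        rates.append(100.0 * in_window / window)
    return rates

# Invented data: a 600-word stretch of text with its singular readings at these
# word positions; the jump in rate after word 300 would mark a candidate boundary.
positions = [40, 250, 310, 330, 360, 420, 450, 480, 510, 590]
print(singleton_rate_profile(600, positions))   # [1.0, 0.0, 1.0, 3.0, 3.0, 2.0]
```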

A somewhat uncomfortable corollary that comes from the discussion of manuscript division is that A0 itself may be a purely theoretical entity that never existed. Whilst each New Testament book was inspired and thus perfect when produced, the books were produced independently. We know that copies of them existed at an early stage, as Peter had copies of Paul's letters. We therefore have to consider the possibility that by the time the canon was formed the documents used for forming it were already generational copies of the original autographs. Thus it could easily be the case that competing versions of A0 were actually formed and that each had better examples of some of the books.

Document Reconstruction

Whilst the reconstruction of the copying paths of extant manuscripts may be a fascinating puzzle to solve, an entirely valid question is: so what? It is always fascinating to know how things happen, but as believers we are much more interested in the word of God, and this essay appears to have traveled a long way from scribes copying autographs. Even if we had a fully accurate family tree how does this help us to discover what the original autographs said?

The answer is that, provided we are prepared to believe in an economy of errors, the tree allows us to trace where an error was introduced and therefore which text has the true reading and which the error. This is easily seen in the following example with four witnesses split over two variants:

V1: C7(C3(C2(A0))) C6(C1(A0)) // V2: C4(C1(A0)) C5(C4(C1(A0)))

At first glance this is ambiguous; two manuscripts vote for each variant and both chains have similar ages. However, looking at the tree we see that the two documents bidding for the second variant are closely related, whereas those for the first variant are not. In fact from this example it is pretty clear that C4 introduced the error and that V1 is the correct variant.

The full-text case will not be that simple, but it is always possible to compute the version of the text for the interior nodes that in turn requires the fewest edits to produce each of the eventual child texts. Of course this result may be wrong, but it is at least the most mathematically probable text, which is a starting point.
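The "fewest required edits" computation is essentially a small parsimony problem. A brute-force sketch over the example tree above, for one variation unit, might look like the following; the tree encoding and variable names are mine, and a real implementation would use a dynamic-programming method such as Fitch's algorithm rather than enumerating every assignment.

```python
from itertools import product

# The copying tree implied by the example above, recorded as child -> parent.
parent = {
    "C1": "A0", "C2": "A0",
    "C3": "C2", "C7": "C3",
    "C6": "C1", "C4": "C1", "C5": "C4",
}

# Observed readings of the extant witnesses at this variation unit.
observed = {"C7": "V1", "C6": "V1", "C4": "V2", "C5": "V2"}

internal = ["A0", "C1", "C2", "C3"]        # the lost manuscripts to reconstruct
readings = ["V1", "V2"]

best_cost, best_root = None, None
for assignment in product(readings, repeat=len(internal)):
    state = dict(observed, **dict(zip(internal, assignment)))
    # Every edge where child and parent disagree represents one scribal change.
    cost = sum(state[child] != state[par] for child, par in parent.items())
    if best_cost is None or cost < best_cost:
        best_cost, best_root = cost, state["A0"]

print(best_root, best_cost)   # V1 at a cost of one change, the one C4 introduced
```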

An Implicit Agenda

It is worth stopping at this point to specifically note an agenda that has threaded through this essay and that I believe is its primary contribution. I believe that the errors are the key to solving the problem. Naturally it is easy to see why this is not the standard approach. Textual criticism has as its focus the reconstruction of the true text; therefore the true text is what receives the attention and the errors are largely ignored. In the field of data integration this approach has been proved erroneous[19], and I believe the same will prove true in the field of textual criticism.

As long as errors are considered to be a largely unstructured stream of noise then the process of eliminating them will also be a succession of arbitrary and generalized measures such as the canons that are currently employed. If instead the error stream is seen as a sophisticated and structured data stream that has been overlaid on top of the real text then there is some possibility of removing the errors in a similarly organized and scientific fashion.

Scientific error reduction will not be able to solve the genuinely ambiguous cases, but if we can at least solve some of the volumetric problems of New Testament criticism by eliminating those variants that are genuinely and cleanly solvable then we can direct our efforts to the really hard problems.

Error Classification

In the treatment of errors in this essay we have considered all errors to be mutations of the underlying text that occur in a random and independent fashion. Whilst this may be true of a very few errors it will not be true of the vast majority. The fact is that most of these errors will have been caused by something. This may be an error of the eye, of the ear, of memory or of writing.

Further, some changes to the underlying text may be errors from our perspective but intentional corrections from the perspective of the scribe in question. In fact the model gets much more complicated when we allow that some of the intentional corrections may actually have been successful, insofar as the original reading was restored. For example, if a word was copied incorrectly, rendering something that was not a legitimate word, the next scribe might easily have been able to uniquely ascertain what the original word must have been.

There are many texts that already cover the different error types in detail[20] so I shall not do so here. What I do wish to suggest is a method for computing the relative frequency of each type of error. The method is to use the singletons, for these we know to be errors. We can thus classify each singleton into its error class and simply count the relative frequency of each. Once we have the relative frequencies we have a much more accurate metric for reconstructing the original document. For example we currently have the canon: prefer the shorter reading. If we know the exact ratio of omissions to additions we will be able to compute the probability of an addition or an omission.
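A sketch of the frequency calculation follows. The error classes and tallies are invented solely to show how relative frequencies, once computed from classified singletons, turn a blanket canon into a quantified prior.

```python
from collections import Counter

# Each singular reading, once examined, is tagged with its suspected cause.
# These labels and tallies are invented purely for illustration.
classified_singletons = (
    ["omission"] * 120 + ["addition"] * 40 +
    ["substitution"] * 90 + ["transposition"] * 30
)

freq = Counter(classified_singletons)
total = sum(freq.values())
for error_class, count in freq.most_common():
    print(f"{error_class:14s} {count:4d}  {count / total:.2%}")

# With these invented figures a scribe omits three times as often as he adds,
# so an ambiguous add/omit variant would be judged an omission (and the longer
# reading preferred) with probability 120 / (120 + 40) = 0.75.
```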

Refining the Scribal Model

Whilst the above analysis allows us to come up with a scientific basis for the assumptions we wish to make, I believe we have still simplified the model to the point that most of those assumptions will often be wrong. The simplification we have made is to assume that scribes have behaved in a homogeneous manner. In our mathematical notation we have assumed that Ci = Cj for all i and j. A few moments' thought will tell us this is improbable: why would we expect scribes hundreds of miles and years apart to behave in an identical manner?

Whilst it is now probably too late to find out information about the actual scribes, we should be able to construct scribal profiles for the notional scribes that copied the manuscripts at the nodes we constructed in our document lineage. We can do this by computing the error frequencies for each individual manuscript. Then, as the family trees are reconstructed, we can compute the frequencies for each intermediate node. In this manner we have our believed profile for the error behavior of each scribe. Computing these numbers from known errors allows us to make statistically better calls on the errors the scribe is most likely to have made in the ambiguous cases. For example, if a given scribe is known to be three times more likely to omit than to add text, then in an ambiguous case the most likely error he would have made is an omission.

The other assumption we have made that is blatantly false is that the error distribution is independent of the underlying text. We would expect certain words or phrases to be more prone to certain forms of error than others. For example certain letters are more likely to be confused than others; similarly words that are close in spelling or are homonyms are more likely to be confused than others. We can obviously speculate as to which is more likely, but as we have ways of identifying erroneous cases we can also compute the probability of a given word or phrase being mutilated in a particular way.

The probabilities by scribe are complementary to the statistics on error frequency by word. The net result is that for any given variation we can compute the probable route by which the variation came about and therefore the most likely original version of the variant. Again it must be stressed that this is probability theory. By definition a statistical method implies an acceptable error rate and simply tries to reduce that rate. This notion is unpalatable in theology and therefore people will wish to use internal evidence to weigh the output of the manuscript weighting.

Textual Criticism through the Ages

It is perhaps a sign of our times that we are inclined to believe that we alone have had the idea of trying to come up with the best version of the underlying text. Although it goes unstated, we tend to think of past scribal copying procedures as ancient and primitive practices that were more prone to corrupt the underlying text than to improve it. Even in our assumption that earlier texts are better we imply that we believe the scribal process degraded rather than upgraded the underlying documents.

Whilst leaving aside for one moment the question of whether or not they were successful, we have to acknowledge that as early as 200 AD the concept of textual criticism was well understood. In fact Origen in his famous Hexapla[21] produced a parallel Bible with annotations that is not dissimilar to the type of apparatus we use today. It is true that this was an Old Testament apparatus, but we have to assume that at least some of the copies made of New Testament manuscripts were made by scribes who were motivated similarly to Origen.

This leads us to an interesting question. If these scribes sitting around in 300 AD were indulging in a form of textual criticism then what were the underlying canons that they were using? If the canons that we have today are actually true then is it not likely that the people living 1700 years nearer the time were well aware of them? And if they were aware of them then is it not probable that they would have applied those canons? And if they did then is it not actually the case that the opposite canons should be applied to the result of their output?

The current parlous state of Textual Criticism can be shown by reductio ad absurdum. Imagine a global holocaust that destroys all of our electronic media and leaves most of our towns in rubble. Now roll forward to 3000 AD, where believers are trying to produce an accurate New Testament. What they find are a 17th-century Textus Receptus and a 20th-century NA text. They note that over time words tend to drop out, readings become more convoluted, and so on. Thus they form a set of canons that are the opposite of those we have today and from this conclude that the TR is the nearest thing to a pure text that exists.

21st Century Textual Criticism

As I stated in the introduction I am not a textual critic. But I do have deep skills in the field of data linking, mining and error correction. Frankly my introduction to textual criticism has been fascinating, startling and somewhat depressing. It is probably because of the evolution of the subject and the discipline to which it belongs but from my perspective we have left as mysteries things that ought to have been solved. One of the most telling things one can do is examine lists of resources for textual critics[22]. There are literally hundreds of opinion articles available but the literal Greek text is not available, as far as I know, for even the five most important manuscripts. To put this in context the 40 best manuscripts in textual form would fit uncompressed onto a CD-ROM.

If the Lord does not come and future generations are to view us as anything other than another set of bungling scribes that introduce errors into the text then I believe there are some objective steps we need to take to move this discipline into the 21st Century.

  1. Get the important manuscripts encoded and available. This really isn't a lot of text considering the number of people and millions of dollars spent in this field.
  2. Get all the variants classified by error type. Where possible develop automated mechanisms to do this.
  3. Compute detailed statistics for error frequencies globally. Also error frequencies by underlying text contents.
  4. Using the data constructed above come up with a candidate tree for manuscript relationships.
  5. Compute statistical profiles for each of the notional scribes constructed by the tree.
  6. Compute a maximal-probability original Greek text based upon the data constructed.
  7. Compare this text to the various compiled editions out there.
  8. Let the theological battles begin upon the new factual and experimental basis.

The biggest step we need to make is more subjective. As a data professional I read the description of Codex Sangermanensis with something approaching awe. It simply states that this is a known copy of D2 and is therefore of no interest to the textual critic. The whole of textual criticism is based upon the activity of biblical scribes. Here is an opportunity to see precisely what the scribes got up to, and it is deemed of no interest. We need to move to a mode where we devote effort to intimately understanding the behavior of scribes over time and space as accurately as we can. Only then can we hope to reverse or accept their work, and so restore or use the inspired documents that God caused to be created.

Conclusion

God tells us that He does things that are not the way we would do them[23]. I do not understand why He has allowed the reconstruction of His word to be so difficult. I can almost sympathize with the emotion behind a KJV Only position that effectively asserts that God must have given us His word in an unambiguous form and that it therefore might as well have been the KJV. I can also sympathize a little with the approach that effectively asserts that if God only gave us His word approximately then maybe we should apply it approximately and not worry about the details. After all, if you study the differences in English translation that the major text types produce there is nothing of tremendous theological weight[24].

However, as I prayed in frustration and even anger about this, the thought came to me that perhaps the current situation is meant to act as a challenge and a warning. How precious is the Word of God to us? Do we read it enough, study it enough, and meditate upon it enough for its exact contents to really matter? Are we concerned enough about it to do the work to reconstruct what was there? Could it be that the situation we are in today arose because scribes of old didn't treat the scriptures with the same fanaticism that their Hebrew counterparts did? Are we guilty of a similarly laissez-faire attitude?

Only God truly knows the answer to these questions. But I challenge each one of us to ask ourselves if we fully appreciate the value the Bible has and if we act as if we believe what we believe.

Bibliography

KEY ARTICLES BY CITED MATERIALS

BOOKS

INTERNET REFERENCES

JOURNAL ARTICLES:

UNPUBLISHED SOURCES:
