Premises for the Identification of Handwriting
Written by Heidi H. Harralson & Larry S. Miller   

With respect to points of similarity in pattern-based evidence, a general misconception exists concerning fingerprint examination. There is no stipulated, statistically supported number of points of similarity for the identification of a fingerprint. The International Association for Identification Standardization Committee conducted a 3-year study and concluded that there was no basis for requiring a predetermined number of friction ridge characteristics in order to establish positive identification (Franck 1996).

There are individual practices of fingerprint technicians, departmental policies, and recommended criteria that call for certain levels of similarity to support positive conclusions: 6, 8, 10, 12, or even 14 points. But these are merely practices, policies, and recommendations. There is no universally accepted or statistically supported number of points of similarity that will serve to discriminate one fingerprint from all others.

In the field of chemistry there are analytical procedures, based on current knowledge of the chemical composition of compounds, that allow an analyst to advance along a program that progressively reduces the number of possible substances an unknown might be until a single substance remains. There is no point count, however, as the program of analysis varies with the substance being pursued.

Extensive DNA studies have accumulated a wealth of information from which the probability of two persons having the same DNA genotype may be calculated. The levels of probability so calculated are sufficiently low that, for practical purposes, the uniqueness of the genotype is accepted as a certainty.

In fields such as physics, chemistry, and biochemistry, the evidence being sought is thought of as being, for the most part, invariable. Because of its stability, it has been and will continue to be the same for an indeterminate length of time. If it can be found in sufficient quantity it can be identified with known standards, even after long periods. This, however, is not entirely true. The evidence is invariable only if certain conditions are controlled as they usually can be, particularly purity, temperature, and atmospheric pressure. Variables influencing handwriting cannot all be controlled.

The point-count policy, publicly attributed to fingerprint identifications and followed by its technicians, has other inherent errors in it. It implies that each characteristic of the print is of equal significance, so its frequency of occurrence is the same as for any other. This we know is not true for all fingerprint characteristics, but the empirical data has not been broadly studied to determine relative values of significance.

In situations of this kind, science looks closely at the weight to be attached to each of the factors being considered, something that frequencies of occurrence might sustain. Weight, however, is also contingent upon the independence of any factor from any other. Some writing characteristics may be related to the system of writing that was learned. Consequently, some letter designs may exhibit similarities because of a relationship in their background. Point counts, then, are not simple numbers, but are more complex calculations.

The magnitude of the error that a simple point count produces in fingerprint examinations, and the direction in which it may lie, are not known. It is assumed, however, that as long as the number of points of similarity is sufficiently large, ample allowance will be made for the inaccuracies that an arbitrary equalization of weights in assessments may generate.

Handwriting is not the same as the subjects of focus in these other fields. It is subject to variation from one occasion to the next, and even the range of variation differs with the individual and with the element of writing. Furthermore, writing must be read. It is not free, like DNA, to vary from one subject to another without impairing recognition. It must be reasonably legible in order to be readable. Then too, there are limitations to the manner in which some elements can be varied or changed. How many ways can one write a lowercase, cursive "e" or an "i"? Consequently, for some particular elements of writing, there is greater opportunity for writers to duplicate one another. For this reason, larger combinations of similarities may be required to support conclusions of identity.

Handwriting may never permit precise determinations to be made of the value or significance of any particular writing characteristic or the probability of occurrence level for any combination of characteristics. Nevertheless, approximations are possible that, when used perspicaciously, can be sufficiently reliable for identification purposes.

If then, the analogy to fingerprint identification is not cogent, what analogy can be made to a process that is reasonably reliable and publicly acceptable? In any civil or criminal litigation, the triers of fact take into account various elements of evidence, each having its own significance as a factor in determining the issues of guilt or responsibility. The assignment of significance to each factor is subjective and dependent upon the intelligence, training, and experience of each of the triers of fact, whether they be judge or jury.

The determination that is made is not based on any point count, but it is not less reliable for want of it. Perhaps, having given science and fingerprint identification too much credit for precision, the legal profession has been unaware that its process for reaching conclusions is, in principle, the same as that of the scientist. The court's own criterion for establishing the guilt of a person, beyond a reasonable doubt, discreetly avoids stipulating "with certainty" or even "beyond any doubt" (Huber 1972). McElrath and Bearman (1956) stated: "The scientist, too, never proves everything with certainty or beyond a doubt; the best he can hope to say is that he has established a fact beyond a reasonable doubt. The difference between the scientific and the legal situations is that the scientist has learned to calculate the probability of the doubt. This has been the contribution of statistics" (p. 589).

To summarize: (1) There is no minimum point count for fingerprint identification that is supported by statistics and (2) a point count cannot be applied as a primary determination in a matter where the factors being counted vary in their respective significance, without them being appropriately weighted.

Figure: One of the ransom letters attributed to Bruno Hauptmann in the Lindbergh baby kidnapping case. Note the awkward construction of words and letters as well as the German influence on the writing.

Relevant Information
There are at least two schools of thought on this subject. Some examiners maintain that to ensure impartiality and to avoid influence, one should not have any background information that might suggest a preferred conclusion or a conclusion consistent with that intimated by other evidence. They need know only what writing is questioned and what writings are known or are standards, and perhaps which standards may be doubtful as to their proof (Stangohr 1984).

On the other hand, there are those who argue that because handwriting is subject to the influence of many factors, from failing health, injury, and circumstances to intoxication, sufficient information regarding the writer and the writing occasion is required. This is in order that due allowances may be made for these conditions, if necessary. Only then can true differences be properly discriminated from natural or unnatural variations. The effect of arthritis on the act of writing is an example of a situation in which information about the writer's state of health could be important to the examiner.

The latter position is accompanied by certain risks. It is difficult for the providers of such information, who cannot avoid being prejudiced to some extent, to supply information that is wholly impartial. Conversations normally ensue, brief or lengthy, during which biased comments are difficult to preclude. If it were convenient to do so, and continuity of the evidence could be preserved without complication, a third party knowledgeable in forensic identification might receive the material and subsequently turn it over to the examiner. Necessary and appropriate information could then be solicited from the submitter, and prejudicial statements filtered out of the conversations when relayed to the examiner. Under these arrangements undue influence could be avoided.

Found and Ganas (2013) reported that the Document Examination Unit of the Victoria Police Forensic Services Department implemented modified procedures that manage and essentially filter irrelevant contextual information in handwriting examination casework. Bias and contextual information, and their potential influence in forensic casework, are covered in more depth in Chapter 10 concerning the causes of error.

Statistical Inference in the Identification Process
The task of drawing conclusions from data on hand (i.e., a number of standards) about other data (i.e., an unknown) is a matter of statistical inference. Statistical inference, that is, statistical proof, underlies all scientific investigations in some manner, and in order to be a scientific pursuit, the identification of handwriting must, knowingly or unknowingly, engage statistical proof. When one brings common sense to bear upon a problem, a mixture of experience and intuition is used. Inferential statistics employ a similar process, substituting data for experience and formulas for intuition. Hence, in practice, statistical methods require us to do, in a more formal and rigorous way, the things that are done, informally, countless times each day.

Whether definitive (i.e., positive) or qualified (i.e., something other than positive), any conclusion of identification derives from statistical inference and is an expression of probability having an arithmetic value somewhere between 0 and 1. In the vocabulary of probability, conclusions of absolute certainty have a value of 1.0 (Bergamini 1963). Matters that are totally improbable, that is to say impossible, have a value of 0.0. Any other conclusions, which include all those respecting handwriting, are matters of probability that lie somewhere in between. The use by handwriting examiners of statements of probability, commonly referred to as qualified opinions in reports or in testimony, has been debated for some time without a clear-cut decision as to their legitimacy (Hilton 1979; Cole 1980). However, proponents of qualified opinions have not attacked the issue from a legitimate platform of statistical inference (Cole 1962, 1964; Schmitz 1967; Duke 1980). On the other hand, opponents of probability statements have not attempted to review the principles underlying the identification process (Dick 1964; McNally 1978).

Taroni et al. (1998) reported Alphonse Bertillon's (1898) belief that only the application of statistical and correlation analysis to the examination of handwriting could justify the existence of the field. This was the substance of the criticisms of Kirk (1953) more than 50 years later. Osborn (1929) suggested a statistical basis to handwriting examination by the application of the Newcomb rule of probability: "The probability of occurrence together of all the events is equal to the continued product of the probabilities of all the separate events" (p. 226).

Osborn was endeavoring to demonstrate how a combination of similar writing habits in two handwritings would occur with the frequency ratio provided by multiplying together the respective frequencies of occurrence of each of the habits. Furthermore, if a sufficient number of habits were involved, the frequency ratio of their combined occurrence would be such that they could be possessed by only one person in a given population. Unfortunately, perhaps in his desire for simplicity, Osborn neglected to qualify the events in his statement of the Newcomb rule as independent events, which is the only condition under which the rule is valid. Nevertheless, the rule was adapted to constitute the principle of identification, with its application to document examination particularly in mind. Huber (1959) stated: "When any two items possess a combination of similar and independent characteristics, corresponding in relationship to one another, of such number and significance as to preclude the possibility of coincidental occurrence, without inexplicable disparities, it may be concluded that they are the same in nature or are related to a common source" (p. 276). For the application of the principle to be valid in any discipline, some data must be available to demonstrate the independence and the significance of the characteristics of the items under study.
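The Newcomb rule described above can be sketched numerically. In the following, the habit frequencies are hypothetical values invented for illustration; the point is only how quickly the combined frequency ratio shrinks when the habits are genuinely independent:

```python
from fractions import Fraction
from functools import reduce

def combined_frequency(ratios):
    """Newcomb rule: the probability of independent events occurring
    together is the continued product of their separate probabilities."""
    return reduce(lambda a, b: a * b, ratios, Fraction(1))

# Hypothetical frequencies of three independent writing habits
# (e.g., a looped "d", a short t-crossing, an open "o")
habits = [Fraction(1, 10), Fraction(1, 8), Fraction(1, 12)]
print(combined_frequency(habits))  # 1/960
```

Note that the product is valid only under the independence condition Osborn omitted: if two habits both derive from the same copybook system, their joint frequency is higher than the naive product suggests.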

A number of papers have been written over the years concerning the application of statistics, particularly Bayes' theorem, to handwriting examination. The subject is not new, however. An excellent account of earlier attempts to apply the theorem to writing cases, of which Bertillon's (1898) was one of the first, is found in the study by Taroni et al. (1998). Bayes' theorem allows the examiner or the triers of fact to take into account the relevant population of persons that circumstances and other evidence circumscribe in some practical fashion as encompassing the potential authors of a questioned writing. Thus, if a finding is not definitive, that is, it is a qualified opinion, these other factors may provide sufficient information to render the finding more definitive than it is.

Souder (1934) was one of the few and perhaps the first handwriting examiner to employ the likelihood ratio, a progeny of Bayes' theorem, to assess the evidence in the identification of writing: "In handwriting we do not have to push the tests until we get a fraction represented by unity divided by the population of the world. Obviously, the denominator can always be reduced to those who can write and further to those having the capacity to produce the work in question. In a special case, it may be possible to prove that one of three individuals must have produced the document. Our report, even though it shows a mathematical probability of only one in one hundred, would then irresistibly establish the conclusion" (pp. 683-684).

Once rebuffed and rejected, the Bayesian approach now has strong advocates for its use in forensic science. Aitken (1987) wrote that the Bayesian approach is the best measure available for evaluating evidence. Gaudette (1986), without identifying its Bayesian basis, wrote on the evaluation of associative physical evidence. Good (1985), a prolific writer on the topic, referred to the Bayesian measure as the weight of evidence.

Alford (1965), perhaps unwittingly, initiated the use of Bayes' theorem and the likelihood ratio in handwriting casework, on the strength of a paper read at the meeting of the American Academy of Forensic Sciences by Hilton (1958). Olkin (1958) explained that the likelihood ratio statistic is the ratio of the probability calculated on the basis of the similarities, under the assumption of identity, to the probability calculated on the basis of dissimilarities, under the assumption of nonidentity. Accordingly, the probability of identity in a population of five persons, on the strength of what Hilton calls the joint probability of three writing features, would be 1/5 to the power of 3, or 1/125. Then the probability of nonidentity would be 4/5 to the power of 3, or 64/125 (approximately 1/2). The ratio of "identity" to "nonidentity" in this case is 1/125 divided by 64/125 and equals 1/64. It is considered to be a measure of the likelihood of "chance coincidence." Hence, the smaller this fraction is, or the larger the denominator relative to a numerator of 1, the less likely is coincidence and the stronger the identification.

The likelihood ratio is a statistical means of testing a calculated value derived from a statistical sample. Relative to handwriting examinations, it is the means of determining whether the probability of identity and the probability of nonidentity are significantly different. In other contexts, the term odds is frequently used. To obtain odds, invert the likelihood ratio (determined in the previous example to be 1/64) and say that the odds favoring the identification of this subject are 64 to 1 (Evett 1983). Note that it is the likelihood ratio that is inverted to produce the odds, not the joint probability of a number of similarities, which in the example was 1/125.
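The worked example above can be restated in a few lines of exact arithmetic. The population of five writers and the three shared features come from the text; the computation merely reproduces the 1/125, 64/125, 1/64, and 64-to-1 figures:

```python
from fractions import Fraction

n_population = 5   # candidate writers circumscribed by other evidence
n_features = 3     # independent writing features shared by the two samples

# Joint probability of the three features under the assumption of identity
p_identity = Fraction(1, n_population) ** n_features              # 1/125

# Joint probability under the assumption of nonidentity
p_nonidentity = Fraction(n_population - 1, n_population) ** n_features  # 64/125

likelihood_ratio = p_identity / p_nonidentity   # 1/64, "chance coincidence"
odds = 1 / likelihood_ratio                     # 64 to 1 favoring identification

print(likelihood_ratio, odds)
```

As the text cautions, it is the likelihood ratio (1/64), not the joint probability (1/125), that is inverted to state the odds.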

Others have written on the consideration to be given to the question of relevant populations that may strengthen the findings of a writing examination. Kingston's (1989) view was that it was the role of the judge or jury, not the writing examiner, to apply the modification to the handwriting evidence that other evidence may justify. The question arises, however, as to who would be the most competent for the task: the judge, the jury, or the handwriting examiner?

Statistical inference is still an area of active research, as more recent studies illustrate. Marquis et al. (2011) applied multivariate likelihood ratios to the evaluation of the shape of handwritten characters and studied the Bayes factor in the assessment of handwriting features. Davis et al. (2012) studied subsampling to estimate the strength of handwriting evidence. The application of likelihood ratios to handwriting evidence was reported by Hepler et al. (2012), Taroni et al. (2012, 2014), and Tang and Srihari (2014).

Needless to say, the problem with establishing statistical probabilities in handwriting identification lies with the lack of meaningful interval or ratio measures of the features of handwriting construction. Obviously, one might establish parametric measures of angle, baseline, and proportions, but these often change within the same writer. In addition, not every examiner agrees on the weight associated with similarities or dissimilarities in writing features. A fundamental dissimilarity for one examiner may be interpreted as a natural variation or accidental movement by another examiner. Perhaps a more reasonable approach to statistical probabilities in handwriting identification is to use nonparametric procedures with nominal and ordinal level measurements. Much like the likelihood ratio and Bayesian approaches, nonparametric methods (e.g., the chi-square test) consider the absence or presence of certain features of the handwriting (nominal) and the range of measures from high to low (ordinal). A number of forensic committees, such as those within NIST (the National Institute of Standards and Technology, US Department of Commerce), are currently examining the utility of statistical probabilities in handwriting identification.
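A minimal sketch of the nonparametric approach follows. It computes the chi-square statistic for a 2x2 table recording the presence or absence of a single writing feature in two bodies of writing; the counts are invented purely for illustration, not drawn from any study:

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts: feature present/absent in 20 questioned-writing
# samples (18 present, 2 absent) vs. 20 known samples (7 present, 13 absent)
stat = chi_square_2x2(18, 2, 7, 13)
print(round(stat, 2))  # 12.91, well above the 3.84 critical value (df=1, p=0.05)
```

A statistic this large would suggest the feature's distribution differs between the two bodies of writing; in practice an examiner would weigh many features, and, as the text notes, examiners may not even agree on which features to count.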

Pattern recognition using computer algorithms, as in fingerprint identification with IAFIS (Integrated Automated Fingerprint Identification System), facial recognition, and handwritten letter-form matching (CedarFox), has proved successful in narrowing down a specimen through database comparison. But these systems are only as good as their databases allow and may not take into account handwriting simulations and forgeries. Also, humans continue to be better than machines at pattern recognition, through an ability to process minute visual constructs (Vida et al. 2016).

Logic and Reasoning Underlying Identification
The argument for the identification of handwriting is an inductive argument. An argument is inductive if it may be stated as "it is probable that …" Such arguments simply claim that their conclusions are reasonable to believe in view of the facts set forth as evidence. Deduction is a matter of recognizing valid logical forms, but induction is a matter of weighing evidence. Although inductive reasoning, unlike deductive reasoning, cannot be reduced to precise rules, there are certain important general principles that must be kept in mind.

Induction is a way of reasoning, as is deduction, yet quite different. Whereas deductive conclusions necessarily follow from their reasons, inductive conclusions are either (1) generalizations or (2) hypotheses. First, suppose a test group tastes a dozen Florida oranges, each of which is sweet. On the basis of this information it may be concluded that all Florida oranges are sweet. The conclusion goes beyond the reasons given. Twelve oranges were offered in evidence but the conclusion is more general than the evidence. It is a generalization.

The second kind of inductive argument leads to a hypothesis. It is the character of scientific investigation. It is not a statement about a class, but about an individual, an event, or a state of affairs. A man named "A" murdered Miss "B" and then shot himself, or "X" wrote the anonymous letter "Q." These are inductive conclusions drawn from evidence. The truth of these statements is not established by direct observation. They are hypotheses. In this kind of inductive argument, there are three elements:

1) A number of facts that are the data of the argument (e.g., "X" exhibits certain writing habits. There is a similarity between the writing habits of "X" and features of the writing "Q." There are no observable differences).
2) A hypothesis: that "X" wrote the anonymous letter "Q."
3) Certain generalizations connect the hypothesis with the facts:

a. Handwriting is unique to each individual.
b. Writing is habitual and, therefore, consistent from one execution to the next.
c. Differences in media or time will not significantly alter writing habits to preclude identification.
d. Differences between writers are such that, given sufficient writing standards, examiners can discriminate between the products of most writers.
e. A sufficient number of facts (similarities), in combination, affirms that the hypothesis is acceptable.

As any hypothesis should, this hypothesis derives its persuasiveness from its ability to account for all of the facts. This is the reason why differences or disparities observed in a handwriting comparison must be accounted for before conclusions as to identification can be reached. The underlying principle is clear. Assuming that the generalizations are true, every known fact that can be accounted for by the hypothesis is evidence that the hypothesis is true. While there may be facts that are accounted for by the hypothesis, and that, therefore, constitute evidence, when it comes to the question of whether to believe an inductive conclusion, the problem is more complex.

To begin, any particular fact (e.g., a similarity in a discriminating element in a questioned and known writing) may be an instance of many possible generalizations. For example:

1) In order to be read, any writing will resemble another in some respects.
2) Pupils of the same teacher will write alike.
3) All systems of writing that are taught are similar.

As mentioned earlier, a hypothesis accounts for facts by being connected to them through known generalizations. This is the way lawyers build their cases. It is the way a scientific theory grows and becomes accepted. But, no matter how convincing a hypothesis may be, it is always conceivable that a new discovery will cripple or destroy it. Thus, a hypothesis wears a tentative provisional air. It is accepted and acted upon, but only until a better one comes along (Beardsley 1954).

When there is such strong evidence for a hypothesis (e.g., that "X" wrote the anonymous letter "Q") that the examiner no longer fears (or hopes for) any further evidence that would be incompatible with it, it is said that the hypothesis has been proved. The practical question is, of course, at what point is the examiner justified in regarding the hypothesis as proved? In handwriting identification terms, that is asking how many or what kind of similarities are necessary to prove the hypothesis.

A simple and universal reply cannot be given, for many reasons. Still, there is a key principle that provides a rough estimate of the reliability of a hypothesis. Generally, a fact or a collection of facts can be accounted for by more than one hypothesis. The facts in a writing comparison matter, for example, may be accounted for by either of the hypotheses:

1) "X" wrote the anonymous letter "Q."
2) "X" was taught to write by the same teacher as "Y," that "X" has developed the same writing skill as "Y," that "X" was in the same locale as "Y" at the time "Q" was written, but that "Y" actually wrote the anonymous letter "Q."

Whenever a hypothesis is accepted as true, it is preferred to the alternative hypotheses, which may account for the same facts. Seldom is there one and only one hypothesis. To be reasonable, one must choose the best of a number of hypotheses. This is the root of the problem. How does one determine when one hypothesis is to be preferred to another? What makes it better?

One feature should always be considered when comparing alternative hypotheses to decide which is the more convincing. This is the simplicity of the hypothesis. Other things being equal (a qualification that covers a number of delicate considerations), the simpler of two alternative hypotheses is the preferable one.

Obviously, in the situation suggested above, hypothesis 2 requires one to suppose a longer chain of events than in hypothesis 1. Thus, hypothesis 1 is simpler. Since both may account for the same facts, it is more reasonable to believe 1 than 2. This is not to say with certainty that hypothesis 1 is true. It is an explanation of known facts, it is better than 2, and thus, would be preferred to it. The principle of simplicity is an important and helpful consideration that avoids the fallacy of unnecessary complexity (Beardsley 1954).

The Identification Process and Training
In the identification process, analysis and evaluation are the two aspects of the process that make formal training necessary, the personal presence of a competent teacher essential, and the accumulation of experience mandatory. One must acquire a knowledge of what to look for in writing and how to assess its significance. It is of even greater importance to those who aspire to become document examiners or handwriting experts to recognize that, because analysis and evaluation vary with the particular case material under examination, neither of these facets of the process can be learned completely from books. Evaluation, particularly, must be learned through training and experience. Sufficient empirical data has not yet been accumulated from which the probability of occurrence of particular writing habits may be calculated. Their significance must be subjectively judged on the strength of the experience of the examiner and his or her tutors. Thus, one's ability to judge the significance of a feature of writing is an element of one's competence that the self-taught novice has difficulty developing.

The trained mind conducts the more thorough and efficient analysis, seeking the more credible evidence, disregarding the trivial, unearthing that of which the lay mind is unaware. In comparisons, experience and familiarity enable one to make far more delicate and precise distinctions. But it is probably in the area of evaluation that knowledge, training, experience, and skill make their greatest contribution. What analysis has found and comparison has revealed, only proper evaluation can make useful.

Once this process of identification is understood and appreciated, progress may be made in identifying matters not within one's normal purview. Thus, problems involving Chinese or Eskimo writing may be intelligently tackled by persons who do not speak the languages. Typewriters may be identified by persons never employed in a typewriter factory. Printing methods may be differentiated by persons never engaged as printers. Counterfeit currency can be identified reliably by document examiners who never made a dollar.

Critiquing Handwriting Expertise
The profession of the document examiner, the handwriting examiner, or the handwriting expert has existed in North America for well over a century and undoubtedly longer in Europe. Its practitioners and their labors have been accepted as reliable, and their findings have been considered believable by the judiciary, the courts, and the layman for as many years, notwithstanding the fact that, as with many developing professions, there have been those within it whose services have not been of the highest caliber. The need for these services within the criminal justice and civil litigation systems, the remuneration that seemed warranted, and the absence of a standard to be met, were often the reasons that less qualified individuals were persuaded to become involved, and the process undoubtedly produced some errors.

With the passage of time and the growth of the profession, better methods of training, broader consultation and discussion, and the sharing of knowledge that stems from experience have evolved to furnish greater consistency in methods and more reliability in results. Nevertheless, until recently, an apprenticeship process has tended to underlie the training of neophytes in handwriting identification.

It is not surprising, then, that the question should be raised as to whether any empirical data exists to support the claim that handwriting specialists possess a handwriting expertise that laypersons do not normally acquire. Prior to 1990, there were only a very few studies examining the reliability of handwriting identification by document examiners, as reported by Risinger et al. (1989) after what they claimed to have been an exhaustive literature search.

There has been a response from researchers and forensic handwriting examiners to the criticisms raised by Risinger et al. (1989), Risinger and Saks (1996), Saks and VanderHaar (2005), Risinger (2007), and the NAS report (2009), among others, concerning the reliability of handwriting identification by examiners. Kam et al. (1994), in a study of small samples (seven professionals from a single source, the FBI Laboratories, and 10 nonprofessionals), refuted the null hypothesis suggested by Risinger et al. (1989) that there was no difference between professed handwriting examiners and laypersons in the examination of writing samples. Kaye (1994) commented on the shortcomings of the study and the difficulty of generalizing from the information provided by the limited sample size.

However, reliable and acceptable evidence has been published to dispute Risinger et al.'s disparaging indictments. Studies published in peer-reviewed journals demonstrate the ability of handwriting examiners to distinguish between genuine and simulated handwriting and to match handwriting samples (Kam et al. 1994; Kam et al. 1997; Found et al. 1999; Found et al. 2001b; Kam et al. 2001; Sita et al. 2002; Found and Rogers 2003; Kam and Lin 2003). Compiling the error rates from the published studies, handwriting examiner error rates were in the range of 0.04% to 9.3%. Some of the studies reported that handwriting examiners exhibited more skill than laypersons in performing handwriting identification (layperson error rates were in the range of 26.1% to 42.86%).

The research published thus far has demonstrated handwriting examiner reliability on tasks involving both handwriting and signatures, including detection of simulations produced by expert penmen (Dewhurst et al. 2008). Proficiency in evaluating disguised and simulated signatures and/or handwriting was researched by Found and Rogers (2008), Al-Musa Alkahtani et al. (2010), and Bird et al. (2010a, 2010b, 2011, 2012). The nature of handwriting examiners' authorship opinions was evaluated over a 5-year period of blind validation trials (Found and Rogers 2008). Dewhurst et al. (2014) looked at the effects of motivation on the behavior of lay subjects participating in handwriting trials. The research in this area helps to target problematic areas that can be corrected by training and testing, but also alerts the trier of fact to the limitations of the field. These studies have helped to define when caution needs to be exercised in expressing handwriting opinions. More research is required and is ongoing with respect to problematic handwriting tasks. Until specific research is conducted on special tasks, and experts are proficiency tested on a wide variety of difficult or special tasks, the Kumho Tire ruling that advises expert acceptance based on a review of the "task at hand" is prudent (Risinger 2000).

There may be some examiners who feel that in studies of this kind the handwriting examiner must achieve near-perfect results, whereas the layperson should be expected to achieve only the level of chance. This may explain the reluctance of some to submit to such studies and risk a reflection on their competence and/or on the discipline at large. A properly conducted study designed to confirm or dispute the allegations of Risinger et al., however, needs only to reveal a statistically significant difference between the scores of the professionals and those of the laypersons to support the hypothesis that handwriting expertise indeed exists.
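The statistical logic of this point can be sketched with a two-proportion z-test. The error rates below are taken from the most conservative ends of the ranges reported earlier in this section (9.3% for examiners, 26.1% for laypersons); the sample sizes are hypothetical, chosen only for illustration, and are not drawn from any of the cited studies.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled estimate under the null hypothesis of no difference."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Illustrative comparison: 9.3% examiner errors vs. 26.1% layperson errors,
# assuming a hypothetical 100 trials per group.
z = two_proportion_z(0.093, 100, 0.261, 100)
print(round(z, 2))  # |z| > 1.96 corresponds to p < 0.05 (two-tailed)
```

Even with the least favorable published examiner error rate paired against the most favorable layperson rate, the difference exceeds the conventional 1.96 threshold, which is all the hypothesis of expertise requires.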


Aitken, C. G. G. (1987). The use of statistics in forensic science. Journal of the Forensic Science Society, 27(2), 113-115.

Al-Musa Alkahtani, A. (2010). The ability of forensic handwriting examiners to judge the quality of signature simulations in an unfamiliar writing system. Journal of the American Society of Questioned Document Examiners, 13(2), 65-69.

Alford, E. A. (1965). Identification through comparison of numbers. Identification News, July, pp. 13-14.

Beardsley, M. C. (1954). Thinking straight: A guide for readers & writers. New York: Prentice-Hall.

Bergamini, D. (1963). Mathematics. New York: Time.

Bertillon, A. (1898). La comparaison des écritures et l'identification graphique. Paris: Revue Scientifique.

Bird, C., Found, B., Ballantyne, K., & Rogers, D. (2010b). Forensic handwriting examiners' opinions on the process of production of disguised and simulated signatures. Forensic Science International, 195(1), 103-107.

Bird, C., Found, B., & Rogers, D. (2010a). Forensic document examiners' skill in distinguishing between natural and disguised handwriting behaviors. Journal of Forensic Sciences, 55(5), 1291-1295.

Bird, C., Found, B., & Rogers, D. (2012). Forensic handwriting examiners' skill in detecting disguise behavior from handwritten text samples. Journal of Forensic Document Examination, 22, 15-23.

Bird, C., Stoel, R. D., Found, B., & Rogers, D. (2011). Skill characteristics of forensic handwriting examiners associated with simulated handwritten text. Journal of the American Society of Questioned Document Examiners, 14(2), 29-34.

Cole, A. (1962). Qualified vs. no conclusion reports. Identification News, 12(4), 6-7.

Cole, A. (1964). Qualifications in reports and in testimony. Paper presented at the meeting of the American Society of Questioned Document Examiners, Denver, CO.

Cole, A. (1980). The search for certainty and the uses of probability. Journal of Forensic Sciences, 25(4), 826-833.

Davis, L. J., Saunders, C. P., Hepler, A., & Buscaglia, J. (2012). Using subsampling to estimate the strength of handwriting evidence via score-based likelihood ratios. Forensic Science International, 216 (1-3), 146-157.

Dewhurst, T., Found, B., & Rogers, D. (2008). Are expert penmen better than lay people at producing simulations of a model signature? Forensic Science International, 180(1), 50-53.

Dewhurst, T. N., Found, B., Ballantyne, K. N., & Rogers, D. (2014). The effects of extrinsic motivation on signature authorship opinions in forensic signature blind trials. Forensic Science International, 236, 127-132.

Dick, R. M. (1964). Qualified opinions in handwriting examinations. Paper presented at the meeting of the American Society of Questioned Document Examiners, Denver, CO.

Duke, D. M. (1980). Handwriting and probable evidence. Paper presented at the meeting of the International Association for Identification, Ottawa, ON, Canada.

Evett, I. W. (1983). What is the probability that this blood came from that person? A meaningful question? Journal of the Forensic Science Society, 23(1), 35-39.

Found, B., & Rogers, D. (Eds.). (1999). Documentation of forensic handwriting comparison and identification method: A modular approach. Journal of Forensic Document Examination, 12, 1-68.

Found, B., Rogers, D., & Herkt, A. (2001b). The skill of a group of document examiners in expressing handwriting and signature authorship and production process opinions. Journal of Forensic Document Examination, 14, 15-30.

Found, B., & Rogers, D. (2003). The initial profiling trial of a program to characterize forensic handwriting examiners' skill. Journal of the American Society of Questioned Document Examiners, 6, 72-81.

Found, B., & Rogers, D. (2008). The probative character of forensic handwriting examiners' identification and elimination opinions on questioned signatures. Forensic Science International, 178(1), 54-60.

Found, B., & Ganas, J. (2013). The management of domain irrelevant context information in forensic handwriting examination casework. Science & Justice, 53(2), 154-158.

Franck, F. E. (1996). Objective standards: Fingerprint identifications vs. handwriting identifications. Paper presented at the meeting of the American Society of Questioned Document Examiners, Washington, DC.

Gaudette, B. D. (1986). Evaluation of associative physical evidence. Journal of the Forensic Science Society, 26(3), 163-167.

Good, I. J. (1985). Weight of evidence: A brief survey. In J. M. Bernardo, M. H. DeGroot, D. V. Lindley, & A. F. M. Smith (Eds.), Bayesian statistics 2: Proceedings of the second Valencia international meeting, September 6-10, 1983 (pp. 249-270). Amsterdam: Elsevier.

Hepler, A.B., Saunders, C.P., Davis, L.J., & Buscaglia, J. (2012). Score-based likelihood ratios for handwriting evidence. Forensic Science International, 219(1-3), 129-140.

Hilton, O. (1958). The relationship of mathematical probability to the handwriting identification problem. In R. A. Huber (Ed.), Questioned Documents in Crime Detection: Proceedings of the R.C.M.P. Crime Detection Laboratories Seminar no. 5 held at Ottawa, Oct. 27-Nov. 1, 1958 (pp. 121-130). Ottawa: Queen's Printer.

Hilton, O. (1979). Is there any place in criminal prosecutions for qualified opinions by document examiners? Journal of Forensic Sciences, 24(3), 579-581.

Huber, R. A. (1959). Expert witnesses: In defence of expert witnesses in general and of document examiners in particular. Criminal Law Quarterly, 2(3), 276-295.

Huber, R. A. (1972). The philosophy of identification. Royal Canadian Mounted Police Gazette, 34(7/8), 8-14.

Kam, M., Wetstein, J., & Conn, R. (1994). Proficiency of professional document examiners in writer identification. Journal of Forensic Sciences, 39(1), 5-14. doi: 10.1520/JFS13565J.

Kam, M., Fielding, G., & Conn, R. (1997). Writing identification by professional document examiners. Journal of Forensic Sciences, 42(5), 778-786.

Kam, M., Gummadidala, K., Fielding, G., & Conn, R. (2001). Signature authentication by forensic document examiners. Journal of Forensic Sciences, 46, 884-888.

Kam, M., & Lin, E. (2003). Writer identification using hand-printed and non-hand-printed questioned documents. Journal of Forensic Sciences, 48(6), 1391-1395.

Kaye, D. H. (1994). Commentary on "Proficiency of professional document examiners in writer identification." Journal of Forensic Sciences, 39(1), 5-14.

Kingston, C. (1989). A perspective on probability and physical evidence. Journal of Forensic Sciences, 34(6), 1336-1342.

Kirk, P. L. (1953). Crime investigation: Physical evidence and the police laboratory. New York: Interscience.

Marquis, R., Bozza, S., Schmittbuhl, M., & Taroni, F. (2011). Handwriting evidence evaluation based on the shape of characters: Application of multivariate likelihood ratios. Journal of Forensic Sciences, 56(1), S238-S242.

McElrath, G. W., & Bearman, J. E. (1956). Scientific method, statistical inference, and the law. Science, 124(3222), 589-590.

McNally, J. P. (1978). Certainty or uncertainty. Paper presented at the meeting of the International Association of Forensic Sciences, Wichita, KS.

NAS (National Research Council-National Academy of Sciences). (2009). Strengthening forensic science in the United States: A path forward. Washington, DC: National Academies Press.

Olkin, I. (1958). The evaluation of physical evidence and the identity problem by means of statistical probabilities. Paper presented at the meeting of the American Academy of Forensic Sciences, Cleveland, OH.

Osborn, A. S. (1929). Questioned documents (2nd ed.). Albany, NY: Boyd Printing.

Risinger, D. M. (2000). Defining the 'Task at Hand': Non-science forensic science after Kumho Tire v. Carmichael. Washington & Lee Law Review, 57, 767-800.

Risinger, D. M. (2007). Cases involving the reliability of handwriting identification expertise since the decision in Daubert. Tulsa Law Review, 43(2), 477-595.

Risinger, D. M., Denbeaux, M. P., & Saks, M. J. (1989). Exorcism of ignorance as a proxy for rational knowledge: The lessons of handwriting identification "expertise." University of Pennsylvania Law Review, 137(3), 731-792. doi: 10.2307/3312276.

Risinger, D. M., & Saks, M. J. (1996). Science and nonscience in the courts: Daubert meets handwriting identification expertise. Iowa Law Review, 82, 21.

Saks, M. J., & VanderHaar, H. (2005). On the "general acceptance" of handwriting identification principles. Journal of Forensic Sciences, 50(1), 119-126.

Schmitz, P. L. (1967). Should experienced document examiners write inconclusive reports? Paper presented at the meeting of the American Society of Questioned Document Examiners, San Francisco, CA.

Sita, J., Found, B., & Rogers, D. K. (2002). Forensic handwriting examiners' expertise for signature comparison. Journal of Forensic Sciences, 47(5), 1117-1124.

Souder, W. (1934). The merits of scientific evidence. Journal of the American Institute of Criminal Law and Criminology, 25, 683-684.

Stangohr, G. R. (1984). Elusive and indeterminate results. Paper presented at the meeting of the American Society of Questioned Document Examiners, Nashville, TN.

Tang, Y., & Srihari, S. N. (2014). Likelihood ratio estimation in forensic identification using similarity and rarity. Pattern Recognition, 47(3), 945-958.

Taroni, F., Champod, C., & Margot, P. (1998). Forerunners of Bayesianism in early forensic science. Jurimetrics, 38(2), 183-200.

Taroni, F., Marquis, R., Schmittbuhl, M., Biedermann, A., Thiéry, A., & Bozza, S. (2012). The use of the likelihood ratio for evaluative and investigative purposes in comparative forensic handwriting examination. Forensic Science International, 214(1-3), 189-194.

Taroni, F., Marquis, R., Schmittbuhl, M., Biedermann, A., Thiéry, A., & Bozza, S. (2014). Bayes factor for investigative assessment of selected handwriting features. Forensic Science International, 242, 266-273.

Vida, M. D., Nestor, A., Plaut, D. C., & Behrmann, M. (2016). Spatiotemporal dynamics of similarity-based neural representations of facial identity. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1614763114.
