Summary of Topics
A Daubert Hearing for Fingerprint Identification
Scientific Evidence And The Expert Examiner.
In the 1990s the case of Daubert v. Merrell Dow Pharmaceuticals was a defining crossroads regarding the introduction of scientific evidence in federal courts. The decision required that both the science and the scientist meet specific criteria to ensure the validity of the testimony: the science itself must follow basic scientific protocol, and the scientist must be trained to competency. Ultimately, the judge is the gatekeeper of the court, and the judge now has new tools with which to evaluate evidence.
These new rules have created a needed division between science and non-science or pseudo-science.
Daubert: Scientific Evidence And The Expert Examiner.
Five main considerations are relevant to scientific evidence in federal court:
1. Whether the scientific theory/technique can and has been tested.
2. Whether the theory/technique has undergone peer review/publication.
3. What the known or potential rate of error is.
4. Existence and maintenance of standards controlling the technique’s operation.
5. Whether the theory/technique has gained general acceptance in the scientific community.
Fingerprint Identification
Fingerprint comparison science, Dactyloscopy, is a forensic science. However, the designation “forensic science” does not separate forensics from the true meaning of science. The definition of forensic science is simply “the application of science to law.” (Saferstein) Fingerprint identification utilizes many scientific disciplines such as biology, genetics, embryology, and statistics.
A general definition of a fingerprint specialist is a “practitioner of fingerprint science, with specific applied skill.” The scientific aspects are fingerprint identification’s foundations in the various scientific fields such as biology, embryology, chemistry, and statistics. The art aspect, as with many of the sciences, is the “specific applied skill.” Science itself can be divided into three main categories: theory, research, and application. Each of these areas utilizes a blend of pure scientific method and specific applied skill. Fingerprint identification has its conceptual foundation and its application in these areas.
The concept of (individualized) identification itself is based on the familiar fact that nature does not repeat itself. “Nature exhibits an infinite variety of forms.” (Quetelet) Every tangible object or person is different in innumerable ways. The closer something is examined, the more differences one will find. This difference, or detail, is information about the subject’s uniqueness. This is called the Principle of Individualization. The concept can be taken down to the atomic level, where Heisenberg’s Uncertainty Principle applies; this is the point at which uniqueness in form and concept ceases to have informational value.
Fingerprint identification is a comparison of friction skin or impressions of friction skin for purposes of evaluating the similarities and non-similarities of the permanent characteristics contained therein. Multiple impressions of the same area of friction skin will contain the same information within the spatial relationships of the characteristics themselves that have been reproduced. Fingerprint identification, specifically individualization, is based on this principle and can best be defined with the following premises:
- Friction skin ridge detail is unique and permanent, from birth until decomposition after death.
- The arrangement of this detail is also unique and permanent.
- Provided sufficient detail is present in the impressions, identification is possible.
In his book published in the early 1890s, Francis Galton presented his research on the permanence of the unique details found in friction skin. At that time permanence was inferred from studies of friction skin over decades. Sir William Herschel also studied the permanence of friction skin in the early 1890s. Today, over a century later, fingerprint specialists have the opportunity to study friction skin’s details of uniqueness and permanence over the entire lives of individuals. The discovery of permanence in the unique details used for identification was the cornerstone of the developing science of Dactyloscopy.
Limitations regarding the identification principle would likely be found in the lack of information available for comparison. Without sufficient information with which to compare, neither exclusion nor individualization can occur. Theoretically, it is not possible to collect or evaluate all the information on any tangible item. Yet, provided that sufficient detailed information is present, identification is possible.
Generally, fingerprint impressions, including both exemplar and latent, yield much more information than is statistically necessary for conclusive individualization of the impression’s source. This total information is not necessarily limited to the Galton characteristics of ending ridges, bifurcations, and dots. Other types of characteristics may also be present, including general patterns, ridge flow, and general ridge characteristics such as ridge edge structure and pores.
In 1973 the International Association for Identification (United States and Canada) eliminated the practice of fingerprint point or characteristic minimum number standards; Great Britain followed in 2000. It was long realized that the foundation of minimum standards based on Galton points alone was not logical within a holistic process. The very concept of identification must potentially be based on all the information available, not just one aspect of that information. The establishment of minimum point requirements was simply an effort to eliminate any debate on the issue. Edmond Locard’s 1914 statistical study on Galton points set the foundation for most established point minimums. The countries that do establish point minimums range from 7 points in Sweden to 16-17 in Italy. (Champod)
Testing
There are three testing components to fingerprint identification. The first is testing of the scientific concept of fingerprint identification. The second is the testing of two individual fingerprint impressions by means of comparison of unique characteristics. The third is the verification process in which a qualified peer reexamines the identification and conducts a second comparative test to ensure the accuracy of the identification.
With regard to the scientific testing of fingerprint identification, as with most statistical studies, it is not feasible or practical to take a concept to its ultimate test in order to figure reasonable odds for specific components of that test. To fingerprint every person in the world (almost 7 billion) and compare friction skin is certainly not practicable, nor is it necessary. Fortunately, the sample size of the world’s fingerprint databases is extremely large, with the Federal Bureau of Investigation having one of the largest at about 270 million cards representing 2.7 billion fingerprints, plus additional palm and foot print impressions.
A test by Lockheed Martin Corporation and the FBI in the late 1990s showed that fingerprints can be reliably and statistically matched to each other provided sufficient information is available. This was applicable both to rolled fingerprints and to partial fingerprints representing crime scene latent fingerprints. In the test, 50,000 prints were compared against an additional 50,000 prints taken from the Federal Bureau of Investigation’s exemplar files. The compared prints were all of a single pattern type, to further test the threshold for identification by eliminating the obvious fingerprint pattern type differences. Scores were then calculated from the information available from the comparisons. One test used rolled impressions of the last joint of the finger, and a second test utilized just a fraction of the original impression. The results showed a high differentiation between identically sourced and non-identically sourced fingerprints, even for partial prints utilizing only a fraction of the information. The test also inadvertently discovered multiple duplicate fingerprints entered into the database; that is, fingerprints from a single individual had multiple entries in the test group.
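The general shape of such a study can be illustrated with a toy simulation: comparison scores for same-source (mated) pairs should cluster well away from scores for different-source (non-mated) pairs. The Gaussian score model, sample sizes, and threshold below are illustrative assumptions, not the Lockheed Martin/FBI protocol.

```python
# Toy sketch of a mated vs. non-mated score study (illustrative only;
# the scoring model here is an assumption, not the actual FBI method).
import random

random.seed(42)

def comparison_score(mated: bool) -> float:
    """Simulate a similarity score: same-source pairs cluster high,
    different-source pairs cluster low (Gaussian toy model)."""
    return random.gauss(0.9, 0.05) if mated else random.gauss(0.1, 0.05)

mated_scores = [comparison_score(True) for _ in range(10_000)]
nonmated_scores = [comparison_score(False) for _ in range(10_000)]

threshold = 0.5  # hypothetical decision threshold
false_negatives = sum(s < threshold for s in mated_scores)
false_positives = sum(s >= threshold for s in nonmated_scores)

print(f"lowest mated score:      {min(mated_scores):.3f}")
print(f"highest non-mated score: {max(nonmated_scores):.3f}")
# With this much separation, both error counts come out zero.
print(f"false negatives: {false_negatives}, false positives: {false_positives}")
```

With well-separated score distributions, the two populations do not overlap at the threshold, which is the kind of “high differentiation” the test described above reported.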
A more practical test of fingerprint identification’s accuracy is performed on a regular basis in many fingerprint identification bureaus during training exercises. This involves the individualization of known persons by examiners in training. When a computer database is utilized for search and comparison results, the candidates offered for comparison are generated from a database that may contain millions of fingerprints. Similar to part of the Lockheed Martin test, these searched fingerprints often represent only a small fragment of a fingerprint or of palmar friction skin. Biometric companies frequently test the accuracy of their equipment using known fingerprints. This demonstrates the fundamental effectiveness and validity of the fingerprint identification process.
Since a database may contain many millions of fingerprint files, the only logical explanation for these matches is the valid concept of fingerprint identification, Dactyloscopy. Random matching would fail to produce the many thousands of matches, as a random hit, according to statistical studies, would entail billions-to-one odds, depending on the size and quality of the database. Modern automated fingerprint identification systems (A.F.I.S.) have accuracy rates of about 95%, meaning the computer is able to place the correct match in its candidate list about 95% of the time. The remaining 5% does not indicate a false match, but rather that matching prints were not found in the candidate list after a search. See Chapter 11 for more information. Of the various automated biometric identification processes, friction skin individualization is the most accurate.
The actual comparison process for fingerprint identification is known as the ACE-V process and has been outlined in detail by D. Ashbaugh in the publication Quantitative-Qualitative Friction Ridge Analysis (Ashbaugh 1999). While not every agency and examiner follows the exact terminology of the ACE-V methodology, it has been shown in training classes that the underlying concepts of ACE-V are universal in formalized comparative methodologies. Fingerprint identification is based on various aspects of scientific disciplines, such as biology and statistics. While subjectivity is a component of the comparative analysis of qualitative aspects of friction skin impressions, this does not ultimately discredit the process. Subjective aspects are invariably found throughout science, especially within the more complex biology-related subjects. Quality levels are qualitatively analyzed using comparative data based on the experience of the examiner. The ACE-V process is analogous to the scientific method. According to Pat Wertheim, a renowned United States latent print examiner, ACE-V follows the scientific method’s outline of observation, hypothesis, testing, conclusion, and reliable predictability. (Langenburg)
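As a rough illustration of that analogy, the ACE-V stages and the three possible conclusions discussed later in this section can be modeled as a small pipeline. This is a minimal sketch: the stage names and conclusions come from the text, while the feature sets, sufficiency flag, and agreement logic are hypothetical simplifications.

```python
# Minimal sketch of ACE-V as a pipeline. Stage names and the three possible
# conclusions follow the text; the data model and checks are hypothetical.
from enum import Enum

class Conclusion(Enum):
    INDIVIDUALIZATION = "individualization"
    EXCLUSION = "exclusion"
    INCONCLUSIVE = "insufficient detail"

def ace(latent: set, exemplar: set, sufficient_detail: bool) -> Conclusion:
    """Analysis, Comparison, Evaluation: form a provisional hypothesis."""
    if not sufficient_detail:            # Analysis: too little information
        return Conclusion.INCONCLUSIVE
    if latent - exemplar:                # Comparison: unexplained differences
        return Conclusion.EXCLUSION
    return Conclusion.INDIVIDUALIZATION  # Evaluation: characteristics agree

def ace_v(latent: set, exemplar: set, sufficient_detail: bool) -> Conclusion:
    """Verification: a second qualified examiner independently repeats ACE;
    the provisional hypothesis stands only if both conclusions agree."""
    first = ace(latent, exemplar, sufficient_detail)
    second = ace(latent, exemplar, sufficient_detail)  # stand-in for examiner 2
    return first if first == second else Conclusion.INCONCLUSIVE

# Usage: two impressions sharing the same characteristics in the same places.
latent = {"bifurcation@(10,14)", "ridge_ending@(22,31)", "dot@(5,9)"}
print(ace_v(latent, exemplar=set(latent), sufficient_detail=True))
# -> Conclusion.INDIVIDUALIZATION
```

The point of the structure is that verification is not an afterthought: the final conclusion is only as strong as the agreement between independent applications of ACE.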
Biometric companies, including Automated Fingerprint Identification System (A.F.I.S.) vendors, also test on a regular basis to determine the effectiveness of their latest system developments. Any deviation from the expected and established concepts of fingerprint identification would be of considerable interest to fingerprint examiners. No research has noted a defect in the theory of biological uniqueness or its sub-field, Dactyloscopy.
“Although the earlier statistical models of random placement of ridge detail do not take into account all features of a fingerprint, each latent search being conducted on the A.F.I.S. system is attempting to disprove the theory of fingerprint individuality.” (Clark, 2002) “At least 1000 latent searches are conducted each day. This equates to 38.8-80 billion searches a day….” (when all factors are considered). (Clark, 2002) The overall accuracy of fingerprint identification has been demonstrated to be a product of the complete identification methodological process, including verification of an examiner’s conclusion. While blind proficiency tests are not a routine part of all fingerprint examiner training, blind testing is thought to be a useful training tool. Yet with regard to methodological accuracy rates, it has not been shown to affect accuracy and reliability, as it is not representative of the entire ACE-V process.
Fingerprint Identification and Peer Review
The scientific field of forensic identification has a worldwide organization, the “International Association for Identification,” which produces the bimonthly “Journal of Forensic Identification.” This organization and its publication include research on fingerprint identification. The general forensic science organization, the “American Academy of Forensic Sciences,” also has a publication, the “Journal of Forensic Sciences,” that likewise includes information and research on fingerprint identification. Research on fingerprint identification dates back over 115 years.
Numerous other publications exist including such books as:
- Quantitative-Qualitative Friction Ridge Analysis (Ashbaugh 1999)
- The Science of Fingerprints: Classification and Uses (U.S. Dept. of Justice, Rev. 1984)
- Advances in Fingerprint Technology (Lee, Gaensslen, 1991)
Peer review in general also includes the verification process, which is a standard part of the comparison methodology. It is not uncommon for a fingerprint comparison to be reviewed by experts from a different agency. The verification process can include expert hypothesis review by persons from local, state, and federal agencies as well as international organizations.
The verification process itself is typically a complete reanalysis of the stated hypothesis of individualization. While some critics have stated that the verification process is not blind, it is rarely practical or logical to have a completely blind verification, because the search aspect of the comparison process is very time consuming. It would not be feasible to replicate this search process for each verification performed. It is important to note that when utilizing a computerized database search, the computer, using a special algorithm, generates a candidate list of subject fingerprints or palm prints for the examiner to compare. It would not be feasible to search the entire database [of millions] in order to verify a match made utilizing this information, nor would it be necessary for an expert to re-check all other subjects, past or present, submitted for comparison in order to verify a current hypothesis of individualization.
Fingerprint Identification: Known Rate of Error
“Two types of error are involved (in fingerprint identification): Practitioner error and the error of the science of fingerprints. …nobody knows exactly how many comparisons have been done and how many people have made mistakes… (In regard to the error of the science) the error rate for the science itself is zero.” (Wertheim) This notion is founded on two main principles: the first is biological uniqueness, and the second is sufficient information, which in turn supports the very concept of uniqueness. Provided the examiners are trained to competency and established methodology is followed, practitioner errors can be minimized. “There are only three conclusions a latent print examiner can come to when comparing two prints: Identification, Elimination, or Insufficient detail to determine identification. (With regard to the relevant conclusions of Identification) … the science allows for only one correct answer, and unless the examiner makes a mistake, it will be the correct answer.” (Wertheim) Human error does not invalidate the scientific process, but it is where expert examiners can arrive at different conclusions. A faulty analysis of available latent print information can skew a provisional hypothesis, and in rare events sequential faulty analysis can render false positives and false negatives in the comparison process. (Also see: Non-Specificity and Within Expert Problem Solving in Forensic Comparison, Academia.com)
To understand the rates of error involved in Dactyloscopy, one must first understand the process involved. A provisional hypothesis is the result of applying ACE alone. ACE-V hypotheses are the peer-reviewed conclusions of “Individualization” or “Exclusion.” The ACE-V result of “inconclusive” is simply a statement that insufficient information is available to render a conclusion.
The ACE-V process is the formally acknowledged process for comparative evidence submissions in federal court. Rates of error for this process must be calculated from results of the complete and proper application of the ACE-V methodological process, not just ACE. Furthermore, it is a standard requirement of expert testimony that the expert be truly competent in the field in which they are to testify. Competency is paramount in any formalized comparative analysis. Thus, to calculate the rate of error for a practitioner using the ACE-V methodology, the following requirements must be considered in order for a hypothesis to have any scientific value.
ACE-V Error Rate Calculation Requirements.
A. Original examiner must be trained to competency.
B. A proper comparison methodology must be used.
C. Sufficient information must be present for comparison.
D. A provisional hypothesis must be applicable only to relevant comparison information.
E. Verification of the provisional hypothesis must be made by an examiner who is experienced and trained to competency.
F. Relevant information regarding the process and results must be documented.
Forensic science is the application of science with respect to established law. Ultimately, the goal of forensic individualization is twofold: first, that all legal casework reports utilize the proper application of the specific methodology, including peer review; second, that the methodology be applied by properly trained and competent examiners.
Except for training and process improvement purposes, it makes little sense to consider errors that are the result of the improper application of the established methodology. Unfortunately, this is a common error in itself. An example would be treating test scores as representative of the scientific process. Normal testing, including many blind studies, does not test the complete process. Test takers are frequently prone to guessing at answers, as a correct guess helps their test score, whereas a question left blank counts as an error. Students, as well as under-trained examiners whose level of competency is insufficient for the proper application of the scientific methodology, are another source of errors in provisional hypotheses. Due to the complex nature of forensic comparison, even competent examiners must be peer reviewed to help ensure human errors are kept to an absolute minimum. Thus, it is helpful to think of ACE-V as the “ACEV” process, in that the methodology is a complete, unhyphenated whole. Simply put, comparative analysis requires verification.
The following calculation allows for a practical estimation of overall ACE-V application errors per agency. Since agencies or labs work as groups in the required verification aspect, only a group error rate can be calculated, even though the ultimate task of the court is to determine whether the particular case at issue is accurate.
Error Calculation Of Known ACE-V Individualization Error Rates Per Agency or Laboratory.
A = Total ACE-V comparisons per agency per year.
B = Total ACE-V Process application individualization errors.
B/A = C (total ACE-V individualization errors in relation to the methodological applications)
This will provide a practical error rate that takes into account the real-world application of the ACE-V process. While it may be said that a properly applied methodology will yield the correct answer, ultimately we would like to estimate the practical application error rate of ACE-V itself. Most clerical errors that may cause identification documentation problems are sorted out and corrected at the verification level. In some cases, two verification steps are utilized; this second step, often called administrative verification or sequential analysis, is a quality control measure to ensure that the methodology was applied correctly, that the hypothesis was correct, and that related protocols were followed.
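As a worked example of the calculation above, the sketch below computes C = B/A for one agency-year. Both counts are hypothetical placeholders, not real agency figures.

```python
# Worked example of the per-agency calculation above (C = B/A).
# Both counts are hypothetical placeholders, not real agency figures.
total_comparisons = 120_000     # A: total ACE-V comparisons per agency per year
individualization_errors = 2    # B: ACE-V process individualization errors

error_rate = individualization_errors / total_comparisons  # C = B/A
print(f"Agency ACE-V error rate: {error_rate:.2e} "
      f"({individualization_errors} errors in {total_comparisons:,} comparisons)")
# -> Agency ACE-V error rate: 1.67e-05 (2 errors in 120,000 comparisons)
```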
In several decades of fingerprint comparisons, the average expert fingerprint examiner may see only a few misidentifications resulting from the application, or rather the misapplication, of the ACE-V process. Considering that over a career many examiners compare hundreds of thousands of fingerprints, and that these hypotheses are verified, a low error rate is evident.
Statistical models of the probability that a rolled fingerprint impression will match another impression not of the same source have been published. It must be noted that these models use only a specific level or type of available characteristic, the Galton characteristic. Even with this limitation, it has been noted that the odds of a match of non-identical prints would be 1 × 10^97 to one. (Lockheed Martin) This astronomical number illustrates why fingerprint identification can be effected with much less information, such as the small section of friction skin commonly found in latent fingerprint impressions, where the odds were calculated at approximately 1 × 10^27 to one.
Other statistical probability models have been attempted since 1892, when Sir Francis Galton calculated the odds of a non-identically sourced match at 9.54 × 10^-7. There have been about sixteen other serious attempts. None of these studies were able to consider the qualitative aspects of the comparative process.
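The combinatorial flavor of these point-based models can be sketched with a toy independence assumption: if each of n Galton characteristics matches a random print with some probability p, the coincidental-match probability falls off as p^n. The values of p and n below are illustrative assumptions, not figures from Galton, Locard, or the Lockheed Martin study.

```python
# Toy point-based model: assume each of n Galton characteristics matches a
# random print independently with probability p, so a full coincidental
# match has probability p**n. Both p and n are illustrative assumptions.
def random_match_probability(p: float, n: int) -> float:
    return p ** n

for n in (8, 12, 16):
    odds = 1 / random_match_probability(0.1, n)
    print(f"{n} matching points at p = 0.1 each: about 1 in {odds:.0e}")
# 8 points  -> about 1 in 1e+08
# 12 points -> about 1 in 1e+12
# 16 points -> about 1 in 1e+16
```

Each additional point multiplies the odds, which is why point-rich rolled impressions yield astronomical figures while sparse latent fragments support correspondingly weaker statistics.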
“It (fingerprint statistics) is a very complex issue, … and there have been only a few attempts to construct a suitable statistical model (Osterburg et al., 1977; Sclove, 1979, 1980; Stoney and Thornton, 1986; Hrechak and McHugh, 1990; Mardia et al., 1992).” “Suppose a suspect has been chosen on the basis of fingerprint evidence alone. The underlying population from which the suspect has to be considered as being selected has been argued as being that of the whole world (Kingston, 1964). However, Stoney and Thornton (1986) argued that it is rarely the case that a suspect would be chosen purely on the basis of fingerprint evidence. Normally, there would have been a small group of suspects, which would have been isolated from the world population on the basis of other evidence, though Kingston (1988) disagreed with this. The fingerprint evidence has then to be considered relative to this small group only.” (Aitken, 1995)
The supporting statistical models do vary according to the total information available, and this is the crux of complexity within all forensic comparative analysis. If the information in a fingerprint impression falls below a relative threshold, the supporting statistics become insufficient for an individualization to be effected. This threshold is relative to the individual quality of the impression itself and can be considered as the information that the impression contains. Experienced and knowledgeable fingerprint experts will understand the implications of the statistical variances on the possibility of accurate fingerprint individualization. Fingerprints that are not of identification quality are quite common and are known as smudges.
Ultimately, the problem with statistical models is that the complex comparative process cannot be completely objective regarding the interpretation of reproduction qualities and degrees of distortion, as these features are not consistent. The pliability of the skin and the variation of the impressions require a degree of experience to properly evaluate these variances. Essentially, varying quality and interpretation of data will yield inconsistent statistics overall and will prevent precise statistical evaluation from being established. Accordingly, one must look at ACE-V methodology error rates as a practical estimation of real-world process errors.
It is important to note that this calculated ACE-V error rate cannot be applied to a specific ACE-V hypothesis of individualization, but only to the process as a whole, as each comparison varies in quantitative and qualitative information, with larger information sets having a higher overall probability of being accurately matched.
Fundamentally, accuracy is best approached through quality training, practical experience, and proper comparative methodology, including verification. Individual case accuracy is best understood via case review utilizing competent experts.
Fingerprint Identification: Acceptance in the Scientific Community
In the United States of America, all 50 states have utilized fingerprint identification for the purposes of individualization for many decades. Canada, Great Britain, and many other nations also use fingerprint identification. Fingerprint identification has been in general practice for about 100 years.
Fingerprint identification is also used in hospitals for newborn identification, and in the military for casualty identification and mass disaster victim identification. Several biometric companies also utilize various aspects of the science of fingerprint identification in the development, application, and engineering of their identification products.
C. Coppock
Reference:
Aitken, C.G.G. (1995) Statistics and the Evaluation of Evidence for Forensic Scientists. John Wiley & Sons Ltd., West Sussex, England.
Champod, Christophe (1995) Edmond Locard – Numerical Standards & “Probable” Identifications. Journal of Forensic Identification 45(2), 138.
Clark, John D. (2002) ACE-V: Is It Scientifically Reliable and Accurate? Journal of Forensic Identification 52(4), 401.
Heisenberg, Werner (1901-1976), German physicist (Nobel Prize, 1932).
Langenburg, Glenn, forensic scientist. The Detail, 9-30-02, clpex.com.
Lockheed Martin Corporation / Federal Bureau of Investigation (Meagher, Budowle, Ziesig 1999) 50K x 50K Fingerprint Comparison Test (unpublished).
Saferstein, Richard (1977) Criminalistics, p. 1. Prentice-Hall, New Jersey.
Wertheim, Kasey (2001) Daubert and Error Rate in Latent Print Testimony. clpex.com.
Quetelet, Adolphe (1796-1874), Belgian statistician.