Monday, March 27, 2006

The Individualization of Whom?

The relativity of fingerprint identification information
______________________________

The identification of individuals is becoming increasingly important in modern society. Kindergarten through 12th-grade schools now check the criminal histories of teachers. Special permits may also require a background check via fingerprint identification. Government and business security interests want to know who you are, and courts want to be sure that fingerprint testimony is accurate. What exactly takes place when examiners make a fingerprint individualization? Is it really as simple as making a comparison? Juries may believe just that. However, our responsibilities go far beyond offering an expert opinion. An examiner’s responsibility also includes determining what information is to be used and what value that information rightly deserves. We must also be aware that errors in fingerprint science may originate from a multitude of sources. An error is not necessarily a methodology error; errors may derive from a lack of knowledge regarding the evidence, or from simple clerical errors that are not immediately recognized. The important question examiners must constantly ask themselves is: to whom are you referring, and what is their relationship to the crime? Fingerprint examiners must ensure they are comparing the relevant persons.
One of the obvious benefits of fingerprint identification is to help solve crime. The fingerprint of a residential burglar found on a ‘point of entry’ window may be of great help in the prosecution of a criminal. Subsequently, investigators, prosecutors, judges, and juries base their decisions on a fingerprint examiner’s evaluation of the evidence. It is important to remember that the prosecutor, judge, and jury will not have had the opportunity to review the crime scene, thus they may not understand just what “value” the evidence has. Fingerprint identification is information: information about people that helps carry investigations forward. In order to offer the most accurate information possible we must be aware of all the variables related to that information. First we should look at what it takes to make a latent print comparison. The following list covers some of the basic requirements for a comparison.

“BASIC CONDITIONS REQUIRED FOR A PRINT COMPARISON”
A. Developed / Photographed latent print of sufficient quality.
B. Available exemplar prints of sufficient quality.
C. A qualified latent print examiner utilizing proper methodology.
D. A second qualified latent print examiner utilizing proper methodology.

(Only verification can uncover errors regarding provisional individualization and missed-provisional individualizations.)
This list represents the minimum requirements for fingerprint comparisons, whether the comparison involves a 10-print card or a latent print. Failure to follow proper methodology and established protocols relating to a fingerprint comparison is unscientific and encourages error and inaccuracy.
Assuming a provisional individualization is made and subsequently verified, we must now turn our attention to the latent print itself. We may or may not know the true origin of the latent print used in our comparison. A crime scene latent print impression may come into our possession in a number of different ways. In many cases we simply may not know how the latent print originated. We can only surmise the details of its deposition and subsequent collection. It is important to learn as much as possible about the evidence you are analyzing. A proper evaluation of the evidence will help you understand its value as a source of information. The following is a list of possibilities relating to a crime scene latent print impression. Depending on the actual scene conditions, a print may have more than one possibility.

“POSSIBILITIES FOR THE ORIGIN OF A CRIME SCENE LATENT PRINT”
A. The latent print was deposited on the item at time of manufacture.
B. The latent print was deposited on the item before its arrival at the scene.
C. The latent print was deposited on the item before the crime, yet after the item’s arrival at the crime scene.
D. The latent print was deposited on the item during the crime.
E. The latent print was deposited on the item after the crime.
F. The latent print was deposited after a specific date, such as a time of cleaning or availability.
G. The latent print was deposited at a known time due to limited access or recording of the event.
H. There is an error with the associated information related to the latent print’s origin.
I. The latent print is a lateral transfer. The original source may not be known.
J. The wrong latent was used in the comparisons.
K. The latent print is fabricated.
L. The information associated with the latent print is fabricated.
M. The crime itself is fabricated.

It is important to be aware of all these possibilities when evaluating a latent print impression. While several items on this list are very rare, the fact that they are possible is reason for consideration during the evaluation. The more accurate the evidence, the more valuable the information derived from a fingerprint comparison becomes. This illustrates the need for detailed information about the latent print as well as its source.
One of the ‘basic conditions required for a comparison’ is exemplar prints. Of course, in some cases unknown print impressions can be compared to other unknown prints to see if the source is the same; however, most cases will try to answer the basic question: “To whom do these prints belong?” For this, exemplar prints are needed. Exemplar prints must be of sufficient quality, and they should have accurate associated information. Exemplar prints with inaccurate data will lower the value of the information provided by an individualization or exclusion. In some cases inaccurate data can invalidate the information derived from the comparison.
Exemplar prints arrive in our possession in several different ways and can be made by several different mediums: ink, 300 dpi or 500 dpi live-scan, photographs, and so on. As with latent print impressions, it is imperative to understand the value and accuracy of the information provided by the exemplar. The following is a list of possible sources for exemplar prints.

“ACQUISITION SOURCES OF EXEMPLAR PRINTS”
A. Jail booking.
B. Permit applications.
C. Background / Criminal history reviews.
D. Alien registration.
E. Military files.
F. Voluntary submissions.
G. Indirect acquisitions. (Other agencies or historical records.)
H. Other legal documents.
I. Duplication of existing exemplars via computer, xerographic, or photographic reproduction.
J. Photographs of friction skin.
K. Developed latent prints purposefully used as exemplars.
L. Covert acquisition. (Latents as exemplars)
M. Exemplar with incorrect personal data.
N. Exemplar with incorrect print impression data.
O. Autopsy exemplars

When these sources are reviewed for the ‘value’ of the information they may contain, we see that there is room for various types of errors to enter the equation. Jail bookings may generate many different names and associated information for a single individual, at least until such time as the fingerprints are compared and the records consolidated. Yet after consolidation of the record, were the exemplars with the old data destroyed? The same is true for some of the other types of exemplars. With voluntary submissions we may not always know whether the person who submitted the prints is the same person who may be suspected of a crime or needed for elimination purposes. Does the exemplar card you want to compare carry the correct information but the wrong fingerprints? What would the value of a print exclusion be in such a case? Did someone insert the right card into the live-scan printer at the wrong time, only to have a different person’s fingers printed? Did some agency six states away send you the right “John Smith”? How good is the information typed fifty years ago on your only exemplar card? Did that person lie the first and only time they were arrested?
Fingerprint identification is excellent for record consolidation and for identifying individual print impressions; however, it is ultimately ineffective at telling us who is who and of what value the associated information may be. Examiners should remain keenly aware of the variables that accompany our print individualizations and our exclusions. This is a very important responsibility. Examiners must help maintain the integrity of all the information associated with print individualizations and exclusions, as examiners are often the best persons to recognize inconsistencies that may lower the value of the information provided. This is especially important within agencies that separate the latent print examiners from crime scene and evidence processing. Good communication is essential to the accurate evaluation of related information.

Craig A. Coppock

A Daubert Hearing for Fingerprint Identification

Legal Courts and Dactyloscopy
Summary of Topics

Scientific Evidence And The Expert Examiner.

In the 1990s the case of Daubert v. Merrell Dow Pharmaceuticals was a defining crossroads regarding the introduction of scientific evidence within federal courts. A decision within the case required that the science and the scientist meet specific criteria to ensure the validity of the testimony. The science itself must follow basic scientific protocol, and the scientist must be trained to competency. Ultimately, the judge is the gatekeeper of the court, and the judge now has new tools with which to evaluate evidence.
These new rules have created a needed division between science and non-science or pseudoscience.

Daubert: Scientific Evidence And The Expert Examiner.

Five main considerations are relevant to scientific evidence in federal court:
1. Whether the scientific theory/technique can and has been tested.
2. Whether the theory/technique has undergone peer review/publication.
3. What the known or potential rate of error is.
4. Existence and maintenance of standards controlling the technique’s operation.
5. Whether the theory/technique has gained general acceptance in the scientific community.

Fingerprint Identification
Fingerprint comparison science, dactyloscopy, is a forensic science. However, the designation “forensic science” does not separate forensics from the true meaning of science. The definition of forensic science is simply “the application of science to law.” (Saferstein) Fingerprint identification utilizes many scientific disciplines, such as biology, genetics, embryology, and statistics.

A general definition of a fingerprint specialist is a “practitioner of fingerprint science, with specific applied skill.” The scientific aspects are fingerprint identification’s foundations in the various scientific fields such as biology, embryology, chemistry, and statistics. The art aspect, as with many of the sciences, is the “specific applied skill.” Science itself can be divided into three main categories: theory, research, and application. Each of these areas utilizes a blend of pure scientific method and specific applied skill. Fingerprint identification has its conceptual foundation and its application in these areas.
The concept of (individualized) identification itself is based on the familiar fact that nature does not repeat itself. “Nature exhibits an infinite variety of forms.” (Quetelet) Every tangible object or person is different in innumerable ways. The closer something is examined, the more differences one will find. This difference, or detail, is information about the subject’s uniqueness. This is called the Principle of Individualization. The concept can be taken down to the atomic level, where Heisenberg’s Uncertainty Principle applies; this is the point at which uniqueness in form and concept ceases to have informational value.

Fingerprint identification is a comparison of friction skin or impressions of friction skin for purposes of evaluating the similarities and non-similarities of the permanent characteristics contained therein. Multiple impressions of the same area of friction skin will contain the same information within the spatial relationships of the characteristics themselves that have been reproduced. Fingerprint identification, specifically individualization, is based on this principle and can best be defined with the following premises:

- Friction skin ridge detail is unique and permanent, from birth until decomposition after death.
- The arrangement of this detail is also unique and permanent.
- Providing sufficient detail is present in the impressions, identification is possible.

In his book published in the early 1890s, Francis Galton presented his research on the permanence of the unique details found in friction skin. At that time permanence was inferred from studies of friction skin over decades. Sir William Herschel also studied the permanence of friction skin during this period. Today, over a century later, fingerprint specialists have the opportunity to study the uniqueness and permanence of friction skin detail over the entire lives of individuals. The discovery of permanence in the unique details used for identification was the cornerstone of the developing science of dactyloscopy.
Limitations of the identification principle would likely be found in a lack of information available for comparison. Without sufficient information with which to compare, neither exclusion nor individualization can occur. Theoretically, it is not possible to collect or evaluate all the information on any tangible item. Yet, provided that sufficient detailed information is present, identification is possible.
Generally, fingerprint impressions, both exemplar and latent, yield much more information than is statistically necessary for conclusive individualization of the impression’s source. This total information is not necessarily limited to the Galton characteristics of ending ridges, bifurcations, and dots. Other types of characteristics may also be present, including general patterns, ridge flow, and general ridge characteristics such as ridge edge structure and pores.

In 1973 the International Association for Identification, representing the United States and Canada, eliminated the practice of minimum point (characteristic) number standards; Great Britain followed in 2000. It was long realized that minimum standards based on Galton points alone were not logical within a holistic process. The very concept of identification must potentially be based on all the information available, not just one aspect of that information. The establishment of minimum point requirements was simply an effort to eliminate any debate on the issue. Edmond Locard’s 1914 statistical study on Galton points set the foundation for most established point minimums. The countries that do establish point minimums range from 7 points in Sweden to 16-17 in Italy. (Champod)


Testing

There are three testing components to fingerprint identification. The first is testing of the scientific concept of fingerprint identification. The second is the testing of two individual fingerprint impressions by means of comparison of unique characteristics. The third is the verification process in which a qualified peer reexamines the identification and conducts a second comparative test to ensure the accuracy of the identification.

With regard to the scientific testing of fingerprint identification, as with most statistical studies, it is not feasible or practical to take a concept to its ultimate test in order to figure reasonable odds for specific components of that test. To fingerprint every person in the world (almost 7 billion) and compare their friction skin is certainly not practicable, nor is it necessary. Fortunately, the sample sizes of the world’s fingerprint databases are extremely large; the Federal Bureau of Investigation has one of the largest, with about 270 million cards representing 2.7 billion fingerprints, plus additional palm and footprint impressions.

A test by Lockheed Martin Corporation and the FBI in the late 1990s showed that fingerprints can be reliably and statistically matched to each other provided sufficient information is available. This was applicable both to rolled fingerprints and to partial fingerprints representing crime scene latent fingerprints. 50,000 prints were compared to an additional 50,000 prints. The fingerprints used in the test were taken from the Federal Bureau of Investigation’s exemplar files. The compared prints were all of a single pattern type to further test the threshold for identification by eliminating obvious fingerprint pattern-type differences. Scores were then calculated from the information available from the comparisons. The test used rolled impressions of the last joint of the finger, and a second test utilized just a fraction of the original impression. The results showed a high degree of differentiation between identically sourced and non-identically sourced fingerprints, even when only a fraction of the information was used. The test also inadvertently discovered multiple duplicate fingerprints entered into the database; that is, fingerprints from a single individual had multiple entries in the test group.

A more practical test of fingerprint identification’s accuracy is performed on a regular basis in many fingerprint identification bureaus during training exercises. This involves the individualization of known persons by examiners in training. When a computer database is utilized for search and comparison, the candidates offered for comparison by the computer are generated from a database that may contain millions of fingerprints. Similar to part of the Lockheed Martin test, these searched fingerprints often represent only a small fragment of a fingerprint or of palmar friction skin. Biometric companies frequently test the accuracy of their equipment using known fingerprints. This demonstrates the fundamental effectiveness and validity of the fingerprint identification process.

Since a database may contain many millions of fingerprint files, the only logical explanation for these matches is the validity of the concept of fingerprint identification, dactyloscopy. Random matches would fail to produce the many thousands of matches made, as a random hit, according to statistical studies, would entail billions-to-one odds, depending on the size and quality of the database. Modern automated fingerprint identification systems (A.F.I.S.) have accuracy rates of about 95%. This means that the computer is able to find the correct match in its candidate list about 95% of the time. The remaining 5% does not indicate false matches, but rather that matching prints were not found in the candidate list after a search. See Chapter 11 for more information. Of the various automated biometric identification processes, friction skin individualization is the most accurate.
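The "billions-to-one odds" argument above can be made concrete with basic probability. The following sketch is illustrative only; the per-comparison random-match probability is a hypothetical figure chosen to represent "billions-to-one" odds, not a value from the article.

```python
# Illustrative arithmetic (hypothetical figures, not from the article):
# probability of at least one random false match across many independent
# comparisons, given a per-comparison random-match probability p.

def p_any_false_match(p: float, comparisons: int) -> float:
    """Probability of one or more random matches in `comparisons` trials."""
    return 1.0 - (1.0 - p) ** comparisons

# With "billions-to-one" odds (p = 1e-9), even a million independent
# comparisons leave the chance of a single random hit near 0.1%.
print(p_any_false_match(1e-9, 1_000_000))  # ~0.001
```

This is why thousands of confirmed database matches cannot plausibly be explained as coincidence: under random matching, even vast numbers of searches would be expected to produce almost none.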

The actual comparison process for fingerprint identification is known as the ACE-V process and has been outlined in detail by D. Ashbaugh in the publication Quantitative-Qualitative Friction Ridge Analysis (Ashbaugh 1999). While not every agency and examiner follows the exact terminology of the ACE-V methodology, training classes have shown that the underlying concepts of ACE-V are universal in formalized comparative methodologies. Fingerprint identification is based on various aspects of scientific disciplines, such as biology and statistics. While subjectivity is a component of the comparative analysis of the qualitative aspects of friction skin impressions, this does not ultimately discredit the process. Subjective aspects are invariably found throughout science, especially within the more complex biology-related subjects. Quality levels are qualitatively analyzed using comparative data based on the experience of the examiner. The ACE-V process is analogous to the scientific method. According to Pat Wertheim, a renowned United States latent print examiner, ACE-V follows the scientific method’s outline of observation, hypothesis, testing, conclusion, and reliable predictability. (Langenburg)
Biometric companies, including vendors of Automated Fingerprint Identification Systems (A.F.I.S.), also test on a regular basis to determine the effectiveness of their latest system developments. Any deviation from the expected and established concepts of fingerprint identification would be of considerable interest to fingerprint examiners. No research has noted a defect in the theory of biological uniqueness, or its sub-field, dactyloscopy.

“Although the earlier statistical models of random placement of ridge detail do not take into account all features of a fingerprint, each latent search being conducted on the A.F.I.S. system is attempting to disprove the theory of fingerprint individuality.” (Clark, 2002) “At least 1000 latent searches are conducted each day. This equates to 38.8-80 billion searches a day….” (when all factors are considered). (Clark, 2002) The overall accuracy of fingerprint identification has been demonstrated to be a product of the complete identification methodological process, including verification of an examiner’s conclusion. While blind proficiency tests are not a routine part of all fingerprint examiner training, blind testing is thought to be a useful training tool. Yet with regard to methodological accuracy rates, blind testing has not been shown to affect accuracy and reliability, as it is not representative of the entire ACE-V process.

Fingerprint Identification and peer review

The scientific field of forensic identification has a worldwide organization, the “International Association for Identification,” which produces the bi-monthly “Journal of Forensic Identification.” This organization and its publication include research on fingerprint identification. The general forensic science organization, the “American Academy of Forensic Sciences,” also has a publication, the “Journal of Forensic Sciences,” that likewise includes information and research on fingerprint identification. Research on fingerprint identification dates back over 115 years.

Numerous other publications exist including such books as:

- Quantitative-Qualitative Friction Ridge Analysis (Ashbaugh 1999)
- The Science of Fingerprints (Classification) (US Dept. of Justice, Rev. 1984)
- Advances in Fingerprint Technology (Lee, Gaensslen, 1991)

Peer review in general also includes the verification process, which is a standard part of the comparison methodology. It is not uncommon for a fingerprint comparison to be reviewed by experts from a different agency. The verification process can include expert hypothesis review by persons from local, state, and federal agencies as well as international organizations.
The verification process itself is typically a complete reanalysis of the stated hypothesis of individualization. While some critics of the process have stated that the verification process is not blind, it is rarely practical or logical to have a completely blind verification. The reason for this is that the search aspect of the comparison process is very time consuming; it would not be feasible to replicate this search process for each verification performed. It is important to note that when utilizing a computerized database search, the computer, using a special algorithm, generates a candidate list of subject fingerprints or palm prints for the examiner to compare. It would not be feasible to search the entire database [of millions] in order to verify a match made utilizing this information. Nor would it be necessary for an expert to re-check all other subjects, past or present, submitted for comparison in order to verify a current hypothesis of individualization.

Fingerprint Identification known rate of error

“Two types of error are involved (in fingerprint identification): Practitioner error and the error of the science of fingerprints. …nobody knows exactly how many comparisons have been done and how many people have made mistakes… (In regards to the error of science) the error rate for the science itself is zero.” (Wertheim) This notion is founded on two main principles: the first is biological uniqueness, and the second is that of sufficient information, which in turn supports the very concept of uniqueness. Provided the examiners are trained to competency and established methodology is followed, practitioner errors can be minimized. “There are only three conclusions a latent print examiner can come to when comparing two prints: Identification, Elimination, or Insufficient detail to determine identification. (With regard to the relevant conclusions of Identification) … the science allows for only one correct answer, and unless the examiner makes a mistake, it will be the correct answer.” (Wertheim) Human error does not invalidate the scientific process. This is where expert examiners can possibly arrive at different conclusions. A faulty analysis of available latent print information can skew a provisional hypothesis, and in rare events sequential faulty analysis can render false positives and false negatives in the comparison process. (Also see: Non-Specificity and Within Expert Problem Solving in Forensic Comparison, Academia.com)

To understand the rates of error involved in dactyloscopy, one must first understand the process involved. A provisional hypothesis is the product of the application of ACE. ACE-V hypotheses are the peer-reviewed conclusions of “individualization” or “exclusion.” The ACE-V result of “inconclusive” is simply a statement that insufficient information is available to render a conclusion.

The ACE-V process is the formal process acknowledged for comparative evidence submissions in federal court. Rates of error for this process must be calculated from results of the complete and proper application of the ACE-V methodology, not just ACE. Furthermore, it is a standard requirement of expert testimony that the expert be truly competent in the field in which they are to testify. Competency is paramount in any formalized comparative analysis. Thus, to calculate the rate of error for a practitioner using the ACE-V methodology, the following requirements must be considered in order for a hypothesis to have any scientific value.

ACE-V Error Rate Calculation Requirements.

A. Original examiner must be trained to competency.
B. A proper comparison methodology must be used.
C. Sufficient information must be present for comparison.
D. A provisional hypothesis must be applicable only to relevant comparison information.
E. Verification of the provisional hypothesis must be made by an examiner who is experienced and trained to competency.
F. Relevant information regarding the process and results must be documented.

Forensic science is the application of science with respect to established law. Ultimately, the goal of forensic individualization is twofold. First, all legal casework reports must utilize the proper application of the specific methodology, including peer review. Second, that methodology must be applied by properly trained and competent examiners.

Except for training and process-improvement purposes, it makes little sense to consider errors that are the result of the improper application of the established methodology. Unfortunately, this is a common error in itself. An example would be treating test scores as representative of the scientific process. Normal testing, including many blind studies, does not test the complete process. Test takers are frequently prone to guessing at answers, as a correct guess helps their score, whereas a question left blank counts as an error. Students, as well as under-trained examiners whose level of competency is insufficient for the proper application of the scientific methodology, are another source of errors in provisional hypotheses. Due to the complex nature of forensic comparison, even competent examiners must be peer reviewed to help ensure human errors are kept to an absolute minimum. Thus, it is helpful to think of ACE-V as the “ACEV” process: the methodology is a complete, unhyphenated whole. Simply put, comparative analysis requires verification.
The following calculation allows a practical estimation of overall ACE-V application errors per agency. Since agencies or labs work as groups in the required verification step, only a group error rate can be calculated, even though the ultimate task of the court is to understand whether the particular case at issue is accurate.

Error Calculation Of Known ACE-V Individualization Error Rates Per Agency or Laboratory.

A = Total ACE-V comparisons per agency per year.
B = Total ACE-V process application individualization errors.
C = B / A (the rate of ACE-V individualization errors relative to methodological applications)

This will provide a practical error rate that takes into account the real-world application of the ACE-V process. While it may be said that a properly applied methodology will yield the correct answer, ultimately we would like to estimate the practical application error rate of ACE-V itself. Most clerical errors that may cause identification documentation problems are sorted out and corrected at the verification level. In some cases, two verification steps are utilized. This second step, often called administrative verification or sequential analysis, is a quality control measure to ensure that the methodology was applied correctly, that the hypothesis was correct, and that related protocols were followed.
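The agency-level calculation above (C = B / A) can be sketched in a few lines of code. The figures used here are hypothetical and for illustration only.

```python
# Minimal sketch of the per-agency ACE-V error-rate calculation (C = B / A),
# using hypothetical figures that are not taken from any real agency.

def ace_v_error_rate(total_comparisons: int, individualization_errors: int) -> float:
    """C = B / A: verified individualization errors per ACE-V comparison."""
    if total_comparisons <= 0:
        raise ValueError("total comparisons must be positive")
    return individualization_errors / total_comparisons

# Hypothetical agency: 50,000 ACE-V comparisons in a year, 1 verified error.
rate = ace_v_error_rate(50_000, 1)
print(f"{rate:.6f}")  # 0.000020, i.e. one error per 50,000 comparisons
```

As the text notes, this yields a group rate for the agency or laboratory as a whole, not a probability that any particular individualization is wrong.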

In several decades of fingerprint comparisons, the average expert fingerprint examiner may see only a few false or missed identifications resulting from the application, or rather misapplication, of the ACE-V process. Considering that over a career many examiners compare hundreds of thousands of fingerprints, and that these hypotheses are verified, a low error rate is evident.

Statistical models of the likelihood that a rolled fingerprint impression will match another impression not of the same source have been published. It must be noted that these models use only a specific level or type of available characteristic, the Galton characteristic. Even with this limitation, the odds of a match between non-identical prints have been put at 1 x 10^97 to one. (Lockheed Martin) This astronomical number illustrates why fingerprint identification can be effected with much less information, such as the small section of friction skin commonly found in latent fingerprint impressions, for which the odds were calculated at approximately 1 x 10^27 to one.
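To see how large even the smaller figure is, a back-of-envelope calculation helps: count every distinct pair of prints in a large database and multiply by the random-match probability. The arithmetic below is my own illustration; it uses the 2.7-billion-print database size and the approximate 1-in-10^27 partial-print odds cited above.

```python
# Back-of-envelope sketch (illustrative arithmetic, not the article's model):
# expected number of coincidental matches among all distinct print pairs in
# a database, given a per-pair random-match probability.

def expected_random_matches(db_size: int, match_probability: float) -> float:
    pairs = db_size * (db_size - 1) / 2  # number of distinct print pairs
    return pairs * match_probability

# A 2.7-billion-print database holds about 3.6e18 distinct pairs; at
# p = 1e-27 the expected number of coincidental matches is still tiny.
print(expected_random_matches(2_700_000_000, 1e-27))  # ~3.6e-09
```

Even after comparing every print against every other print in a database of that size, fewer than one-in-a-hundred-million coincidental matches would be expected, which is why partial-print odds of this magnitude remain sufficient for individualization.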

Other statistical probability models have been attempted since 1892, when Sir Francis Galton calculated the odds of a non-identically sourced match at 9.54 x 10^-7. There have been about sixteen other serious attempts. None of these studies were able to consider the qualitative aspects of the comparative process.
“It (fingerprint statistics) is a very complex issue, … and there have been only a few attempts to construct a suitable statistical model (Osterburg et al., 1977; Sclove, 1979, 1980; Stoney and Thornton, 1986; Hrechak and McHugh, 1990; Mardia et al., 1992).” “Suppose a suspect has been chosen on the basis of fingerprint evidence alone. The underlying population from which the suspect has to be considered as being selected has been argued as being that of the whole world (Kingston, 1964). However, Stoney and Thornton (1986) argued that it is rarely the case that a suspect would be chosen purely on the basis of fingerprint evidence. Normally, there would have been a small group of suspects, which would have been isolated from the world population on the basis of other evidence, though Kingston (1988) disagreed with this. The fingerprint evidence has then to be considered relative to this small group only.” (Aitken, 1995)
The supporting statistical models do vary according to the total information available, and this is the crux of the complexity within all forensic comparative analysis. If the information in a fingerprint impression falls below a relative threshold, the supporting statistics become insufficient for an individualization to be effected. This threshold is relative to the quality of the individual impression itself and can be thought of as the information the impression contains. Experienced and knowledgeable fingerprint experts will understand the implications of these statistical variances for the possibility of accurate fingerprint individualization. Fingerprints that fall below identification quality are quite common and are known as smudges.

Ultimately, the problem with statistical models is that the complex comparative process cannot be completely objective regarding the interpretation of reproduction quality and degrees of distortion, as these features are not consistent. The pliability of the skin and the variation among impressions require a degree of experience to evaluate properly. Essentially, varying quality and interpretation of data will yield inconsistent statistics overall and prevent a precise statistical evaluation from being established. Accordingly, one must look at ACE-V methodology error rates as a practical estimation of real-world process errors.

It is important to note that this calculated ACE-V error rate cannot be applied to a specific ACE-V hypothesis of individualization, but only to the process as a whole, as each comparison varies in quantitative and qualitative information, with larger information sets having a higher overall probability of being accurately matched.

Fundamentally, accuracy is best approached through quality training, practical experience, and proper comparative methodology, including verification. Individual case accuracy is best understood via case review by competent experts.

Fingerprint Identification acceptance in the scientific community

In the United States of America, all 50 states have utilized fingerprint identification for the purposes of individualization for many decades. Canada, Great Britain, and many other nations also use fingerprint identification. Fingerprint identification has been in general practice for about 100 years.
Fingerprint identification is also used in hospitals for newborn identification, in the military for casualty identification, and in mass disaster victim identification. Several biometric companies also utilize various aspects of the science of fingerprint identification in the development, application, and engineering of their identification products.

C. Coppock

Reference:
Aitken, C.G.G. (1995) Statistics and the Evaluation of Evidence for Forensic Scientists:
John Wiley & Sons Ltd., West Sussex, England

Champod, Christophe (1995) Edmond Locard – Numerical Standards & “Probable”
Identifications. Journal of Forensic Identification 138/45 (2) 1995

Clark, John D. (2002) ACE-V: Is it scientifically Reliable and Accurate? Journal of
Forensic Identification 52 (4), 2002 / 401

Heisenberg, Werner 1901-1976 German physicist (Nobel 1932)

Langenburg, Glenn: Forensic Scientist, The Detail 9-30-02 clpex.com

Lockheed Martin, Federal Bureau of Investigation (Meagher, Budowle, Zieswig 1999) (50k x 50k Fingerprint Test) (unpublished)

Saferstein, Richard (1977) Criminalistics, p. 1: Prentice-Hall, New Jersey

Wertheim, Kasey (2001) Article: Daubert and Error Rate in Latent Print Testimony.
clpex.com

Quetelet, Adolphe; Belgian statistician.

Minimum Information And Fingerprint Identification

Minimum Information And Fingerprint Identification
_____________________________________________________________________

The concept of forensic identification is based on the evaluation of information. With fingerprint identification, information is analyzed and compared to available exemplars and other sources to determine whether the impressions in question originated from one and the same source. Fingerprint identification information sources are generally divided into three levels. Level one is macro detail, such as ridge flow and pattern type. Level two comprises the Galton characteristics, or points of identification, such as bifurcations and ending ridges. Level three information is contained in the structure of the ridges themselves. Of course, multiple types of forensic information can be found in a fingerprint. Not only can all three levels be present, but other information can also be available, such as DNA and chemical information, as well as information contained in the distortion of the print itself. These additional sources of information are not considered formal levels of comparison, yet they may offer additional means of individualization and/or evidence correlation.
For many decades, print examiners have been asked during testimony, “What is the minimum point requirement for print identification?” At one time this was an acceptable question, since many countries required a minimum number of Galton characteristics before an identification was legally accepted. However, research into the statistical study of fingerprint identification has shown that there is no statistical foundation for a minimum point requirement. Level three detail, when present in an impression, is valid for comparison since it too is permanent and unique. Accordingly, the question of how much non-Galton detail is needed is moot, since the statistical nature of randomness is in itself unique. Likewise, the information is too complex to be effectively quantified for statistical model comparison. The detail being analyzed in a fingerprint comparison is potentially the most multifarious of any forensic science.
An understanding of the information being analyzed, and how it is being analyzed, is the key to deducing its value for identification purposes. Information is always found in related groups of varying content. This can be thought of as a principle of minimum information. A single bit of information is not possible, because at the very least it will have a relationship with one other bit of information, such as its opposite value. The very fact that an identifying characteristic is present is information. Its minimum opposite value would be the fact that we know it is not missing. By this reasoning, the lack of information also has value. Additional information allows further correlation at relative quality and quantity values. Information can be evidence. Its proper evaluation and correlation is imperative.
Thus, the discovery and documentation of a single characteristic generates more information than the simple fact of that characteristic’s existence. With comparative analysis, it generates information on its relative position, size, shape, etc. Uniqueness, such as that of a Galton characteristic, is simply a large grouping of information. Hence, uniqueness can be defined as: sufficient information that allows for a relative distinction or possibly an individualization.
With fingerprint identification, a variable threshold for individualization can exist based on quantitative and qualitative information values. With threshold comparisons, this grouping aspect of discovered information is more noticeable, as the examiner focuses attention on the limited details. Threshold fingerprint identifications are based on the evaluation of groups of related information of varying quality. Accordingly, there is never a single bit of information that makes the difference between a conclusion of individualization and non-individualization.
Most statistical models that have illustrated the concept of fingerprint identification have been limited to specific levels of detail, such as Galton points. The exclusion of third-level detail omits the very foundation of forensic identification; its inclusion would further strengthen the statistical models. However, even if the complexity of statistical models involving third-level detail could be overcome, we would still encounter the minimum information principle and its relative nature.

Following is a list of the relationships among level two friction ridge characteristics. The list is extended to 75 minutiae, but this is not the limit; palm print impressions could contain many more characteristics. Each relationship is counted only once. This is an idealized list, as normal friction skin characteristics would not necessarily have a direct line of sight from every characteristic to every other characteristic, yet it does illustrate the rapid (quadratic) increase in possible relationships. The count follows Metcalfe's law of network nodes, ½N(N−1); it is divided by 2 because a connection between two specific minutiae is counted once, not twice. With forensic comparison, level one and level three information would be added to this level two information for a more comprehensive analysis.


Minutiae   Unique Relationships   Ratio    Fraction
1          0                      -        -
2          1                      -        -
3          3                      1:1      -
4          6                      1:1.5    0.666
5          10                     1:2      0.5
6          15                     1:2.5    0.4
7          21                     1:3      0.3333
8          28                     1:3.5    0.2857
9          36                     1:4      0.25
10         45                     1:4.5    0.2222
11         55                     1:5      0.2
12         66                     1:5.5    0.1818
13         78                     1:6      0.1666
14         91                     1:6.5    0.1538
15         105                    1:7      0.1428
16         120                    1:7.5    0.1333
17         136                    1:8      0.125
18         153                    1:8.5    0.1176
19         171                    1:9      0.1111
20         190                    1:9.5    0.1052
21         210                    1:10     0.1
22         231                    1:10.5   0.0952
23         253                    1:11     0.0909
24         276                    1:11.5   0.0869
25         300                    1:12     0.0833
26         325                    1:12.5   0.08
27         351                    1:13     0.0769
28         378                    1:13.5   0.074
29         406                    1:14     0.0714
30         435                    1:14.5   0.0689
31         465                    1:15     0.0666
32         496                    1:15.5   0.0645
33         528                    1:16     0.0625
34         561                    1:16.5   0.0606
35         595                    1:17     0.0588
36         630                    1:17.5   0.0571
37         666                    1:18     0.0555
38         703                    1:18.5   0.054
39         741                    1:19     0.0526
40         780                    1:19.5   0.0512
41         820                    1:20     0.05
42         861                    1:20.5   0.0487
43         903                    1:21     0.0476
44         946                    1:21.5   0.0465
45         990                    1:22     0.0454
46         1035                   1:22.5   0.0444
47         1081                   1:23     0.0434
48         1128                   1:23.5   0.0425
49         1176                   1:24     0.0416
50         1225                   1:24.5   0.0408
51         1275                   1:25     0.04
52         1326                   1:25.5   0.0392
53         1378                   1:26     0.0384
54         1431                   1:26.5   0.0377
55         1485                   1:27     0.037
56         1540                   1:27.5   0.0363
57         1596                   1:28     0.0357
58         1653                   1:28.5   0.035
59         1711                   1:29     0.0344
60         1770                   1:29.5   0.0338
61         1830                   1:30     0.0333
62         1891                   1:30.5   0.0327
63         1953                   1:31     0.0322
64         2016                   1:31.5   0.0317
65         2080                   1:32     0.03125
66         2145                   1:32.5   0.0307
67         2211                   1:33     0.0303
68         2278                   1:33.5   0.0298
69         2346                   1:34     0.0294
70         2415                   1:34.5   0.0289
71         2485                   1:35     0.0285
72         2556                   1:35.5   0.0281
73         2628                   1:36     0.0277
74         2701                   1:36.5   0.0273
75         2775                   1:37     0.027
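The relationship counts in the list above follow the ½N(N−1) formula directly, so they can be reproduced for any number of minutiae. The following short Python sketch (the function names are the author's own, chosen for illustration) computes the three derived columns: unique relationships, the relationships-per-minutia ratio, and the minutiae-to-relationships fraction:

```python
# Unique pairwise relationships among N minutiae, per 1/2 * N * (N - 1)
# (Metcalfe's law: each connection between two nodes is counted once).

def unique_relationships(n: int) -> int:
    """Number of unique pairs among n minutiae."""
    return n * (n - 1) // 2

def table_row(n: int) -> tuple:
    """One row of the table: (minutiae, relationships, ratio, fraction)."""
    r = unique_relationships(n)
    ratio = r / n                    # relationships per minutia: (n - 1) / 2
    fraction = n / r if r else None  # minutiae per relationship: 2 / (n - 1)
    return n, r, ratio, fraction

for n in (4, 10, 75):
    print(table_row(n))
```

Because the relationship count grows with the square of the number of minutiae, each additional characteristic contributes proportionally more correlating information than the one before it, which is the point the table illustrates.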

Craig A. Coppock Updated: 27MAR2016