Monday, December 12, 2011

Non-Specificity and “Within-Expert” Problem Solving Consistency in Forensic Science


 

Abstract:  Repeated analysis is subject to noise in the underlying cognitive processes, noise that can be minimized but never fully eliminated.  Accordingly, some variability in analytical results should be expected.  This noise is the variability in the data, in perception, in the examiner's knowledge base, and so on.  Noise is best mitigated with high-quality training and experience that produce a strong awareness of the noise relevant to the analysis and of its negative effects on cognitive processes.

   
Our search for hypothesis consistency when solving problems in forensic comparison is often measured in practice by known-error and proficiency testing. Then there is "within-expert" testing, in which known problems are repeatedly presented to experts for evaluation.  Inconsistency in forensic conclusions has been a persistent topic of discussion relative to the accuracy of comparative forensic science, which includes fingerprint individualization.  Studies have shown that there is an error rate not only between experts but within a single expert's analyses, where an expert faced with the very same problem arrives at a different conclusion [1]. However, this problem is not surprising, nor can it be fully mitigated.
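
As a rough illustration of how noise alone can produce within-expert inconsistency, the sketch below models repeated evaluation of the very same comparison as a fixed true similarity plus random perceptual noise, judged against a fixed decision threshold.  The similarity value, noise level, and threshold are hypothetical, chosen only to demonstrate the effect; they are not drawn from any study or actual examination criteria.

```python
import random

def evaluate(true_similarity, noise_sd, threshold):
    """One evaluation of the same comparison: perceived similarity is the
    fixed true similarity plus random perceptual noise, compared against
    a fixed decision threshold."""
    perceived = random.gauss(true_similarity, noise_sd)
    return "identification" if perceived >= threshold else "inconclusive"

# Hypothetical values: a borderline comparison evaluated 1,000 times.
random.seed(1)
trials = [evaluate(true_similarity=0.62, noise_sd=0.05, threshold=0.65)
          for _ in range(1000)]

consistency = trials.count(trials[0]) / len(trials)
print(f"Agreement with the first conclusion: {consistency:.1%}")
# A borderline problem plus modest noise yields conflicting conclusions
# across repetitions, even though nothing about the problem itself changed.
```

The point of the toy model is simply that when a problem sits near a decision boundary, even small moment-to-moment variability in perception or recall is enough to flip the conclusion on a repeat presentation.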

An interesting point is that if the comparison is not recognized as identical to one encountered in the past, then only the testers hold the concept of "identical."  From the examiner's perspective, the problem must be approached holistically once again, using expertise and strategy that will differ from the last time the problem was analyzed.  Even recall of experience-based detail will be slightly different.  Since the analytical process will again be applied uniquely (as in the concept of non-specificity), there is a real and practical probability of a new and possibly conflicting hypothesis.  At first, this sounds serious.  However, the same could be said of most any human endeavor, and has been said of science as a whole by Thomas Kuhn in his book "The Structure of Scientific Revolutions."  Here, though, we wish to take this concept down to the individual's analytical level.  A familiar analogy is a car lapping a racetrack, where each lap is very similar yet essentially different.  Even though the racetrack, the car, and the driver change very little from lap to lap, the process as a whole is different, and the cognitive and physical effort applied is different with each lap.  Non-specificity is the concept that "within human cognitive applications a specific cognitive information set can never be utilized more than once to effect solutions." [2]  Accordingly, the most important point is the successful solution of the problem within professional analytical expectations, not how the solution was exactly achieved.  In forensic comparison, each comparison is likewise approached uniquely.  The original specific information will be different, and the analytical process will also be different.  Even if the difference in the information available or utilized is slight, it can be sufficient that:

A. The examiner fails to recognize the correct solution

B. The difference is now sufficient for the discovery of a correct solution

C. An expanded solution is discovered where new aspects are comprehended 

 

In essence, new information can initiate a phase transition in a positive or a negative way, depending on how the information is utilized and understood.  In Thomas Kuhn's work this is addressed at the higher level of paradigms: "…paradigms are observationally incommensurable in that workers in different paradigms will respond in different ways to the same stimuli… they see different things when looking at the same places." [3]  The principle of non-specificity takes this concept down to the actual working level of a single problem, where even the stimuli are different to some degree.

The important point is not a reminder that humans err, but rather that we must be sufficiently proficient to understand the ever-changing conditions and variable noise in which we execute our analytical applications.  We will need to maintain a strong focus on advanced training, research, and proper application of our methodologies to reduce this potential error.  In regard to the analysis stage of the forensic comparison methodology "Analysis, Comparison, Evaluation, Verification" (ACE-V), or the scientific method in general, it is very important to understand quality (noise) variables such as distortion, perspective, and bias.  As with noise in general, understanding what bias is and how it affects our analytical process can help us minimize its negative influence on forensic science.  With the rapid expansion of forensic applications in law enforcement and the military over the last few decades, we need to take a closer look at how to improve training related to scientific guidelines that address process noise.

There is an immeasurable need for continued education and advanced forensic comparison training at all levels of government: federal, state, and local.  Academia can help us focus on specific issues and research findings, yet it is up to us to fill in the gaps with improved knowledge.  With the combination of new examiners in the field, a reduction in agency-funded education, and a shift toward remote learning, we need to be proactive and create new training solutions to maintain high levels of accuracy.

 

Another simple analogy to help us understand this training issue is that of aircraft flights.  When jets fly without incident, they draw little attention to their normal maintenance; when they crash, the world wants to know everything.  My goal is to bring some of our "normal flying time" to light.  What frame of reference do we really work in, and what significant data noise is present?

Science is a process that often relies on estimates and probability rather than exactness and absolutes. Mother Nature absolutely does not like to provide exactness.  If we can better understand how and where we encounter such "estimation and probability issues," we can improve our analytical processes.  The key is to build a better personal knowledge base.
The need arises from the fact that inference is a holistic process that leverages our experience-based knowledge base, and that our analytical inferential data vary from moment to moment.  New information is recognized and evaluated within a matrix of variable noise.  It is the old saying "you never step in the same river twice," revised: "you never utilize the same cognitive information sequence twice," as in the non-specificity principle.  There is always new information and noise in the information set as it is evaluated forward in time.  This noisy information can be relationships, degrees, insights, and, of course, information we may have forgotten, overlooked, or failed to recognize.  We must also realize that this logic process includes inferential sequences that are nonmonotonic.  Reevaluation of an information set can, and in some instances should, result in a different conclusion.
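
Since nonmonotonic inference may be an unfamiliar term, the sketch below gives a minimal, hypothetical illustration: a tentative conclusion drawn from one information set is withdrawn when the set is reevaluated with an additional observation.  The features and the decision rule are invented for illustration only and do not represent actual ACE-V criteria.

```python
def conclude(information_set):
    """Toy nonmonotonic rule: tentatively conclude 'identification' when
    enough corresponding features are noted and no unexplained discrepancy
    is present; otherwise conclude 'inconclusive'."""
    features = information_set.get("corresponding_features", 0)
    discrepancy = information_set.get("unexplained_discrepancy", False)
    if features >= 8 and not discrepancy:
        return "identification"
    return "inconclusive"

# First evaluation: the examiner notes nine corresponding features.
first_pass = {"corresponding_features": 9}
print(conclude(first_pass))    # identification

# Reevaluation of the same problem, now with one additional observation.
second_pass = {"corresponding_features": 9,
               "unexplained_discrepancy": True}
print(conclude(second_pass))   # inconclusive
# Adding information withdrew the earlier conclusion; that reversal under
# an enlarged information set is what makes the inference nonmonotonic.
```

The design point is that such reversals are not failures of logic; they are the expected behavior of inference over information sets that grow and shift over time.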

These issues do not invalidate the cognitive process applied to these problems; they simply reflect the limitations of a human mind operating within the confines of a limited knowledge base paired with noise.

 

Craig A. Coppock 2011

Updated 20210219

 

1.  Itiel Dror, Ph.D., and Robert Rosenthal, Ph.D., "Meta-analytically Quantifying the Reliability and Biasability of Forensic Experts," Journal of Forensic Sciences, 2008.

 

2.  Craig A. Coppock, "Principle of Non-Specificity," 2011, Academia.edu, Blogspot, ResearchGate.

 

3.  Thomas Kuhn, The Structure of Scientific Revolutions; "Philosophy of Science," p. 68, Encyclopaedia Britannica, app edition, 2020.
