Sunday, September 20, 2020

The Fourth Conclusion in Latent Fingerprint Comparison


Fingerprint Individualization and Undiscovered Matches

 

            For many years latent print and 10-print examiners have attempted to reduce the comparison process to three possible conclusions.  For the most part, examiner errors have been disregarded when considering the possibilities, because an error made by an examiner who did not follow established scientific methodology, such as ACE-V, along with accepted field guidelines, is not attributable to the methodology itself.  Essentially, proper application of the methodology will yield correct results, and those results can be verified and supported within specific scientific models.  This is the essence of the science of friction skin individualization.  The three possible conclusions resulting from friction skin comparisons are listed below.  Is there room for a fourth?

 

                        Three Conclusions of friction skin comparisons.

1.     Individualization

2.     Exclusion  

3.     Insufficient quality and quantity of information for comparison use.

4.     ?

 

For many years we have formally ignored the fourth possibility.  Many cases reported as exclusions should actually be annotated with the fourth conclusion: no match found.  These comparisons often lack the key groups of information that the exclusion process requires, and those groups of information are frequently the necessary component that allows exclusion to be stated with certainty.  Proof of this is found in a simple analysis of the Latent Print Examiner certification test.  Many examiners are unable to offer definitive conclusions of individualization or exclusion even when they know that all the prints in question are identifiable.  In many cases the examiner cannot recognize all the possible individualizations.  Of course, the test is conducted under a time constraint, yet time limits apply to nearly all friction skin comparisons.  If examiners did not know that all 15 latent prints were indeed identifiable, they would be forced, in many cases, to offer the fourth possibility of “no match found.”  Even setting aside the time limits, many examiners would not have sufficient information or knowledge to offer exclusion; and with regard to the test, doing so would be an error, since the print would not in fact be an exclusion.  This problem of probability compounds itself when the search parameters are increased.  It is not really acceptable to assume that “no conclusion” is a correct course of action.(1)  In the case of the test, the absence of an answer implies that the print was not compared or that the match was not found.  Even a failure to commit to individualization would fall under the fourth option.  We must also consider plantar impressions, which are rarely compared, as well as third level detail.  Locating a third level detail match among numerous exemplars is a daunting, time-consuming investigation, and the probability of accurately locating such low detail impressions is low.  Again, the fourth option of “no match found” would be the appropriate conclusion. 

 

 

                        Four Conclusions of friction skin comparisons.

1.     Individualization

2.     Exclusion

3.     Insufficient quality and quantity of information for comparison use.

4.     No Match Found

 

In real world examinations of 10-print and latent print impressions, examiners do not know in advance whether the print can be identified to any of the subjects provided for comparison.  Since the main aspects of the print recognition and investigation process are experience based, the fourth option acknowledges the possibility, depending on the case, that qualified examiners may not always locate the impression.  This must be considered; it is not possible to separate the examiner’s knowledgebase and investigative efficiency from the process.  Accordingly, the fourth option of no match found cannot be considered a true error, since the probability of finding a match varies with examiner experience as well as with the qualitative and quantitative aspects of latent and exemplar friction skin impressions.  While individualization (friction skin recognition) can be formalized for scientific evaluation, the investigative aspects involved in the recognition process are an art that leverages science at many levels.  Can there be true errors in the art of interpretation?  The recognition process starts as a basic investigation, analogous to a crime scene.  Information is sorted and evaluated based on experience, which includes formal and informal training.  Points and issues of fact are recognized, analyzed, and evaluated, and this new information is carried forward into future applications of this and other recognition processes.  In crime scene investigation it is rarely possible to discover all the items and facts of evidence; indeed, such an outcome would be considered highly improbable.  Furthermore, it is not always possible to know whether an item of evidence remains undiscovered.  During the investigative phase the probability of undiscovered evidence prevents the investigator from adhering to a regimen of absolutes.  This probability can be reduced by the application of a scientific methodology such as ACE-V (Analysis, Comparison, Evaluation, Verification), and in general the verification of exclusions and undiscovered matches can increase the accuracy of the comparison process.  However, since proving exclusion can be far more difficult than proving individualization, in some cases the fourth option may be the only option.  

 

There is a fundamental difference between individualization and the inability to prove exclusion.  When done correctly, and assuming sufficient information is present, individualization and exclusion can be formalized and supported with a valid scientific model.  However, evidence that has not been discovered cannot be supported with a scientific model.  Thus, the fourth option is necessary.

 

Craig A. Coppock 20040130

Updated 20200920

 

Reference:

1. Coppock, Craig. "The Science of Exclusion: Fingerprint Individualization | ACE-V | Scientific Method." Blogspot.com, Academia.edu, ResearchGate.com.

Monday, June 26, 2017

The Scientific Method: Information Theory's Foundation to the Scientific Method

-  Scientific Method -

Information Theory's Foundation to the Scientific Method

- An Outline for Research -

 

            “We look to science for answers, but scientists are driven by questions and to them an answer is merely a prelude to another question.”  - George Musser

 

Abstract: This outline highlights information theory as a probable foundation for the scientific method. The method’s process is overlaid with Claude Shannon’s 1948 ‘Communication System’ to illustrate points of equivalence. With this idea, the concept of noise is understood at the root level, allowing for its consideration at all levels and steps of the method’s application. In this case we are not trying to maximize channelized information and efficiency; rather, we are overlaying the communication system with related cognitive complexities to understand the process, timing, and information value.  

 

            The dawn of our information and scientific era can be attributed to the formally applied use of the scientific method. Prior to this key transition we operated on informal, untested intuition, tradition, speculation, and guesswork to best understand our world. The scientific method is a process of discovery via feedback from an applied investigative system. Merriam-Webster defines “scientific method” as “principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses.” It is also how we think.  The words Scientific Method were first used around 1672.(1) The loosely defined method’s steps are represented as general phases, including observations, questions, hypotheses, testing, new questions, modifications, re-testing, and theory. However, what exactly are we doing when we utilize the Scientific Method if it is loosely defined? The primary point of the method is to apply observation, logic, data, and testing to discover relevant and accurate information that builds our personal knowledgebase. Technically this process is applicable to cognitive functions in general. This discovery process must be partnered with the threshing and winnowing of noise within the steps. This is our cognitive selective focus, which allows us to pursue specific information within a larger context. The general background noise is a matrix that associates with the targeted information and obscures it to various degrees, both within the message and within the processing of that message. Without a properly reasoned and applied analytical approach, management of the noise may prove ineffective, and our analysis may prove incomplete, with relevant information left undiscovered.  

Due to the extensive use and the important nature of the scientific method, a review of this generalized system [and our cognitive processes] is in order to further understand its phases and the embedded noise matrix, for purposes of enhanced error mitigation and a more complete understanding of specific phases of the method. Integration of the scientific method with Claude Shannon’s 1948 “Communication System” and the evolutionary follow-on “Information Theory” is the core of this approach.  Merriam-Webster defines “information theory” as “a theory that deals statistically with information, with the measurement of its content in terms of its distinguishing essential characteristics, or by the number of alternatives from which it makes a choice possible, and with the efficiency of processes of communication between humans and machines.” The words Information Theory were first used in 1948.(2)

On the surface the “Scientific Method” and “Information Theory” seem disparate when referenced against these general and incomplete definitions, yet this is not the case. “In its most general form, a computational system is nothing more than a dynamical system defined on a system of “computational states” … It may be understood in terms of a flow…”(3) and in this case we also consider the formal versus informal application of cognitive processes. The parallels of Claude Shannon's "Communication System", and Information Theory in general, to the scientific method highlight the focused analytical processes and the critical introduction of pervasive stochastic variables, known as noise, common to this rational behavioral system. Uniqueness, complexity, and the generation of information and value within the process and our cognitive aspects must be understood in context to further combine the scientific method and information theory, both for improvements of the method and in artificial intelligence programming. The concept of non-specificity in replication (each complex analytical session and related sub-session is unique, which limits process replication accuracy) is simply the effect of these chance variables on the communication process as a whole.(4) This uniqueness in the system, in which known data, measurements, and testing can only be approximated to various degrees, is analogous to "noise" in the "Equivocation and Channel Capacity" discussion of Shannon's Communication System, where a noisy information channel prevents reconstruction of the original message, or the transmitted version, with certainty by any operation on the received signal.(5) This research into the foundation of the Scientific Method allows us to build a better model for improved efficiency, error mitigation, and overall guidance in doing science, cost-benefit analysis, or simple everyday decision making for best utility.  
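
To make the equivocation idea above concrete, here is a minimal sketch (a hypothetical illustration, not part of the original outline) of a binary symmetric channel in Python: bits are flipped with probability p, the observed error rate is compared against Shannon's equivocation H(p), and the capacity 1 - H(p) follows. The function names and parameter values are assumptions chosen only for the example.

```python
import math
import random

def binary_entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) source."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def simulate_bsc(message_bits, flip_prob, seed=1):
    """Pass a bit sequence through a binary symmetric channel (noise flips bits)."""
    rng = random.Random(seed)
    return [b ^ (1 if rng.random() < flip_prob else 0) for b in message_bits]

if __name__ == "__main__":
    p = 0.1                                   # assumed crossover (noise) probability
    sent = [random.getrandbits(1) for _ in range(10_000)]
    received = simulate_bsc(sent, p)
    errors = sum(s != r for s, r in zip(sent, received))
    print(f"observed error rate: {errors / len(sent):.3f}")
    # Shannon's result: equivocation H(X|Y) = H(p) bits per symbol,
    # so the capacity of this channel is C = 1 - H(p) bits per use.
    print(f"equivocation H(p)  : {binary_entropy(p):.3f} bits/symbol")
    print(f"channel capacity C : {1 - binary_entropy(p):.3f} bits/use")
```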

It has long been known that measurements in general can have limited value. The item or energy in question must be able to interact with our various tools to be observable; otherwise we would miss the information or event and fail to incorporate it into our analysis. “Our knowledge of the external world cannot be divorced from the nature of the appliances with which we have obtained the knowledge.”(6) A particular point with cognitive processes is that most measurements are estimates. Estimates of probability, averages, subjective considerations, or simply educated guesses enter the cognitive processing system. The unwanted introduction of stochastic noise, as well as these relative limitations on measurements, prevents exact knowledge and prevents our ability to reproduce the exact methodology used. In addition, we must remember that we do not use all available information in our observations or analysis. We select or parse specific information to ease complexity, improve speed of analysis, and focus our efforts on a data set or goal. However, this does not preclude our ability to reach a specific result at the end of this complex process, such as a particular value within an acceptable degree of error and within a suitable amount of time. In essence, our various measurements are consistently imperfect, yet we intuitively understand this in context. Thus, our averages and approximations are generally sufficient to take our cognitive process rapidly forward with reasonable accuracy and in step with the original communication system.  See Figure 1.   
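
The point that measurements are estimates, and that averages are usually sufficient, can be sketched with a toy simulation: a fixed quantity is observed through Gaussian noise, and the average of many imperfect readings settles near the truth without ever being exact. The true value, noise level, and sample sizes below are arbitrary assumptions for illustration.

```python
import random

def measure(true_value, noise_sd, rng):
    """One imperfect observation: the true value plus Gaussian noise."""
    return rng.gauss(true_value, noise_sd)

def estimate(true_value=10.0, noise_sd=0.5, n=100, seed=7):
    """Average many noisy readings; the estimate is 'sufficient', never exact."""
    rng = random.Random(seed)
    readings = [measure(true_value, noise_sd, rng) for _ in range(n)]
    return sum(readings) / len(readings)

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        print(f"n={n:5d}  estimate={estimate(n=n):.3f}  (true value 10.000)")
```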

A general preferred direction of inquiry, rather than a random pathway, is evident even though our cognitive process operates holistically. The process feedback of correlated information steers the follow-on analysis as our knowledge improves. An analogy is the branching of a tree, which has a general macro fractal pattern yet is unique at smaller scales. Similarly, someone experienced in a task, and in its associated information, will generally perform better at the relevant analysis than would a layman.(7)  Essentially, the experienced person or expert has established an improved and more relevant knowledgebase, via a high level of sensory input and Boolean sorting, that is referenced during the analysis of a message or task.  S. Wolfram suspects only simple systems are required to store and retrieve information in the form of memory.(8) It is the relevancy of the memory itself, and our rapid sorting and referencing ability, that improve the efficiency and value of the analysis.

There can be infinite pathways to arrive at a correct solution and infinite pathways to arrive at incorrect solutions, with our experience and knowledge helping guide the process forward as observation, data, hypothesis, and testing are repeated in many small, formalized logic steps; simple questions often feed the larger goal or establish a relevant informational relationship within an overall strategy. It has been shown that simple programs utilizing concepts of cellular automata display rapidly increasing complexity from simple instructions. Furthermore, these programs produce a mix of regular and irregular results that are impossible to predict with any certainty.  In essence, simple rules can produce highly complex behavior.(9) Thus, the relatively simple process of our cognitive problem solving within the structure of the communication system (fig. 1), with its holistic and stochastic aspects, value, and feedback operating within a matrix of increasing complexity, is no surprise. Again, a simple process can quickly lead to a high level of complexity, including that arising from random aspects originating both outside and within the analytical process. From a reductionist view, this incalculable complexity of cognitive problem solving is reduced to a few basic steps within a matrix that contains randomness and complexity, hence a model. A primary challenge will be to grasp the concept of information value and relativity, and how it relates to the order or construct of information and to our evolving personal knowledgebase. 
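
A standard example of "simple rules, complex behavior" is Wolfram's Rule 30 cellular automaton; the short sketch below is offered as an illustration of the idea rather than as a detail taken from the cited reference, and the width, step count, and wrap-around boundary are assumptions made for the example.

```python
def rule30_step(row):
    """One update of Rule 30: new cell = left XOR (center OR right)."""
    n = len(row)
    return [
        row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n])   # wrap-around boundary
        for i in range(n)
    ]

def run(width=63, steps=30):
    """Start from a single black cell and print the evolving pattern."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if cell else "." for cell in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run()   # a one-line rule yields an intricate, hard-to-predict triangle of cells
```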

The process envisioned: something like an inverse partial differential diffusion equation may provide a suitable outline of understanding, in which a high level of diffusion represents a nonspecific and holistic set of potential information packets that are selectively processed to a focus by iteration of the scientific method’s steps. Essentially, a diverse richness of correlation in the first analytical phases of problem solving is infused with new information and correlations to keep the process evolving while increasing the value of the accompanying knowledgebase. A bit of additional positive or negative information, via correlations and iterations, can lead to new insights and epiphanies when that information is relevant and ultimately understood in context within one's unique timeline. This does not happen in a single simple message decoded and analyzed, but rather in a large series of rapidly processed information sets or packets that build the primary knowledgebase against a relative background of simpler automated sensory-based awareness/monitoring subroutines that use a similar rapid process. This personal knowledgebase of parallel sensory input within a cascading series is key in allowing improved development and positive feedback of the process.  In essence, our strategic non-zero-sum cognitive efforts have produced new information we can leverage, like a tree whose leaves and branches consolidate information into a heavier, more robust limb. While this paper focuses on the base level of information comprehension, the next higher level of information processing and memory construction is still being researched.  However, work has been done on organization at the info-metric level, where logic and modeling are applied to the resulting knowledgebase of noisy, imperfect information for specific results and categorical understanding.(9a) This layered approach to understanding information coincides with the complexity of the information and its expanding relationships and relevance, as well as with how the brain organizes sensory input for long-term use. However, Reference Frames may be a simpler model for particular cognitive dimensions, where the information we process is stored in specific patterns that allow for recall, prediction, and problem solving within a matrix of constant learning.(9b) This paper addresses the noisy iterated information that would feed these Reference Frames and higher-level logic modeling at the Receiver and Destination end of Fig. 1. 

With cognitive information processing, continuous probability distributions are perspective dependent and are associated with different value assignments.(10)  Interestingly, quantum level processes, a world of probability, can be stated with a simple definition: a [quantum] process is the passage from one interaction to another. Essentially, things only change in relationship to one another.(11) This is the relativity of information, and it is important to note that it affects the accuracy of replication. Accordingly, exactness in cognitive analytics and replication is not technically possible, although “sufficiency” can and predominantly does prevail with adequate error controls and correction within these unique and complex steps of information processing. Ultimately, our investigative observations, our mental organization and storage of information, and our information parsing leverage only a limited, yet noisy, selection of the information available. That is the point of the process: to shorten the pathway to the solution, if it exists, by using good data and analytics combined with purposeful error reduction. Other keys are to have stored references and to have experience in the relevant data sets or messages.  Essentially, this is the efficient recall of relevant knowledge. 


 Figure 1    The Scientific Method combined with Shannon's original Communication System.(12) At this level we can understand cognitive processes as a rapid series of information sets: messages sent, analyzed, and leveraged as iterations and/or actions. This nonlinear process is primarily informal at the individual message level, yet formal at the higher logic and goal level. In essence, each question likely consists of many smaller, simpler sets of rapidly processed informational messages, with the output being the sum of variable levels of completed iterations.  


Information theory, the evolution of the original Communication System, covers the fundamentals of information transmission and subsequent processing in which the scientific method's protocols reside. A premise of this paper is that there is no fundamental difference between having an idea and acting on it, or transmitting an idea or message to be acted on, as both are noisy signals processed in relative time. Depending on the action, each can be considered within information theory as a component or repetition of the system or its subsystems, as needed to achieve the desired sufficiency of information. Each phase of the communication system represents the information potential. The theory provides the measure and limits of performance of that system, with considerations for stochastic variables, the creation of new information, and system design. The complexity, uniqueness, and digitization difficulties of the cognitive communication process prevent its easy and accurate replication. The communication diagram in figure 1 illustrates the array and role of the primary components involved and how error, known as noise, is an integrated factor throughout the system.  

 

Shannon's original ideas, with performance limits and error correction potential within a communication transmission, wholly apply to the scientific method and its analogs, though in practice from a macro and complex perspective, in that we do not notice the smaller steps of the process. Although not specifically addressed by Communication Theory, the output of meaning, or the relative value of the information transmitted and processed, also depends on these fundamentals. Output is neither correct nor incorrect, but rather data, old and new, that must be understood in context within noise and probability considerations, or that may need further analysis. This river of output information in time is our modified perceptions, recognition, and awareness; it is our experience database. “As is obvious, we need hard data to give tight probabilities.  Nevertheless, we can give reasonable, albeit broad estimates... when analyzing data intuitively.”(13) Interestingly, points in this learning system should be roughly calculable using Game Theory’s premise of utility and probability frequency when applied to the cascade of information-based iteration within the communication system, even with imagined ideas.(14)  Our goal here is not efficiency of the message, but efficiency in the message stream and its value leveraged forward.  Inside the Scientific Method’s phases, new information is constantly being created, whether supportive or otherwise, as probabilities are intuitively understood, compared, or tested against the established knowledgebase. If we need more information, we simply keep gathering data until we have what we need or want. We recognize a person or object; do we want or need to know more? While we may have enough information to understand, with a high degree of accuracy, who or what we are evaluating, it will take even more time and more information processing to move beyond this simple but important threshold.  This concept of usable versus ideal is summed up here: “In many heuristic search problems, the exponential growth of the search time with the problem size forces one to accept a satisfactory answer rather than an optimal one.” Essentially, a fixed timeframe limits the search for the optimal.(13a)  While this is applicable to complex computational problems, we note that it is also relevant to the foundational communication theory in discussion here, in that each signal passing through the communication process will be influenced by noise uniquely.  Efficiency and the general volume of information do not allow time for optimal results.  Fortunately, “sufficient” becomes the ideal in this environment of expeditious multi-channel inputs and is the most correct answer amid this volume/time (v/t) constraint, as only it allows the adequate function of the process itself.  
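
The quoted point about accepting a satisfactory rather than an optimal answer can be sketched as a time-budgeted random search that simply keeps the best result found before a deadline. The objective function, noise level, budget, and search range below are hypothetical choices for illustration only.

```python
import random
import time

def noisy_objective(x):
    """A toy score to maximize; noise enters every evaluation (assumed for illustration)."""
    return -(x - 3.7) ** 2 + random.gauss(0, 0.05)

def satisficing_search(budget_seconds=0.05, seed=42):
    """Probe cheaply until time runs out and return the best-so-far: sufficient, not optimal."""
    rng = random.Random(seed)
    deadline = time.monotonic() + budget_seconds
    best_x, best_score = None, float("-inf")
    while time.monotonic() < deadline:
        x = rng.uniform(-10, 10)          # an imperfect, quickly generated guess
        score = noisy_objective(x)
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score

if __name__ == "__main__":
    x, score = satisficing_search()
    print(f"best found within budget: x={x:.2f}, score={score:.3f}")
```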

The scientific method’s information processing stages are a series of sub-routines within the analytics phase of communication theory, yet these formal processes use the overall communication system in an iterative series as new correlations and relationships, from general to specific, are recognized, discovered, and reevaluated. These iterations and recursions of the system vary in degree of formality, detail, and completeness of analytical functions. See figure 3. The brain is generally a good filter and organizer of specifically evaluated information and can quickly modify a series of thoughts or correlations into a new problem to be reviewed or solved.  The general knowledgebase builds with trial and error using this process as information correlation progresses. With sufficient time and processing the initial problem may be better understood or solved.  In the various iterations, this initial question always plays a relative part in subsequent information analytics until it has been modified into a new problem, in essence an evolution of the information. 

We must consider the whole system, in all its iterations and recursions, for effective application of error mitigation.  Mitigation can initially be proactive, with idealized communication channels chosen based on inherent accuracy, along with specific protocols for their analysis.  Examples include working with firsthand information rather than second- or third-level hearsay.  Another factor may be the type of measurement tools utilized, or perhaps a chosen degree of rounding. The tradeoff is generally reliable information versus unreliable, noisy information.  Simple questions may suffice with lower quality data, while more specific inquiries require improved data processed in a more formal manner. Within this context, a scientific problem is defined, packaged, and disseminated in a transmission or idea. This transmission must then be received and analyzed into meaning within established parameters and tolerances in a series of evolving iterations. Hypotheses are generated and tested as the system repeats itself in varying degrees of formality, complexity, interrelationship, accuracy, and completeness. See Figure 2.  It is here, in this feedback process of increasing returns, that the concept of “Observation”, the first traditional step of the Scientific Method, is found. Thus, observation is part of an ever-present cycle of awareness and thought that creates the problem to be processed within various degrees of formality and completeness. “Probability has often been visualized as a subjective concept more or less in the nature of estimation. Since we [in Game Theory] propose to use it in constructing an individual, numerical estimation of utility, the above view of probability would not serve our purpose.”(15) Thus, we can now visualize the distinction between an informal approach of subjective estimation, in which we continuously and rapidly evaluate and leverage information cognitively, and a more formalized mathematical approach, which helps us build our models and proofs from an external perspective. Figure 2 is both informal and formal.    
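
As a rough illustration of the iterate-and-test loop described here (and in Figure 2), the sketch below revises a hypothesis against successive noisy observations and stops when the revision step falls below a chosen tolerance, i.e. when the result is "sufficient" rather than exact. All names and numeric values are assumed for the example.

```python
import random

def observe(true_value, rng, noise_sd=1.0):
    """A single noisy observation of the quantity under study."""
    return rng.gauss(true_value, noise_sd)

def iterate_method(true_value=42.0, tolerance=0.01, max_rounds=2000, seed=3):
    """Observe, compare with the current hypothesis, and revise it (a running mean),
    stopping once the revision step becomes smaller than the tolerance."""
    rng = random.Random(seed)
    hypothesis = observe(true_value, rng)            # first rough guess
    for round_no in range(1, max_rounds + 1):
        datum = observe(true_value, rng)             # new evidence
        revised = hypothesis + (datum - hypothesis) / (round_no + 1)
        if abs(revised - hypothesis) < tolerance:    # "sufficient" stopping rule
            return revised, round_no
        hypothesis = revised
    return hypothesis, max_rounds

if __name__ == "__main__":
    estimate, rounds = iterate_method()
    print(f"settled on {estimate:.2f} after {rounds} iterations")
```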



Figure 2     The scientific method's primary phases, showing inputs and outputs with iteration and recursion potential within a matrix of noise. The individual message is simple, with rapid iteration based on previous analysis creating more complex informational relationships that can solve larger problems in time.

The scientific method is a dynamic process in which subroutines of the larger effort can vary from very simple to massively complex functions that correlate information and related aspects of a question and discover new relationships, yet each individual message of this larger effort is simpler and easier to process rapidly. The value of information scales with processed information volume; gigabits of processed information have more potential value than single bits.  Here natural stochastic fractals, or self-affine fractals, may be an analog of the scientific method construct that is no longer a specific and obvious format within the greater investigation. By analogy, fractal roughness is the noisy grain of thought, such as awareness of concepts, associations, and details, representing a finite process within unbounded potential.  These repeating subroutines and process iterations of information processing over our neural networks are unique and informative and add to the collective knowledge, thus advancing the problem-solving process in time.  See figures 3 and 4.  The processing can follow a general concept of multitasking, where serial and phased parallel processing of multiple inputs and ideas can interchange as new information is quickly correlated with another information set and cross-referenced with other sets. However, we must be cognizant of our limited ability to focus our attention for problem solving. Multitasking can be thought of as having multiple points of inquiry going, but not all will receive the same degree of attention at the same time. A point must be made that certain macro degrees of multitasking, and distraction, are counterproductive, introducing additional noise, reduced focus, and increased complexity. This potential for reduced efficiency is a main concern in the applied efforts of the scientific method.  

The verification phase of the scientific method is a critical component of the method that reevaluates specifics observed in the previous analytical processes. This particular phase is simply a component of a communication iteration and is also subject to the introduction of new stochastic variables; thus, the complex stochastic process applies to each application and sub-application of the message being analyzed. Note that “hypothesis” is included in the last phase of the figure 1 diagram, as critical thinking/analysis consists of natural fractal, or multifractal, temporally aperiodic iterations of the process analytics; thus, a hypothesis must follow some degree of formally and informally applied prior analytics. Again, it is critical to note that stochastic variables can be introduced anywhere in the system, and they continually challenge the system's effectiveness, its specific reasoning and testing, and its reproducibility. When properly applied, error correction can minimize the negative effects of these noise variables. How correction is applied to the analysis depends on the degree of complexity of the overall problem and the level of analysis needed to sufficiently complete the task, where “sufficient” can be a degree of completeness and accuracy within time constraints.  

 

Reproducibility of the scientific method hinges on error mitigation and the sharing of information, which is control of information and mitigation of noise, and it is a major concern in all aspects of science.  "...researchers cannot control for an unknown variable." Noise in the system must be considered; Information Theory allows for relevant levels of error correction of this noise, provided the problem and its nuances are defined or understood.(16)  Does the application of error mitigation reduce the overall level of information in a positive manner, and is this not the entire point of rendering a communication more effective in light of pervasive stochastic noise? "Every type of constraint, every additional condition imposed on the possible freedom of choice immediately results in a decrease of information."(17) The correct bits of information must be parsed out in the analytical process. "...Although the path of a particular orbit [sphere of activity] is unpredictable, chaotic systems nevertheless possess statistical regularities."(18) These regularities are the meaningful and logical pathways of cognitive deduction and induction formulated on an initial information condition called a problem or idea.  

 

Error mitigation studies and analysis to improve the scientific method need to be focused on these information system aspects and variables. "A fundamental principle of information theory... is that predictability reduces entropy."(19) Reality generally runs independently of our observations and nonlinear imagination, yet our observations can guide our coordinated relative actions within the process to influence reality locally with discovered information and, in some cases, with accurate prediction. The effects of this discovery process can scale with further information sharing and follow-on actions. In reference to this scale: since there is no particular length at which the property of the process disappears, and it can extend over a wide range of length scales, it can be described by a power law.(20)  Figure 3 highlights this nonlogarithmic variable scale in complexity and time.  
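
The quoted principle that predictability reduces entropy can be checked directly with Shannon's formula H = -Σ p log₂ p. The two example distributions below are arbitrary, chosen only to contrast an unpredictable source with a highly predictable one.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

if __name__ == "__main__":
    uncertain   = [0.25, 0.25, 0.25, 0.25]   # nothing is predictable
    predictable = [0.85, 0.05, 0.05, 0.05]   # one outcome dominates
    print(f"uniform outcomes : H = {entropy_bits(uncertain):.3f} bits")
    print(f"skewed outcomes  : H = {entropy_bits(predictable):.3f} bits")
    # The more predictable the source, the lower its entropy, i.e. the less
    # genuine uncertainty each new observation has to remove.
```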

 

With the partial mitigation of noise, uncertainty is reduced, yet the reduction of entropy itself can be considered relatively insignificant for general considerations of idea or message analytics and development within the realm of the Scientific Method. The larger-scale effects, such as general noise management, information correlation, and sharing, are primary. This is where the Scientific Method resides. “There are parallels of information theory and statistical thermodynamics where macroscopic systems structure can be outlined while the microscopic details remain unknown.”(21) Even with these limitations, an improved understanding of noise effects within the Scientific Method is the primary goal, and it is expected to improve process accuracy.  

 

The results of such a continuous communication channel are probability based, with the output dependent on the input, including noise.(22) With cognitive functions and protocols, the various system phases, including the analytical phase, can be configured and regulated to promote error correction within specific temporal and format constraints. This correction can take multiple forms, such as research, protocol enhancement, specific and general analysis, testing, experience, training, monitoring, documentation, and other quality control measures that account for the dynamics of nonlinear and compound geometric fractal component relationships. The temporal and noise constraints on the collection of iterative deductive and inductive cognitive processes are arbitrary limits that coincide with expectations of reasonable or sufficient accuracy within a context of timely and affordable research, whether formal or informal.  

 

New research in the scientific method’s analytical phase attempts to further the understanding of the general cognitive process, including the phase transitions of information such as common recognition and emergence. Induction is the study of how knowledge is modified through its use.(23)  Experience is that modified knowledge. “Perception and recognition do not appear to be unitary phenomena but are manifest in many guises. This problem is one of the core issues in cognitive neuroscience.”(24) Coupled with our new understanding of noise within the communication system, we must attempt to understand, and take into account, the methodology’s limits, including the quality of the information and our ability to properly correlate that information and assess its relative value.  
 
In the analytical phase, “Emergent cognitive structures appear to arise abruptly from prior activity of the system. The resulting structures reorganize the cognitive system’s interaction with the environment. Thus, the emergence of structure is a radical change in functionality, rather than quantitative improvements. As such, emergence leaves gaps that cognitive science must find a way to bridge.” This emergence, or improvement of information, is a driving force in analysis within the scientific method.(25) Emergence, a complex version of recognition, often happens in a moment, yet not all the information available on a subject needs to be analyzed in order to understand what an item or idea is (recognition), or how it fully correlates within the solution of a problem.(26) Yet a relevant connection, or a very important set of connections within the information, has been made. This improved knowledge supports the investigative process and in turn creates new questions. The organization of information, from small correlative ideas to ground-breaking epiphanies, likely follows power laws relating frequency to complexity.  The human brain has the capacity to recognize items when only very little information is available, especially in context.  This could be just a moment of fuzzy input from a single sensor, a sound, a pattern, or a blurry bit of an outline of a person.  Again, we do not need all the available information, or even a lot of it; small portions of information may work just fine in our comprehension efforts.  This relatively low information threshold speeds up human real-time cognitive processing considerably. This is our normal state of operation, with variable-quality information feeds supporting our continuous informal activities. 
 
With proper analysis, only a fraction of the available information may be needed to discover or recognize a fact on a formal or informal level. Simple recognition tasks, such as recognizing [matching] a face, can be accomplished with algorithms, yet in the real world we also find the human application of recognition to contain a very high degree of complexity, due to a rich field of associated contextual information: what is a face, who would be expected at this time and place, and how the wide range of noise is understood in context. Fortunately for human cognition, much of the core of the recognition [and emergence] process is managed by the automated subconscious via our knowledgebase of common and easily recognized associations. “The Russian physiologist Nikolai Bernstein (1896-1966) proposed an early solution (to the problem of infinite degrees of complexity).  One of his chief insights was to define the problem of coordinated action, which is analogous to the cognitive process at issue here. The problem is that of mastering the many redundant degrees of freedom in a movement; that is, of reducing the number of independent variables to be controlled. For Bernstein, the large number of potential degrees of freedom precluded the possibility that each is controlled individually at every point in time.”(27) The same can be said of how the brain processes information. How can recognition of a face or fingerprint impression be accomplished within a context of excess and variable information?  Recognition is based on the evaluation of information and informational relationships (linkages) via default hierarchies and relevancy. This process can inflate with a cascade of information to a point where we have sufficient information for recognition on a basic level, or emergence on a larger scale. That is, we understand something simple or complex in context with other information.(28)  In the case of a face, we may already have patterns in memory to compare against; thus we do not need to learn a face by complex analysis, just match it. Recognition is a fundamental cognitive inductive process that we use constantly to understand the world around us. We leverage its organizing and quick parsing potential to efficiently forward our problem solving and situational awareness. “Induction is the inferential processes that expand knowledge in the face of uncertainty.”(29)  “It is worth noting that the issue of constraints arises not only with respect to inductive inference but with respect to deductive inference as well. Deduction is typically distinguished from induction by the fact that only the former is the truth of an inference guaranteed by the truth of the premise on which it is based…”(30)  
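
A toy version of recognition from partial information, in the spirit of the face-matching remark above, might look like the following sketch: a noisy, incomplete observation is scored against stored templates and accepted only when the agreement clears a threshold. The bit-string "memory", the '?' convention for missing data, and the threshold are all assumptions made for the example.

```python
def similarity(observed, template):
    """Fraction of positions where a partial observation agrees with a template.
    Positions marked '?' are missing information and are simply skipped."""
    pairs = [(o, t) for o, t in zip(observed, template) if o != "?"]
    if not pairs:
        return 0.0
    return sum(o == t for o, t in pairs) / len(pairs)

def recognize(observed, memory, threshold=0.8):
    """Return the best-matching stored pattern if it clears the threshold."""
    name, score = max(((n, similarity(observed, t)) for n, t in memory.items()),
                      key=lambda item: item[1])
    return (name, score) if score >= threshold else (None, score)

if __name__ == "__main__":
    memory = {"alice": "1011001110", "bob": "0100110101", "carol": "1110000011"}
    glimpse = "10?10011??"               # a noisy, incomplete observation
    print(recognize(glimpse, memory))    # best match on the visible positions only
```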

 “Emergent structure is not limited to the initial insight into a problem. Even when an initial strategy is successful, continued experience with a problem can lead to more efficient strategies for arriving at correct solutions. This type of strategy change reflects a restructuring of the representation of the problem, a fundamentally different way of approaching the problem, rather than a quantitative improvement in the existing strategy.”(31) Iteration of the scientific method (figure 3) allows for continued investigation into a complex problem. This large amount of analyzed data contributes to general recognition and emergence and builds our knowledgebase.  This process, generally, is not just a single simple message of limited value to be analyzed, but a series of small related and interrelated messages, in the form of new relevant and related questions, that can network into the solution of a specific problem, much as in Metcalfe’s Law, where growth [of data] scales as the square of the nodes less the nodes themselves, or n(n−1)/2, though it is perhaps better stated with Odlyzko and Tilly’s n log(n) for the overall data structure, combined with Game Theory’s decision-based calculations within a notion of utility.(32) This utility would be molded by the evolving dynamic knowledgebase. 
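
The two scaling laws mentioned here, Metcalfe's n(n−1)/2 and Odlyzko and Tilly's n log(n), can be compared directly; the short sketch below tabulates both for a few network sizes (the sizes themselves are arbitrary choices for illustration).

```python
import math

def metcalfe(n):
    """Pairwise connections among n nodes: n(n-1)/2."""
    return n * (n - 1) // 2

def odlyzko_tilly(n):
    """More conservative network-value estimate: n * log(n)."""
    return n * math.log(n) if n > 1 else 0.0

if __name__ == "__main__":
    print(f"{'n':>6} {'n(n-1)/2':>12} {'n log n':>12}")
    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} {metcalfe(n):>12} {odlyzko_tilly(n):>12.0f}")
```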

This network of interrelated information of varying value will have many dead ends, or clade-like branches, where no further analytics are applied at that time but which may prove useful in the future as the network relationships continue to be investigated within the changing compilation of information. The value of the network lies not just in its unique connections in time, but in its applied speed, its holistic nature, and its partnerships, the square of the network. At a particular point on the network, the analytical output can be the halting of the process or message in favor of a new or modified idea on a branch of the network, or it can build new extensions that leverage the strengths of the process and of the network itself. This is a cognitive advantage of the boundless human imagination operating holistically and, in many cases, expertly, as compared to artificial intelligence running fixed or variable algorithms. This network functions under limitations of information availability, analytical ability, time, and chance, all of which manifest as discovery issues within a matrix of noise, as noted in fig. 1. Fig. 3 shows the process that would operate within such a network of potentially available and interrelated information, building our knowledgebase a piece at a time. Key points are that informal applications of the method vary greatly in level of completion, speed, and value. The cumulative effects of this iterative process lead us to more meaningful correlations and understanding. 


Figure 3.   Various sequential iterations of the scientific method applied over time. The degree of complexity of the specific iterations and recursions is illustrated here simply by the varying size of each sub-process within an overarching goal that builds our knowledgebase. Each iteration provides a degree of information, cross-shared, networked, and relevant to previous analytics, and each recursion moves the overall system forward. The various passes through the communication process (fig. 1) vary in degree of complexity and in required time, here represented by the size of the action. This advancement of knowledge acts as a catalyst for further recognition/emergence and solutions. Some data, perhaps the bulk of what we analyze, are easily recognizable and quickly processed, while other data sets/messages take longer.  

Newly correlated thoughts build on this existing knowledge, and ideas/hypotheses generate new problems with new input, with the resulting output feeding the overall system. Essentially, complex ideas and emergent knowledge are the cumulative memorized effects of numerous informal and formal problems, seemingly processed in parallel but primarily processed in a rapid series of small parts with reference to previous processing completeness and value, whereas the various sensory inputs are parallel and must be processed in different areas of the brain at different degrees of consciousness. Here our brain can parse the value of our sensory inputs by looking for values outside the normal range, or values considerably different than expected when compared to previous relevant inputs. This discretion allows a high level of rapid processing and effective sensory multi-tasking. Fragments of information by themselves are like single molecules of water and have limited value to an idea. However, like a dynamic flow of water, the overall collection and processing of information ultimately provides relationships, value, and meaning, and contributes to new insights via emergence.       

 
               The accuracy of an analytical recognition product depends on the total information available and how that information is understood within the relevant noise matrix.  Often the information analysis is simply a rough, informal probability estimate made in a very brief time.  These rapid series of cognitive events build the knowledgebase around the goal of the investigation.  This is necessary to speed up the cognitive process in real time.  Varying space and time connections within a functioning network can bridge information gaps at high speed as cognition is supplemented with modern communications linking a higher density of information, with light-speed being the limiting factor.  Thus, information network development can be rapid on a local scale, yet slower on a cosmic scale as time lags increase and efficiency decreases on network connections.
 
In further regard to scaling, there is information that is more relevant locally. It may be a relative location estimate such as here or there, up versus down, or left side versus right side, which has no intrinsic value outside our relative interaction with the universe. We should also consider syntax, such as signs, and other forms of cultural relativity when evaluating noise in our analytics.  Signs or words are used as placeholders for other things, and the reader needs the decryption key to make sense of them. Take the word "function": you must not only understand the written language, you must also understand the concept the word carries in this context, since "function" can have many different applications; only then, when you see its sign, the word "function", can you understand its meaning in context. Language itself is likely one of the most common contributors of noise in international information sharing and in sharing between fields of expertise.  
 
A general increase of noise in a message can be due simply to the type or mode of messaging. As noted with syntax issues, verbal communication is generally noisier than writing, as certain timely error correction capacity is lost in transmission. Interpretation can be improved by utilizing precise syntax, as compared to speech variables including accent, enunciation, loudness, clarity, and comprehension. This includes message reception issues such as hearing ability, ambient background noise, and comprehension. When a message is repeated, its duplication accuracy is also highly dependent on the mode/means, especially with a human proxy rather than a recorded message. We all understand and remember imperfectly and differently, and incompleteness is noted as a form of noise in fig. 1. Some information we do not commit to memory, while other people may... albeit from a necessarily different perspective, or perhaps even fabricated via intention or guesswork. It takes a quality investigation to help make relative sense of the information at hand and of the information discovered. Technical language helps reduce some of this inherent noise.  The scientific community has made considerable progress in fine-tuning technical language, almost to the level that a layman may not understand simple papers written in a technical language, yet could grasp the core concepts of a paper rewritten in a more common, less accurate, and less efficient syntax.
 
            This rather rare and unnatural technical level of communication and analytics is not common in our everyday experience and stands in inverse relation to the theory of the Bayesian brain, a general framework proposing that the brain makes probabilistic inferences about the world based on an internal model of rapid best guesses that interprets our changing interaction with the world, specifically including information. In line with Bayesian statistics, which quantify the probability of an event based on relevant information gleaned from prior experience, the brain does not wait for sensory information to drive cognition; it is always constructing hypotheses regarding its interactions and collected data, including filling in missing data and forming predictions.(33) It is important to note that with normal cognitive functions we interact with averages, or collections of information, rather than single bits.  This type of processing increases speed and efficiency and allows us to navigate the world with sufficient accuracy, and we can slow down and increase accuracy when needed for more formal and specialized tasks. There are also parallels to other physical processes: “… we interact with averages, such as with heat, rather than with an individual atom.”(34) Generally, ‘complexity’ will need to be simplified into meaningful components and trends to be analyzed effectively.  
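
A minimal sketch of the Bayesian updating that the Bayesian-brain framework describes: a prior belief over two hypotheses is revised by successive noisy glimpses via Bayes' rule. The hypotheses, likelihood values, and the face/not-face framing are hypothetical, chosen only to echo the recognition examples used elsewhere in this outline.

```python
def bayes_update(prior, likelihoods):
    """Posterior over hypotheses after one piece of evidence.
    `prior` and `likelihoods` are dicts keyed by hypothesis name."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

if __name__ == "__main__":
    # Hypothetical example: is the blurred shape a face or not?
    belief = {"face": 0.5, "not_face": 0.5}
    # Each glimpse reports how likely that fragment is under each hypothesis.
    glimpses = [
        {"face": 0.8, "not_face": 0.3},   # rough oval outline
        {"face": 0.7, "not_face": 0.4},   # two darker patches (eyes?)
        {"face": 0.9, "not_face": 0.2},   # symmetric shading
    ]
    for likelihood in glimpses:
        belief = bayes_update(belief, likelihood)
        print({h: round(p, 3) for h, p in belief.items()})
```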
 
Within this process, we understand that experience is necessary to understand information, and to understand that information in context with other information, so that its value to problem solving can be leveraged, and leveraged in a timely manner. This “context” is also a general understanding of information’s value, such as through probability or relationships. The accumulation of experience-based memory, including information gathered from other minor, issue-related recognitions, non-recognitions, and nulls, further promotes efficiency in the process by keeping the analytics focused.  Increasing positive returns on our understanding of analyzed information drive the recognition process specifically and emergence in general. These returns are the information understood contextually, and meaningfully, within multifarious constraints that could otherwise hinder analytical results. The “eureka moment”, the moment of positive recognition, is founded on this relationship and on our background cognitive efforts superimposed and integrated with current thought. This special moment seems to be a sort of knowledge-centric exceptional point, where a critical phase transition has happened within the information, including its context, its evaluation, and its comprehension from a personal knowledgebase perspective. Interestingly, recognition happens imperfectly, yet rather efficiently, as the lack of a need for perfection saves energy.  According to Henri Poincaré, “Discovery consists precisely in not constructing useless combinations, but in constructing those that are useful, which are an infinitely small minority.  Discovery is discernment, selection.”(34a) Poincaré was referring to mathematical discovery, yet we see that the same process is at work here.  A high level of information filtering, resulting from our investigative process iterations, allows the potential of discovery to occur if our knowledgebase is sufficient.  Discovery, or the “eureka moment”, is recognized differently by each person who may be working the problem, as non-specificity (uniqueness of process) is inherent in the method.  
 
Cognitive multitasking is the chronological layering of multiple lines of inquiry.(35)  When the task at hand is larger and more complex, the relevant idea generation is called emergence. This emergence, or phase transition of information, frequently punctuates our holistic, methodical processing of information, allowing new directions and strategies in our analytics, with a new and valuable output priming the next iteration. A key to understanding the series-processing model is to understand the various states of informal problem processing, in conjunction with the completeness of those problems and a notional consideration of efficiency in those tasks.
 

If consciousness and its complex recognition process are not computable, and uncertainty is always, to some degree, pervasive and our certainty incomplete, then ultimately we must focus on the simple task of maintaining sufficient accuracy and sufficient efficiency in our results via applied error reduction and the building of our knowledgebase. Claude Shannon’s original idea shows how the complex cognitive process can generally fit into the highly specific functions of communication theory, and it is now thought that this process is rather simple despite the complex appearance of the outcomes: “…there can be vastly more to the behavior of a system than one could ever foresee just by looking at its underlying rules.”(36) Understanding the introduction of error, error mitigation, and the comprehension of frequent formal and informal iterations of the process are the key. Specifically, scientific inquiry is purposed to reduce uncertainty and organize information in context, or "measure the amount of information by the amount of change of uncertainty”.(37) Further development of these concepts will enhance the value and usefulness of the scientific method.  

 

Complexity is the gap that has traditionally kept “human thought and relative information value” out of Information Theory. Claude Shannon and subsequent researchers purposefully excluded cognitive processes from the mathematical aspect of communication theory to ensure the theory was not overcome by the complexity and nuances common to these processes. “…we are in no position to investigate the process of thought, and we cannot, for the moment, introduce into our theory any element involving the human value of the information. This elimination of the human element is a very serious limitation, but this is the price we have so far had to pay for being able to set up this body of scientific knowledge.”(38) There is another aspect of this complexity that effectively hides information during processing. In the brain, our neural activity can be monitored, yet the relative information itself has little value until it is shared in context and free to interact with other information. The information, in thought, can be considered potential change when unshared, and change when shared.  Accordingly, we ask the question: what is the value of the information? It must be a relative value, unique to the interpreter, a value that produces work [positive and negative feedback] in the process.  Furthermore, it is important to understand that a single bit can be an icon or tag for an infinite amount of other information.  

 

The practical aspects of controlling and understanding this complexity to any degree of accuracy are too demanding, or computationally impossible, yet the need to understand the effects of stochastic variables on the understanding, testing, and replication of science that creates value, specifically within the scientific method, is too great to ignore. Research into, and integration of, complexity, chaos, randomness, and natural fractal models can further benefit this conceptual step into Information Theory. The research and integration of fractals, power laws, and complexity into our basic understanding of Information Theory and the Scientific Method is expected to further our insights into the sources of noise and the mitigation of noise.  Self-affine fractal phase transitions may hold structural insight into the multitude of unique iterations within, and of, the communication system’s messages, where relevant chosen insights and discoveries are run out in various degrees of depth and breadth as needed to advance the goal of the inquiry or message. 

A key for method development progress will be to improve our knowledge of the difference between the "relevant contextual value of information" and some other intrinsic mathematical value, as the value would be relevant to one person to some degree and non-relevant to another, but not irrelevant with respect to the process goals. Our limitation of inquiry is evident in how successfully we deal with the noise, or uniqueness, of the process and how much experience we have relevant to the topics and the system aspects.  Here is some insight: "Chaotic systems also possess statistical regularities. Two orbits with slightly different initial conditions will follow very different trajectories, however, histograms constructed from these two trajectories will be very similar, and thus their average properties will also be very similar. A chaotic dynamical system thus combines elements of order and disorder, predictability, and unpredictability."(39) Our cognitive efforts within the scientific method will also have rough, recognizable patterns. These patterns follow the basic premise of problem solving, yet are unique in their details, and "…that information is always relative to a precise question and to prior information".(40)

Running error correction in everyday noisy speech is most noticeable in people with mild hearing loss, who cannot hear well enough to run the normal real-time error correction that fills in the small gaps of a noisy message. Hearing loss introduces several types of noise that can effectively obscure even simple messages, and the messages must repeatedly be re-sent: "What did you say?" The same low-level error correction appears in reading, where words can be re-read and misspelled words are often, and very easily, corrected by the reader without pause. Some reading experiments have modified every single word of a paragraph with slight spelling errors, yet a person can still read the message easily, because not all of the potential information is needed to understand it in context. This redundancy is a primary premise of communication theory as well as a core component of cognitive recognition and emergence.
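The toy script below (my own construction, not the cited research) makes the point concrete: every word of a short message is perturbed by swapping two interior letters, yet a simple decoder recovers the original by matching each garbled word against a known vocabulary, much as a reader's knowledgebase does in real time.

# Toy demonstration of redundancy-based error correction: perturb each word by
# swapping two adjacent interior letters, then recover it by matching against a
# small vocabulary using (first letter, last letter, sorted interior letters).
import random

VOCAB = ["running", "error", "correction", "fills", "small", "gaps",
         "of", "noisy", "messages", "in", "speech"]

def perturb(word: str) -> str:
    """Swap two adjacent interior letters, if the word is long enough."""
    if len(word) < 4:
        return word
    i = random.randrange(1, len(word) - 2)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def signature(word: str) -> tuple:
    return (word[0], word[-1], "".join(sorted(word)))

LOOKUP = {signature(w): w for w in VOCAB}

sentence = "running error correction fills small gaps of noisy messages"
garbled = " ".join(perturb(w) for w in sentence.split())
decoded = " ".join(LOOKUP.get(signature(w), w) for w in garbled.split())

print("garbled:", garbled)
print("decoded:", decoded)   # matches the original sentence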

Figure 4 illustrates the basic steps of the system's iterations and recursions, and their complexity, in relation to time. Plotted as a graph, effort over time appears as a variable wave, a fractal. Each step is unique in its information and execution, yet there can be many ways to find a correct answer within the constraints of noise, since both positive and negative results move the process forward in focus and organization. In addition, many other cognitive lines of inquiry are chronologically superimposed on this model, hence the holistic and networked nature of cognitive problem solving within a framework of desired focus and specificity. These efforts blend to varying degrees through information sharing and awareness, yet as we prioritize our informational goals in real time, we parse the data and the noise to benefit the goal or line of inquiry of the moment. Eureka epiphanies and points of recognition, however significant, are the solutions of continually evolving lines of inquiry that had remained unsolved but that we continued to evaluate and contemplate, at varying degrees of formality, over longer periods of time. New or improved information, new perspectives, and new questions can provide valuable information and correlations to older lines of investigation. Regardless of how much art we apply to our problem-solving process, we must remember that process creativity also requires the recognition of missing information; that is, we must understand the gaps in our knowledge relevant to the task. This helps direct the sub-inquiries of the investigation.

 

Underlying all this cognitive processing are countless successive loops of the scientific method outlined in Figure 3, each varying in degree of formality, focus, complexity, and completeness, with the goal vector shifting through time as our informational goals and priorities change, more information is analyzed, and hypotheses are drawn. In this sense, messages are broken down into very small pieces to be processed, and it is our knowledgebase that benefits and allows us to answer complex questions.

Figure 4.  A graph of what a sequence of focused system iterations, recursions, and outputs may look like. Outputs provide additional new information and inputs as the system's steps are cognitively processed with respect to information organization and time. In the chart, new solutions, ideas, recognitions, or other informational connections do not require all the available information, so no hard spikes in effort appear along the time axis; each is simply another processing step in time, with feedback driving the system and knowledgebase forward in a series of rapid, informal steps.
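Since the figure itself is not reproduced in this text, the short sketch below (my own construction, not the author's plot) generates and crudely renders the kind of cumulative-feedback trace the caption describes: many small, informal iterations whose positive and negative feedback accumulate into a rough, variable wave rather than discrete spikes of effort.

# Sketch of a cumulative-feedback trace: each iteration adds a small positive
# or negative feedback step, and the running total over time forms a rough,
# self-affine "variable wave".
import random

random.seed(1)
organization = 0.0
trace = []
for step in range(200):                      # 200 small, informal iterations
    feedback = random.gauss(0.05, 1.0)       # mostly small gains, some setbacks
    organization += feedback                 # feedback drives the system forward
    trace.append(organization)

# Crude text rendering of the wave, one row per 10th step.
low, high = min(trace), max(trace)
for i in range(0, len(trace), 10):
    width = int(40 * (trace[i] - low) / (high - low))
    print(f"step {i:3d} " + "#" * width)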

          Information Theory is the underlying principle by which we build our knowledgebase, yet in its highly simplified state it fails to illuminate the detail needed to properly understand the complexity, randomness, and value of cognitive functions. In practice, it is reasonable to assume that research into cognitive functions is easier to approach at the synaptic level, since that better aligns with the focus of C. E. Shannon's original mathematical theory of communication. That approach, however, is insufficient to cover the details outlined in this paper for the purposes of problem solving and error mitigation; we note only that cognitive processes do operate cumulatively in very small sets.

To date, our treatment of the scientific method and of the effects of noise upon it has remained incomplete and ill-defined, hindering our drive for innovation and efficiency. However, realigning the core of the scientific method with new discoveries in the cognitive and information sciences will modernize our system of scientific discovery, perhaps taking us into a new dimension of understanding and innovation that includes artificial intelligence, machine learning, and human communication. Being conscious is only a small part of this equation; what matters is how well we understand, manage, and share information. Hence a conscious computer, if possible, is simply likely to be… a more valuable computer. The wide-angle, wide-band sensory input of human consciousness constantly monitors, assesses, and parses information as our awareness, ideas, goals, needs, and priorities track forward in time, utilizing positive and negative feedback from the iterative process, yet it must do so within a field of noise, complexity, infinities, and randomness.

Craig A. Coppock 20160326  

Updated: 20240108

 

Note 1:

An outstanding question is the precise definition, role, degree, and relationship of randomness in information analysis, and its relationship to the concept of information itself. Randomness is a key to understanding information and any relative value we may assign to it. Just as there are different sets of 'infinity', there are different sets, or, following Gregory Chaitin, degrees of 'random': sufficient equivalencies, practical equivalencies, and qualities that, better understood in context, would allow us to improve our research and knowledge of the concept of 'information', or to change our perception of randomness as new information is evaluated. A computer-generated random string, sufficiently extended [toward infinity], may be impossible to differentiate from a naturally random string in any reasonable amount of time.(41) Andrei Kolmogorov showed that a truly random string is, in effect, defined by itself: it has no description shorter than the string, no simple program that generates it. Where, then, does a degree of randomness properly fit? There is a 'perfect' random, a 'more' random, an 'effective' random, yet how can we properly apply 'sufficiently' random? A few bits of information meaningful to one person can be the equivalent of a random string to another, even if we can calculate a probability of its potential randomness within any necessary scope of compressibility. With regard to nature's randomness, according to S. Wolfram, "what we mean is that none of our standard methods of analysis have succeeded in finding regularities in it."(42) Wolfram's definition of random would be "…whenever there is essentially no simple program that can succeed in detecting regularities in it."(43) Ultimately, it seems a good definition and understanding of information and its components is our modern analogue of gravity: we see it everywhere yet struggle to pin it down.
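One practical, if crude, handle on "sufficiently random" follows from the compressibility mentioned above. Kolmogorov complexity is uncomputable, but a general-purpose compressor gives an upper bound on the length of a program that reproduces a string; the sketch below (a heuristic proxy of my own choosing, not a test from the source) shows a regular string collapsing under compression while computer-generated "random" bytes barely compress at all, echoing Wolfram's working definition in (42) and (43).

# Compression ratio as a rough proxy for randomness: a ratio near 1.0 means the
# compressor found essentially no regularities to exploit.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; near 1.0 suggests randomness."""
    return len(zlib.compress(data, 9)) / len(data)

regular = b"abc" * 10_000                 # obvious regularity: a tiny program reproduces it
pseudo_random = os.urandom(30_000)        # computer-generated "random" bytes

print("regular string ratio:      ", round(compression_ratio(regular), 3))
print("pseudo-random string ratio:", round(compression_ratio(pseudo_random), 3))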

 

Note 2: 

The difficulty of defining and organizing a central model of information, apart from the processing of messages and signals, has analogies in the quantum world. "Fortunately, there are a wealth of interpretations of quantum mechanics, and almost all of them have to do with what happens to the wave function upon measurement. Take a particle's position. Before measurement, we can only talk in terms of the probabilities of, say, finding the particle somewhere. Upon measurement, the particle assumes a definite location. In the Copenhagen interpretation, measurement causes the wave function to collapse, and we cannot talk of properties, such as a particle's position, before collapse. Some physicists view the Copenhagen interpretation as an argument that properties are not real until measured.  This form of "anti-realism" was anathema to Einstein, as it is to some quantum physicists today. And so is the notion of a measurement causing the collapse of the wave function, particularly because the Copenhagen interpretation is unclear about exactly what constitutes a measurement."(44) This is analogous to the relative value, or potential, of information states before specific relationships are recognized, and to the more complex eureka moments that create valuable new insights from bits of collected and analyzed information. Regardless of the probability of an information set arriving at a particular point in space-time, its value is determined by the analysis performed and its relevance to the decoder.

 

Note 3:

            Specific information value is a non-homogeneous negentropy in an unwise and uncaring universe, a universe that knows only such things as the energy of a photon or a flipping coin, not a coin with differentiated sides that builds [us] a frame of probability and meaning. It is that little bit of organized energy, resisting the force of randomization and distribution, that allows us to understand. This process creates a unique yet incomplete archive of unfolding events, one that has value only to those who care to look. How this trivial micro-component of informational energy is converted is relative: it can be lost, transferred, or transformed as heat, or further collected, conserved, and accumulated in common low-entropy, loose relationships, and perhaps within ideas and epiphanies, which resemble a phase transition in which rarefied information synchronizes coherently across a network through the work of a powered, non-equilibrium knowledgebase augmented by the force-multiplier of language. It is said we are insignificant to our galaxy or universe, yet we ceaselessly push to accumulate knowledge for our daily use and to satisfy our insatiable curiosity. Entropy, it seems, is a process intent on universal destruction, yet with very faint highlights, small inset placeholders of energy, which we call information value.

 

What energy relationship is required to sustain this organization, given our need to effectively parse value out of a constant stream of informational inputs? It seems that leveraging information to its maximum would require an infinite amount of energy. If so, information's full value will forever go unrealized, and we also know that we never need perfect or perfectly shared information: informational "fragments" are not only acceptable but normal, and efficient enough for our familiar real-time streaming and organizational operations.

 

--------------

References:

1. Merriam-Webster Online Dictionary; “Scientific Method” https://www.merriam-webster.com/dictionary/scientific%20method   2017.

2. Merriam-Webster Online Dictionary; "Information Theory" https://www.merriam-webster.com/dictionary/information%20theory  2017.

3. Sonja J. Prohaska (University of Leipzig), Peter F. Stadler (University of Leipzig), Manfred Laubichler (Arizona State University), "How and What Does a Biological System Compute?" in The Energetics of Computing in Life and Machines, The Santa Fe Institute Press, 2019, p. 169

4. Craig Coppock, Principle of Non-Specificity within general cognitive applications: 2011 “A specific cognitive information set can never be utilized more than once to effect solutions.” Applicable to general scientific method applications/Effects reproducibility of processes to various degrees.  Analogous to C. E. Shannon’s “noise” in communication theory, academia.edu, researchgate.com

5. Claude E. Shannon, Warren Weaver, The Mathematical Theory of Communication, Equivocation of Channel Capacity, Chap 2, 12.

6. Arthur S. Eddington, The Nature of the Physical World, Macmillan, NY 1930, p. 154

7. Craig A. Coppock, Recursive Processes in Forensic Pattern Comparison, 2011 http://fingerprintindividualization.blogspot.com

8. Stephen Wolfram; A New Kind of Science, Wolfram Media, IL 2002, p.622

9. Stephen Wolfram; A New Kind of Science, Wolfram Media, IL 2002, p.31

9a. Amos Golan, Foundations of Info-Metrics, Modeling, Inference, and Imperfect Information, Oxford University Press, NY 2018

9b. Jeff Hawkins, A Thousand Brains: A New Theory of Intelligence, Basic Books, NY 2021

10. Tom Carter, An Introduction to Information Theory and Entropy, Complex Systems Summer School, Santa Fe, 6-2011, p. 25

11. Carlo Rovelli, The Journey to Quantum Gravity, 2017, Penguin UK, p. 116, 158

12. Claude E. Shannon, Warren Weaver, The Mathematical Theory of Communication, 1949 Illini Books Ed 1963 Chap. 2.1

13. Pradeep Mutalik, When Probability Meets Real Life; Insights Puzzle: Solutions March 2, 2018

13a. Tad Hogg, Paper The Dynamics of Complex Computational Systems; Complexity, Entropy & The Physics of Information, 2023 Vol. 1 SFI Press, Santa Fe p.317

14. John von Neumann, Oskar Morgenstern, Theory of Games and Economic Behavior, 6th ed, 2004 Princeton University Press, New Jersey, p18-20

15. John von Neumann, Oskar Morgenstern, Theory of Games and Economic Behavior, 6th ed, 2004 Princeton University Press, New Jersey, p18-19

16. Alyssa Ward, Thomas O. Baldwin, Parker B. Antin, Silver Lining to Reproducibility, Nature 532, issue 7598

17.  Leon Brillouin, Science and Information Theory, 2nd ed, 2013 Dover Books on Physics, NY, chap.1, 6

18.  John Daugman, Information Theory and the IrisCode, IEEE Transactions on Information Forensics and Security, Vol. 11, No. 2

19. Phillip K Poon, Outline of "Elements of Information Theory", 2009, Definition of a communication channel, p. 2 pdf

20. Yaneer Bar-Yam, Dynamics of Complex Systems; Studies in Nonlinearity.  Addison-Wesley  Reading MA, 1997 p. 267

21. Leon Brillouin, Scientific Uncertainty and Information.  Academic Press, NY 1964 p11

22. Phillip K Poon, Outline of "Elements of Information Theory" 2009 Introduction

23. Holland, J.; Holyoak, K.; Nisbett, R.; Thagard, P.; Induction: Processes of Inference, Learning, and Discovery. MIT, Cambridge 1986, p. 5

24. Gazzaniga, M.; Ivry, R.; Mangun, G.: Cognitive Neuroscience: The Biology of the Mind. 2nd Ed., W.W. Norton & Co., New York 2002, p. 193

25. Damian G. Stephen, James A. Dixon, The Self-Organization of Insight: Entropy and Power Laws in Problem Solving; Center for the Ecological Study of Perception and Action, Haskins Laboratories, University of Connecticut, Introduction, p. 1

26. Craig A. Coppock, Complexity of Recognition; Inductive and Inferential Processes in Forensic Science, 2008, academia.edu, http://fingerprintindividualization.blogspot.com, ResearchGate.com

27. J. A. Scott Kelso: Dynamic Patterns: The Self-Organization of Brain and Behavior. Bradford Books, Cambridge 1999, p. 38

28. Craig A. Coppock, Complexity of Recognition; Inductive and Inferential Processes in Forensic Science, 2008, p. 1, http://fingerprintindividualization.blogspot.com, academia.edu, ResearchGate.com

29. Holland, J.; Holyoak, K.; Nisbett, R.; Thagard, P.; Induction: Processes of Inference, Learning, and Discovery. MIT, Cambridge 1986, p. 1

30. Holland, J.; Holyoak, K.; Nisbett, R.; Thagard, P.; Induction: Processes of Inference, Learning, and Discovery. MIT, Cambridge 1986, p. 4

31. Damian G. Stephen, James A. Dixon, The Self-Organization of Insight: Entropy and Power Laws in Problem Solving, Section 2: Emergent Structure in Problem Solving; Center for the Ecological Study of Perception and Action, Haskins Laboratories, University of Connecticut

32. Andrew Odlyzko and Benjamin Tilly, A Refutation of Metcalfe's Law and a Better Estimate for the Value of Networks and Network Interconnections, Digital Technology Center, University of Minnesota, 2005

33. Jordana Cepelewicz, To Make Sense of the Present, Brains May Predict the Future, Quanta Magazine, Simons Foundation Pub, Jul 10, 2018

34. Carlo Rovelli, The Journey to Quantum Gravity, 2017, Penguin UK, p. 222

34a. Henri Poincaré, Science and Method, Dover Publications, 1st English Translation Ed., USA, p. 51

35. Craig A. Coppock, Complexity of Recognition; Inductive and Inferential Processes in Forensic Science, 2008, p. 18, http://fingerprintindividualization.blogspot.com

36. Stephen Wolfram; A New Kind of Science, Wolfram Media, IL 2002 p. 751

37. Phillip K Poon, Outline of Elements of Information Theory, Definition of a communication channel. 2009 P. 44 pdf

38. Claude E. Shannon, Warren Weaver, The Mathematical Theory of Communication 1949 Illini Books Ed 1963, p.x

39. David P. Feldman, Chaos and Fractals, An Elementary Introduction. Oxford University Press 2012 p.152

40. Phillip K Poon, Outline of Elements of Information Theory, 2009 Definition of a communication channel. P. 9 pdf

41. Stephen Wolfram; A New Kind of Science, Wolfram Media, IL 2002

42. Stephen Wolfram; A New Kind of Science, Wolfram Media, IL 2002, p.316

43. Stephen Wolfram; A New Kind of Science, Wolfram Media, IL 2002, p.556

44. Anil Ananthaswamy, New Quantum Paradox Clarifies Where Our Views of Reality Go Wrong, Quanta Magazine, Simons Foundation Pub, Dec 3, 2018.