Monday, December 12, 2011

Fractals and Cognitive Processes: Primary Focus on Forensic Comparison Science

Fractals and Cognitive Processes in Forensic Comparison Science

 (A Collection of Investigative Notes)

 

     I would like to suggest that there are many non-obvious, as yet undiscovered fractal aspects in forensics.  Examples of fractal structure in the comparison sciences may include certain surface topographies, temporal aspects, measurement, and workflows.

 

An interesting Quote:

“The concept of a fractal structure, which lacks a characteristic length scale, can be extended to the analysis of complex “temporal processes”. However, a challenge in detecting and quantifying self-similar scaling in complex time series is the following: Although time series are usually plotted on a 2-dimensional surface, a time series actually involves two different physical variables. For example, in Figure 1, the horizontal axis represents “time,” while the vertical axis represents the value of the variable that changes over time (in this case, heart rate). These two axes have independent physical units, minutes and beats/minute, respectively. (Even in cases where the two axes of a time series have the same units, their intrinsic physical meaning is still different.) This situation is different from that of geometrical curves (such as coastlines and mountain ranges [and Edgeoscopy]) embedded in a 2-dimensional plane, where both axes represent the same physical variable. To determine if a 2-dimensional curve is self-similar, we can do the following test: (i) take a subset of the object and rescale it to the same size of the original object, using the same magnification factor for both its width and height; and then (ii) compare the statistical properties of the rescaled object with the original object. In contrast, to properly compare a subset of a time series with the original data set, we need two magnification factors (along the horizontal and vertical axes), since these two axes represent different physical variables.” (1)

 

--------

     In reference to “the concept of a fractal structure in the analysis of complex temporal processes,” I suggest that we also investigate the gross analytical processes within a framework of Information Theory and multidimensional fractal integration, rather than focusing solely on specific micro applications within forensic comparison. Essentially, this means detecting and quantifying any self-similar scaling in the complex time series of analytical, documentation, and measurement processes, from the macro to the micro, with consideration for the expected phase transitions of information theory.  Accordingly, we should expect to improve our understanding of the introduction of noise and complexity into the system, as described for a communication system by Claude Shannon, as well as of the natural fractal aspects described by Benoit Mandelbrot.   A main difficulty is the lack of research on self-inverse and self-affine fractals.

 

     I also imagine that some common irrational numbers can represent fractal dimensions by proxy: just as an island coast is a rough fractal circle, a circle's circumference is described with an irrational number... pi.  Can we make the leap that irrationals can act like fractals, and thus describe situations with infinite aspects?  Do the infinities of fractals and irrational numbers meet the cognitive process and its infinities, much as they do in calculus?  The island beach measurements can take on infinities at increasingly smaller scales, yet we must also factor in the constant change that happens to the beach.  Our cognitive processes are likewise constantly changing with each bit of analysis.  Read on.

 

     This fractal understanding, combined within a context of Information Theory, will assist us in our quest to better understand and describe the cognitive comparison processes we employ in our scientific analysis, specifically the steps of comparison science’s ACE-V (the scientific method) within a larger, more comprehensive and complex framework.

------

Quantity of Information

     An information set or problem to be analyzed can represent a lot of data, yet how much of that potential information must we know or acquire to solve the problem? How can we solve the problem if we do not know all the information? Our biological (cognitive) neural networks can store incredible amounts of experience-based information and correlations of that information, yet they won’t contain, or be able to acquire, all the relevant information in a complex problem set. Unique relationships of points can represent a set of possibilities; in our case, consider level 2 characteristics.

 

      For example, 40 characteristics yield 780 unique level 2 relationships within such a set.

Add inter-related sets, non-specificity, and scaling effects, and the information’s potential inter-relatedness quickly reaches extremely large numbers. We should expect the possibility that there may be infinite ways to correctly solve many complex cognitive problems, depending on parameters, yet the probabilities of each solution will vary, according to Information Theory.  It's a new way of defining the scope of the forensic [cognitive] problem and the complex process used to effect solutions.  "...Because a model never comprises all aspects of a system, we are always forced to deal with an abstraction. As human beings, we have limited time and resources and, as a result of our subjective perspectives, limited access to the world.  When we model a system and make predictions, our pragmatic approach is to focus on one parameter or a few parameters only and to disregard [or minimize] others." (2)
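To make the growth rate concrete, here is a minimal Python sketch (illustrative only, not part of any examiner workflow) that computes the number of unique pairwise level 2 relationships, n(n-1)/2, for the characteristic counts used in these notes.

from math import comb  # Python 3.8+

def unique_relationships(n):
    # Unique pairwise (level 2) relationships among n characteristics: C(n, 2) = n*(n-1)/2
    return comb(n, 2)

for n in (7, 29, 40, 41):
    print(n, "characteristics ->", unique_relationships(n), "unique relationships")

# Prints 21, 406, 780, and 820, matching the counts cited in these notes.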

 

     When solving a nonlinear problem, such as a forensic comparison, I see several baseline factors to consider.

 

a. Not all relationships need to be analyzed. Only a fraction of the total information is sufficient for comprehension of the problem set.  Cognitive processes are known for streamlining information to achieve a task efficiently.  Superfluous information is ignored, demoted within our attention and/or remembered only temporarily.  

 

b. What is the minimum requirement to solve the problem? We can call this “minimum sufficiency,” as I do not know of an official term. This is initially accomplished using applied, probability-based intuition, as we can’t run hard numbers on a soft set of data with unknown variables.  We do understand that we never use all available information, as new information can always be created.  We use the information we need for the task, and we can recognize when we need more information for our knowledgebase or simply to understand.

 

c. We will never be aware of all possibilities for complex sets. There will always be a level of uncertainty.

 

d. All our judgments and actions must occur within a framework of uncertainty. We learn and problem solve within this shadow.

 

e. Different types of logic may be utilized to successfully solve a problem. 

 

     In a problem set with an estimated 100,000 points of reference, a subject may only be aware of 5,000. Of these 5,000, it may be possible to solve the problem to practical comprehension with only 2,000 points of reference and cross-reference. Perhaps a review of 598 relationships will get the job done for a more efficient and experienced problem solver. Each solver will have a different experience base and is expected to apply that experience differently. Thus, those particular references and cross-references are expected to differ in many aspects. There are many variables. Step away from pure numbers and you have interpretation issues, such as distortion, as variables. Distortion is not a single issue; many points of reference may each need a separate level of evaluation.

Is there a repetitive pattern of problem solving at the most fundamental level, the process being similar at various macro to micro levels, within degrees of complexity, eventually transitioning (as a phase transition) from a practical to a formal process? Is there a repeating pattern of “minimum sufficiency” within the possibilities of large complex data relationships?  Also see “Undiscovered Information” later in these notes.

 

-Does an examiner need to compare all available latent data to draw a hypothesis of individualization?

-Does an examiner need to look at all (or the same) data another examiner reviewed to draw a conclusion?

 

a. Each latent print is only a fraction of a whole, and a conclusion would most likely be based on a fraction of that information when considering the scaling effects. The larger the amount of information available, the smaller the percentage needed for comprehension of the solution. A latent with 7 level two characteristics has 21 unique relationships. One would expect a full detailed analysis. However, a latent print with 41 level two characteristics has 820 unique relationships. We don’t need to evaluate them all in great detail, just as we don’t need to see the remaining friction skin characteristics that never made the impression in the first place.

 

b. Is any additional evaluation (above and beyond the stated conclusion) simply a verification means to increase probability support for the established hypothesis or are we simply learning more about that impression?

 

c. How do we know when we have “minimum sufficiency” within a framework that always includes uncertainty?  In other words, how much information is sufficient and how does experience relate to this question? 

 

d. Does this cognitive pattern apply to other situations at different scales?

 

e. How fine a measurement do examiners need?  What level of detail do we operate at?  Does it vary?

 

f. Time constraints: Cost, value and efficiency are related to time and time must always be considered.  

 

     I see this pattern as being a ground-level truth of how we apply our logic and draw conclusions. We can’t necessarily use all the data, nor can we have it all. I see the application of ACE-V simply as a formal and detailed application of our common high-speed approach to recognition. Practical vs. formal is where we become acutely and scientifically aware of the need to minimize error. Error can be significantly reduced [error correction] with a formal application of logic and applied scientific methodology. However, the underlying uncertainty cannot be eliminated. I ask these questions to help us better understand what we do and how we do it. We can then improve instruction, with the potential of further increased accuracy.

 

     Being a motorsport enthusiast, I frequently think of the analogy of a race car driver vs. a new teenage driver making a single lap on a racetrack. Should the teen make it around the track at some speed without running off the track or crashing, he/she will have achieved “minimum sufficiency” in solving the problem set of completing a circuit of the track. The job is done. From there you can improve the result details with better numbers and improved accuracy. Essentially, it’s a ratio of effectiveness to failure. Eventually, you may go professional, stepping up to the level of “trained to competency.” Here you may even win time trials, at an increased risk of not completing a lap, which is failure. The best ratio may not be at the speeds needed to win races if your goal is accuracy and efficiency in lap completion. Your experience and application of skill are high quality, yet their proper application will depend on the problem to be solved. Formally, your skill may be consistently better than simply “sufficient.” Minimum sufficiency may be enough for passing the driver’s license test. It gets the job done, yet at a lower level of expertise combined with a higher potential for error. The insurance companies remind us of this each time we insure our teenagers.

 

     However, you did not need to be a racecar driver to circumnavigate the track with the vehicle. It was achieved with less than all the available information. There were unknown dynamics and physics at play in the car’s operation and its interface with the driver and track. There is also specific information that needs more attention than other information.  Expertise allows a driver to focus more on the information needed to do the task very well.  The feat was accomplished in the face of uncertainty, with the driver constantly learning ever more. Even at the level of the professional there was uncertainty. In addition, everyone who successfully circumnavigates the track will do so differently. They will utilize their unique experience-based perspective and take a different race line around the track, as with the concept of non-specificity. Some drivers will be more efficient than others, some distracted by superfluous information, some quicker. The quickest driver may post the best time yet may not offer the most efficient lap.  A crash is an error that fails to solve the problem.  Such a seemingly simple task is rather complex and noisy when analyzed.  The problem (a normal problem) is a continually changing information set; the information set is a lap of the track, which on a graph would be a fractal pattern with ever-changing conditions.  Even though a walk down a path also involves ever-changing conditions, most people can be considered experts at that task, whereas racecar driving is a specialization that requires a lot of non-general information compiled as experience, plus the working knowledge of which information is most important for improving prediction, reaction, and task efficiency.   The racecar driver also has to compete against other similar experts.  The spectator excitement is in these small, specialized details, known as the edge of control at speed.

 

     A partial list of dynamic noise considerations associated with this concept would include:

                  a. Experience as a driver, relevant motor skills

                  b. Experience with the dynamics of the car

                  c.  Fitness and general health

                  d. Comprehension of task and goal and general alertness

                  e. Mechanical aspects of task; Vehicle issues, limitations, and wear

                  f.  Environmental conditions; Variable weather, lighting, obstructions

                  g. Efficiencies; driver and machine

                                    Driver: ability to parse and leverage the most relevant information for the task while constantly learning

                                    Machine: efficiency dynamics in chassis, engine, drivetrain, and driver interface

 

     As with all problem-solving, there is a learning curve in the application of logic for that information set. However, this curve is simply degrees of competence and efficiency in solving the problem. If minimum sufficiency is met, the job can be considered complete. You can always walk the track for an even easier solution, yet there will still be a long list of noise considerations, even if we have semi-automated them in our daily life. For a long track, the applied automation of a vehicle may be an improvement on specific efficiencies, especially as it relates to time.  It all depends on the problem to be solved and the time frame in which that solution must fit.  Thus, there can be infinite solutions and potential failures for this problem set.  Yet the problem itself is finite... successfully lap the track.

 

     Perhaps we operate within this same framework of practical minimum sufficiency as a lower threshold for our performance, similar to a “best evidence rule,” using the evidence available. Further investigation may discover better “best evidence,” yet it would be a slightly different problem with this new information. We would prefer not to have to try something over and over (practice) until we find a solution. Simple problems should be easier than that. Most problems in life are indeed simple. So simple we can do them automatically, such as walking down the sidewalk. Falling off the curb is a crash; it is a failure to solve the problem of walking (on the sidewalk) to the next block. Chewing gum and biting your tongue. Failure.  Error correction is needed.

 

     What about making a hypothesis of individualization as with a fingerprint, face, or iris? Did you use all the information available? A latent fingerprint match with 29 minutiae has 406 level two spatial relationships, plus levels one and three. Did we really compare and analyze all that information? Even with a binary fingerprint image without level three, did we still compare everything? Or did we reach some particular level, well above “minimum sufficiency,” in our formal comparison process and scan the rest? Perhaps we were satisfied with a level that got the job done, yet with a good measure of professional confidence that greatly exceeds what would be considered routine, average, or simply minimally sufficient.

 

Latent Fingerprint Searching and Comparison

 

     I'm seeing two approaches to tackling the difficult latent search issue.  One is from our human-brain perspective, such as the extended feature set for latent mark-up within a formal forensic comparison process.  The other is a mathematical approach: finding ways to "encode" latent information mathematically so computers can improve latent searches prior to our verification stage.  This is where a fractal approach may prove more useful in application.  The reason for some of this energetic research effort is the relatively low lights-out accuracy of current algorithmic models, and fractals are rearing their head in more and more research results and efforts.  I don't see the examiner looking at or physically using fractals, but rather the underlying encoding algorithms being fractal-based or fractal-influenced.  This may provide us with an improved understanding of how fractals are incorporated into the natural investigative process as well as within a formal communication system.  We would still expect to see examiner markup for most verification stages and "yellow resolves," whereas an algorithm's statistical analysis requires human assistance for improved accuracy.

 

C. Coppock

 

1.  Fractal Objects and Self-Similar Processes, http://www.physionet.org/tutorials/fmnc/node3.html

2.  Susie Vrobel, Fractal Time: Why a Watched Kettle Never Boils; World Scientific, 2012; Studies of Nonlinear Phenomena in Life Sciences, Vol. 14, p. 72

 

Notes Part 2

 

     It can be debated that practical recognition is not a subset of formal individualization, and it can be argued that non-specificity (the effect of introduced noise) and minimum sufficiency (sufficient information) are not real issues. However, I see many holes in our current logic, and it would seem prudent to investigate these topics further, in great depth. Perhaps we have worked ourselves into a corner and need a new perspective; one I would suggest is Information Theory.  I have been trying to step back a bit to get a wider perspective, to see what we are really doing and how it integrates into a larger system. Should we always focus on a single tree, we may miss the fact that we are in a vast forest.  I suggest we can get a better understanding of our science by understanding it within our normal problem-solving processes, leveraging Information Theory and fractal integration.

 

     Here is a thought on the concept of information itself:  Life's ability to interact with the universe by harnessing and leveraging physics, based on a continuous stream of information, is like a fractal dimension within space-time itself.  Information, in all its data forms, is a semi-organized collection of action potential that can be stored in the brain as a unique signature of neuron impulses which, when referenced, is recalled imperfectly yet with a potential for action(s) on some relevant level.  Complex events and "interpretations" are unique.  Over time, additional information inputs and noise influence and modify memory, and thus the information itself.  Once the memory recall ability is permanently lost, the information too is lost within the ambient noise, as randomness and as an increase in entropy.  These non-differentiated nerve impulses are like a unique rock that has degenerated randomly to sand on an infinite beach, where outside influences negatively affected the memory, or the situation that would have facilitated the recall of the memory.  Additional versions of the information may be held by other collectors, yet these would ultimately be unique to those sets and subject to the same dynamics.  Information Theory and entropy keep showing up in the most fundamental analysis of the issue.

 

     Exactly how is an “organized” set of neural connections so different from a random arrangement of neural connections in the same neural net format?  One set represents a useful memory; the other represents nothing useful.  From a physics level, how can we categorically state their differences?

 

 

Tipping-Point of Information Destruction and Undiscovered Information

    Here is a simple gedankenexperiment, similar to the neural connections idea above: take small messages converted to binary, then eliminate some of the formatting.  How different are the resulting messages from a random message, i.e., one that is not compressible?  How would word frequencies be affected if the format change also included a "non-byte" based message?  What is the tipping point in the destruction of information?  How does this tipping point reveal structure in cognitive information storage and retrieval?  How do we address the reality that a truly randomly generated string can also overlap with a coded, albeit complex, string? Or do we say that the truly randomly generated string is not really random by definition only, because it can be compressed into some algorithm, and then forget about how it came to be?  Finally, what is the value of undiscovered information as compared to discovered and processed information?  Is undiscovered information a quasi-state of reality, where quantum mechanics reign?

 

A.     ASCII Message:  "This is a test of a randomness concept." with padding.

Bit Message of same:

01010100 01101000 01101001 01110011 00100000 01101001 01110011 00100000 01100001 00100000 01110100 01100101 01110011 01110100 00100000 01101111 01100110 00100000 01100001 00100000 01110010 01100001 01101110 01100100 01101111 01101101 01101110 01100101 01110011 01110011 00100000 01100011 01101111 01101110 01100011 01100101 01110000 01110100 00101110 

 

The format (of padding spaces) provides critical information in the message as we have a degree of certainty that the message is in byte form. 

 

B.     Message without spaces:  "Thisisatestofarandomnessconcept."

Bit message without padding spaces:

0101010001101000011010010111001100100000011010010111001100100000011000010010000001110100011001010111001101110100001000000110111101100110001000000110000100100000011100100110000111011100110010001101111011011010110111001100101011100110110011001000000110001101101111011011100110001101100101011100000111010000101110

 

C.     (String via a random bit generator) 

10101100000110110111010101001101001110111000110101000111110110011101011011010110100101101101111110000101110011000000101100100110111001111000011101001011110111100011110100101010100010110001010000100111000110011101101001110010000011001101111101010100111001000011001001111111001011011010101010110111000111101100010

 

D.     Bit conversion message of:  "Thisisatestofarandomnessconcept."

0101010001101000011010010111001101101001011100110110000101110100011001010111001101110100011011110110011001100001011100100110000101101110011001000110111101101101011011100110010101110011011100110110001101101111011011100110001101100101011100000111010000101110

 

E.      Bit conversion of word "random" = 011100100110000101101110011001000110111101101101 

             vs. Random Bit Generator = 101111111101110000110111001110100011100101100010

 

F.      Two random bit sequences of 10 bits with like quantities.

0110001001 and 0010110010.  Each has 4 “1” bits and 6 “0” bits.

 

  Without the contextual format the value of the information is reduced and entropy increases.  Cognitive processes seem to have a rough, informal formatting that includes complex associations such as the sensory inputs: visual, audible, olfactory, etc. Without a proper context, synaptic-based information storage would also seem to degenerate.  Chance has it that each line in F has the same number of bits (10) and the same number of 0 and 1 bits.  In several ways they are the same; if we don’t know what the code is, they are the equivalent of a random set.  We can assign a code, or we can learn or discover a code, but until then their value is limited, and equivalent to random, like a repetitive coin toss.
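As a rough, hedged illustration of this point, here is a minimal Python sketch that uses zlib compressibility as an informal proxy for structure; the message is the same test sentence used above, while the random string is freshly generated rather than being the exact string shown in C.

import zlib, random

msg = "This is a test of a randomness concept."
bits_padded = " ".join(f"{ord(c):08b}" for c in msg)   # byte format preserved by padding spaces
bits_plain = "".join(f"{ord(c):08b}" for c in msg)     # formatting (byte boundaries) removed
bits_random = "".join(random.choice("01") for _ in range(len(bits_plain)))  # random string of equal length

def compressed_size(s):
    # zlib-compressed length: a crude, informal proxy for how much structure the string carries
    return len(zlib.compress(s.encode()))

for label, s in (("padded message bits", bits_padded), ("unpadded message bits", bits_plain), ("random bits", bits_random)):
    print(label, "->", len(s), "chars,", compressed_size(s), "bytes compressed")

# On strings this short the zlib overhead can blur the comparison, but in general structured
# message bits compress better than a comparable random string, while simple 0/1 counts
# (as in example F) cannot tell the two apart.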

 

   Can a fractal basis for this fragile information-processing complexity be the key, since the nervous system is already described as a fractal, albeit with some quantum characteristics as well?  What if we understand only an average of the information, an estimate so to speak?  Would a fractal also be ideal for describing, in general, non-linear induction processes, to include the scientific method?  That is, an incomplete yet more valuable and perhaps leverageable understanding?  Enough to ask another question, then another…

 

   There is a relevant human element when it comes to the tipping point of information destruction, whereas to non-intelligent natural forces, information and the lack of information are equivalent.  The point is that a local measurement is a variably precise invention applied to a local set of data or to an average of a larger set.  Unlike us, nature is indifferent to the concept of information value and states so with probabilities rather than with measured facts suitable to satisfy our simple and localized queries.  An additional point of thought here is the energy put into the concept of information.  The energy to create the following strings is the same.

 

Bit conversion of word "random" = 011100100110000101101110011001000110111101101101

             vs. Random Bit Generator = 101111111101110000110111001110100011100101100010

 

Accordingly, we ask the question: what is the value of the information?  It must be a relative value, whereas a small string that embeds information can easily be comparable to a random string, even if we understand the probability of such, and a long data string we interpret as random noise could be undecoded information. Furthermore, a single bit or a few bits can be an icon, tag, or other representation for an infinite amount of other information.  If we do nothing with the information, its value, its potential, must be lowered; yet just knowing the information has value as experience, and this affects our knowledgebase from which we draw inferences.  We do know that erasure of a bit of information does have an effect on entropy, according to Landauer’s principle:  “…fact that the erasure of one bit of information always increases the thermodynamical entropy of the world by k ln 2.” [M. B. Plenio and V. Vitelli, The physics of forgetting: Landauer’s erasure principle and information theory, Optics Section, The Blackett Laboratory, Imperial College, London SW7 2BW, UK, p. 1. https://cds.cern.ch/record/491908/files/0103108.pdf]
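For a sense of scale, here is a minimal Python sketch of Landauer's bound, E = k·T·ln 2, the minimum heat dissipated when one bit is erased; room temperature is an assumed value chosen only for illustration.

import math

k_B = 1.380649e-23             # Boltzmann constant in J/K (exact SI value)
T = 300.0                      # assumed room temperature in kelvin
E_min = k_B * T * math.log(2)  # Landauer limit: minimum energy cost of erasing one bit

print(f"Minimum erasure energy at {T} K: {E_min:.3e} J per bit")
# Roughly 2.9e-21 J; the matching entropy increase of the surroundings is k_B * ln 2.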

 

Variable Infinities

     We have difficulty describing the infinities in our work.  I hear practitioners straining for logic and foundation to adequately describe the target "information set" and the "comparison process" used to analyze that information set.  How does it relate to uniqueness, randomness, complexity, uncertainty, power laws, and entropy in a cognitive comparison process in which infinite process variability (non-specificity) pertains, and in which each examiner applies their expertise on the foundation of their randomly acquired experience base?

 

     It occurred to me that if these are indeed fractals of "quantity/record of process," in part or in whole, then we could better understand and describe our applied logic in all regards, thus improving the forensic comparison science process.  I have not seen evidence that we have truly discovered the boundaries of the relevant fractals, nor that we understand their application boundaries.  The fractal sets would be set apart by emergent phase transitions, and the sets would evolve due to this emergence and further information processing toward a solution to a problem, or not.

 

Fractals Specifically

We normally think of fractals as complex geometric descriptions, yet there is more...  Here is a mental primer: a fractal is an object (or quantity) that displays self-similarity, in a somewhat technical sense, on all scales. The object need not exhibit exactly the same structure at all scales, but the same "type" of structures must appear on all scales. A plot of the quantity on a log-log graph versus scale then gives a straight line, whose slope is said to be the fractal dimension.  Ref: http://mathworld.wolfram.com/Fractal.html
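That log-log slope can be estimated numerically. The following Python sketch is a rough box-counting estimate under simple assumptions (a random-walk trace standing in for a coastline-like curve); it is illustrative, not a rigorous measurement.

import math, random

# Build a jagged, coastline-like point set as a 2-D random walk (illustrative stand-in only).
random.seed(1)
x = y = 0.0
points = []
for _ in range(20000):
    x += random.uniform(-1, 1)
    y += random.uniform(-1, 1)
    points.append((x, y))

def box_count(pts, eps):
    # Number of eps-sized grid boxes that contain at least one point.
    return len({(math.floor(px / eps), math.floor(py / eps)) for px, py in pts})

sizes = [64, 32, 16, 8, 4, 2]
counts = [box_count(points, e) for e in sizes]

# Slope of log N(eps) versus log(1/eps) approximates the box-counting (fractal) dimension.
xs = [math.log(1.0 / e) for e in sizes]
ys = [math.log(c) for c in counts]
n = len(xs)
slope = (n * sum(a * b for a, b in zip(xs, ys)) - sum(xs) * sum(ys)) / (n * sum(a * a for a in xs) - sum(xs) ** 2)
print("Estimated box-counting dimension:", round(slope, 2))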

-----------------------

     A mathematical fractal is based on an equation that undergoes iteration, a form of feedback based on recursion. “The term has a variety of meanings specific to a variety of disciplines ranging from linguistics to logic.” http://www.absoluteastronomy.com/topics/Recursion

(Note:  There is discussion on recursion elsewhere on CLPEX.com)

------------------------------------

     The most common application of recursion is in mathematics and computer science, in which it refers to a method of defining functions in which the function being defined is applied within its own definition.
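As a small, self-contained illustration of such a recursive definition (Python, illustrative), the Koch curve replaces each segment with four segments one third as long, so its length grows without bound while the curve stays in a bounded region.

import math

def koch_length(segment_length, depth):
    # Recursive definition: a segment of length L becomes 4 segments of length L/3 at each step.
    if depth == 0:
        return segment_length
    return 4 * koch_length(segment_length / 3, depth - 1)

for d in range(6):
    print("depth", d, "-> total length", round(koch_length(1.0, d), 4))

# The length scales as (4/3)**depth; the associated fractal (Hausdorff) dimension is log 4 / log 3.
print("Koch dimension:", round(math.log(4) / math.log(3), 4))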

     A fractal often has the following features:

•It has a fine structure at arbitrarily small scales.

•It is too irregular to be easily described in traditional Euclidean geometric language.

•It is self-similar (at least approximately or stochastically).

•It has a simple and recursive definition.

•It has a Hausdorff dimension which is greater than its topological dimension…

   Ref: http://www.crystalinks.com/fractals.html

    Fractals also seem to involve infinities, to include temporal features, and here we see an interesting cross-over with geometry and the cognitive process.  Fractals are used again to try to find a pattern in visible chaos. Using a process called "correlated percolation", very accurate representations of city growth can be achieved. The best successes of the fractal-city researchers have been Berlin and London, where a very exact mathematical relationship that included exponential equations was able to closely model the actual city growth. The end theory is that central planning has only a limited effect on cities - that people will continue to live where they want to, as if drawn there naturally, fractally.

  Ref: [Fractal Geometry   https://www.dreamessays.com/customessays/Science/11469.htm]

--------------------

     Fractals appear in the world both as objects and as (time records of processes). Practically every example observed involves what appears to be some element of randomness, perhaps due to the interactions of very many small parts of the process. 

   Ref: [Random Fractals and the Stock Market  http://classes.yale.edu/fractals/RandFrac/welcome.html]

---------------------

What does it mean to us?  How does it relate to uniqueness, randomness, complexity, uncertainty, and our semi-repetitive process of comparison, with its infinite process variability, as in non-specificity?

 

----------------

     The concept of a fractal structure, which lacks a characteristic length scale, can be extended to the analysis of complex temporal processes. The Apollonian gasket fractal is an interesting example that seems to feature specific aspects of circular iteration that could represent the scientific method as applied via Information Theory; perhaps we could modify an Apollonian gasket or similar fractal into a natural fractal that is incomplete and fragmented, so it would better represent the complexities of cognitive processes.  (See the additional blog entry "Scientific Method and Information Theory.")  However, "a challenge in detecting and quantifying self-similar scaling in complex time series is the following: Although time series are usually plotted on a 2-dimensional surface, a time series actually involves two different physical variables. For example, in Figure 1 (not shown here), the horizontal axis represents 'time,' while the vertical axis represents the value of the variable that changes over time (in this case, heart rate). These two axes have independent physical units, minutes and beats/minute, respectively. (Even in cases where the two axes of a time series have the same units, their intrinsic physical meaning is still different.) This situation is different from that of geometrical curves (such as coastlines and mountain ranges) embedded in a 2-dimensional plane, where both axes represent the same physical variable. To determine if a 2-dimensional curve is self-similar, we can do the following test: (i) take a subset of the object and rescale it to the same size of the original object, using the same magnification factor for both its width and height; and then (ii) compare the statistical properties of the rescaled object with the original object. In contrast, to properly compare a subset of a time series with the original data set, we need two magnification factors (along the horizontal and vertical axes), since these two axes represent different physical variables."  [Fractal Objects and Self-Similar Processes, http://www.physionet.org/tutorials/fmnc/node3.html]
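Here is a hedged Python sketch of the two-magnification-factor idea from the quote: for a self-affine series such as a simple random walk, stretching a subset by a factor a along the time axis only looks statistically like the whole after its values are also rescaled by roughly a**H (H is approximately 0.5 for this toy process; none of this is specific to forensic data).

import math, random, statistics

random.seed(2)
N = 100000
walk = []
total = 0.0
for _ in range(N):
    total += random.gauss(0, 1)   # a simple random walk as a stand-in self-affine time series
    walk.append(total)

H = 0.5                   # assumed self-affinity (Hurst-like) exponent for this toy process
a = 10                    # time-axis magnification factor
subset = walk[: N // a]   # a subset covering 1/a of the "time" axis

def spread(series):
    # Root-mean-square deviation from the mean: a crude measure of vertical fluctuation.
    m = statistics.fmean(series)
    return math.sqrt(statistics.fmean([(v - m) ** 2 for v in series]))

print("full series spread:             ", round(spread(walk), 1))
print("subset spread rescaled by a**H: ", round(spread(subset) * a ** H, 1))
# Stretching only the time axis leaves the subset's fluctuations too small; applying the second
# magnification factor, a**H, to the values brings them to roughly the same scale, up to noise.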

 

With fractals we must also evaluate the influence of infinity on our processes and on our true understanding of those processes.  The problem with infinity, and with our attempts to understand it, is that we often toss out reality to make our investigation fit the facts and our potentially limited perspectives.  An example is the Hilbert Hotel.  The simple fact that messages transmit finitely transcends the infinite actions of the hotel keeper.  Hence, the very first request for any adjustment to infinity would take an infinite time, and the solution never completes.  Thus, when we dial the 'reality of process' back into our infinite investigations, we see that such academic ventures are not hypothetical, they are imaginary.  How can we better understand what is important and which aspects we cannot affect?

We must realize that mathematics can describe a concept and a functional pathway but does not necessarily describe the reality of our dynamics over time.  An example is the dichotomy of Zeno’s fractions describing distance traveled over time.  We can describe distance, but time will get us, because we are not describing reality but rather a subset aspect which we wish to measure, knowing that our chosen measurement system may not be ideal for that application and may not be reasonably representative of the process.
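The Zeno point can be made concrete with a short Python check: the infinitely many sub-distances form a geometric series whose partial sums converge to a finite total, so an infinite subdivision does not by itself imply an infinite travel time.

# Partial sums of 1/2 + 1/4 + 1/8 + ... approach 1: infinitely many pieces, finite total.
total = 0.0
for n in range(1, 31):
    total += 0.5 ** n
    if n in (1, 2, 5, 10, 30):
        print(f"after {n} terms: {total:.10f}")
# The limit is exactly 1, which is why the described motion completes in finite time.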

Self-affine fractals and our process of investigation, moving forward in time and building our knowledgebase, do not have a sharply defined boundary, but rather an infinite structure, like trying to define a natural boundary such as a jagged coastline, as described by Benoit Mandelbrot. This infinite-boundary aspect manifests itself as a degree of investigative inquiry regarding aspects related to the whole.  That is, we have the options of ‘digging deeper’ to finer levels of information or backing out for a higher-level overall view as we ask questions based on our new understanding and discovery; or rather, our qualitative and quantitative inquiry features are performed in an iterative fashion.

 

----------------------------------

Some researchers do not think cognitive processes follow, can follow, or should follow a specific fractal pattern.  Here is one such dissent: “To assert a fractal logic (the mathematics found in nature's systems) is to place human cognition into a holistic view of the universe. And in doing so it attempts to erode the simulacra of dichotomous philosophical discourses.” [FRACTAL LOGIC The 'Scale' of the Fractal: human cognition, the environment, the universe; http://www.whatrain.com/fractallogic/page35.htm]

 

I have yet to see proof that excludes it, and I see fundamental gaps in the understanding of our cognitive processes; I surmise we should not automatically place our elite cognitive abilities on a special shelf above and beyond simple and common processes.  I think more research is warranted and deserved in our quest for a better understanding of “how we do things,” as we already see considerable latent psychology “controlling and patterning” our actions without our consent or knowledge.   I would consider such automation a clue, a clue that our cognitive power is not that special after all.   For more details on this topic see the article: Information Theory’s Foundation to the Scientific Method, also on Academia.edu and ResearchGate.com.

 

If fractals follow power laws, what if ideas (analytical processes) were measured from important but rare transitional "eureka moments" on down to more common ideas, major relationships, associations, and minor relationship analyses?  Eureka moments can be said to require much higher effort and a larger supporting knowledgebase.  Often the efforts that provide such a eureka breakthrough take considerable effort and time, leveraging both the conscious and the subconscious.
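If idea impact really did follow a power law, rare eureka moments and routine associations would line up on a single straight line in a log-log rank plot. The Python sketch below is purely illustrative, with an assumed Pareto exponent; it is not a claim about measured cognition.

import random

random.seed(3)
alpha = 1.5   # assumed power-law (Pareto) exponent, chosen only for illustration
# Draw 10,000 "idea impact" values: very many small ones, a handful of enormous outliers.
impacts = sorted((random.paretovariate(alpha) for _ in range(10000)), reverse=True)

for rank in (1, 10, 100, 1000, 10000):
    print(f"rank {rank:>5}: impact {impacts[rank - 1]:10.2f}")
# On a log-log plot of impact versus rank these points fall near a straight line,
# the signature of a power law: rare, extreme "eureka" values dominate the tail.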

 

The fact that such a complex process as melding the scientific method with fractals, infinity, and information theory will always prove incomplete should not be grounds for missing the opportunity to improve error mitigation in scientific endeavors.

 

Craig A. Coppock

September 2011   

Updated: 20231204

 

Draft Published at: https://fingerprintindividualization.blogspot.com and ResearchGate.com
