“Soft” Sciences & “Hard” Problems

“The hard sciences appear hard precisely because they tackle soft problems. The soft sciences face the harder problems.”
Heinz von Foerster [note 1]

The synergy between hypertext tools for organizing large, heterogeneous databases and functioning models as explanations of processes may permit us to address a class of problems that remains largely ignored and undervalued in the study of human learning. If these “power tools for the mind” permit us to better manage and model complexity, they may bring within our grasp a series of problems long considered beyond the reach of well-articulated understanding. One such cluster of problems focuses on questions of the cognitive development of the individual in particular circumstances.

Contrasting Easy and Hard Problems

Research in education is savagely caricatured by Feynman as “cargo-cult science” (1985). The justice in his criticism derives from our inability to achieve three goals: we need to be able to specify

  1. what is in the mind of the student,
  2. how that knowledge is changed by our instruction, and
  3. why some changes endure long after instruction has ceased and others do not.

Such questions are very hard to answer if one asks them about a specific person. How, precisely, did s/he acquire that particular notion and integrate it with what else s/he knew at the time? Was the outcome stable over time? If so, why? If not, why not?

Why Reading Core Dumps was Easy

When a computer system failed twenty-five years ago, our primary tool of diagnosis was a memory print, a dump of the contents of memory and registers at the time the failure became manifest. We were able to pinpoint the failure and understand the causes behind it because we had knowledge and information we could count on. In the best cases, we had:

  1. a manifest problem situation, fairly well defined
  2. complete knowledge of low-level mechanism functions (machine architecture and order code)
  3. complete knowledge of intermediate-level functions (compiler processes and output organization)
  4. well-defined high-level functions (application purposes and coding)

Reflecting on what made such efforts “easy”, we should focus on three themes: context, the use of detail, and multi-thread analysis. Note first, about context, that the problem manifestation itself defined the specific situation requiring interpretation. Second, the specific details trapped in the core dump both illuminated the system’s functioning and guided the process of analysis and interpretation through a sea of information. Finally, multi-thread analysis (on the machine, compiler-function, and application levels) was both a possibility and a common occurrence, because even in the best of cases the core dump would be imperfect (for example, data in a record which caused a specific branch might be overlaid before the system failure became manifest in some routine used later). What was true in reading core dumps is also true of machine learning programs: we can in principle, and often in fact, given the required effort, understand completely and precisely how problems are solved and how learning occurs.

Why Studying Human Thinking and Learning is Hard

The elements that make human case studies different from, and harder than, reading core dumps include the following.

Context: First, the “problem” or situation to be explained needs to be defined. We cannot easily and selectively capture information when a significant event occurs. (We can try very hard to do so. Exploiting the subject’s participation via introspection is one way we try to approach this effort.)

Use of detail: The context is a psychological situation more than a bounded domain. Because of the wide range of human perception and memory, one can’t be certain that all critical details have been captured or could ever be captured even in principle. The common attempt to reach general conclusions diffuses focus even further from the details which guide complex analyses.

Multi-thread analysis: One knows neither the human “application” nor
the “machine” very well; it is not even certain how many different and
significant levels of function are involved in any incident of problem solving
or learning. Multi-thread analysis is harder because the context is
surreal.

If these difficulties of case analysis can be seen less as reasons for
despair and more as guideposts for methodological development, we can derive
the following objectives and implications from them.

Context: If we strive to understand human learning, we should have as a defining objective the attempt to isolate circumstances of learning and relate them to ascribed cognitive structures and their changes. The implication for analysis is that we should seek situations with a clear saltation in performance in a well-defined interval, then examine minutely all potentially relevant, available data within that interval.

Use of detail: We all live in hallucinations of our own construction; including the subject as a participant researcher and using the subject’s introspection will provide detail about the processes of thought and their interrelations obtainable no other way. However non-objective such material may be, dealing with it and evaluating it is preferable to ignoring available information about an important dimension of human thought and learning.

Multi-thread analysis: We should try to make maximal use of all the knowledge available about the subject of study; working with children you know well may be one of the best beginnings. Exploiting a rich variety of information should be preferred to establishing simple correspondences found by ignoring the complexity of the subject. Trying to capture as much information as might plausibly be expected to be relevant, in the light of theoretical interests and guiding principles, is the method of choice. With extensive corpora created in such a fashion, using the best available tools to manage the mass of information is essential.

An Example of Such a Case Study Corpus

Since significant learning emerges from processes which are extended in time, its understanding depends upon a multitude of interactions between what is in the individual’s mind and the accidents of everyday experience. This stance has led me to study and record the cognitive development of one of my daughters from the time she was 18 weeks old through the sixth year of her life. The targeted theme of this study is the interrelationship, if any, between the development of language skills and knowledge and the development of spatial knowledge. Technology has enhanced the dependability of case study corpora because videotape captures enough of the context to permit later, detailed interpretation. Every week we have videotaped experiments and our play together; we supplemented those mechanical records with extensive naturalistic observation. The total number of tapes comprising the corpus is 240 (each contains, typically, three experimental sessions). For the first three years, the experiments divide into sets with two different foci. The first is a continuing series about Peggy’s developing object knowledge; this material relates to the literature of the Piagetian paradigm and is intended as a calibrating spine of the study. The second set of experiments is more a miscellany, each one drawing its inspiration from whatever my wife or I noticed as most pregnant in the child’s behavior. Some incidents of the naturalistic observations are striking in themselves, such as the child’s climbing up to a tea table (when she had not yet walked) and pushing it across the floor, walking behind it. Other observations were driven by quasi-regular reflection, and they tend to focus on my theoretical concerns, such as the interplay of language production and other dimensions of development.

Using Hypertext to Cope with an Extensive Case Study Corpus

The information captured in so rich a medium as videotape is beyond all hope of transcribing completely in any serial symbolic form, such as text-based protocols. Any theory which initially selects the material to be transcribed must be a preliminary, imperfect theory, and its selection criteria will screen out possibly critical information. We can begin, however, with partial transcriptions and use the file-updating capability of computer-based storage to extend the transcribed corpus at need. Call this strategy “variable depth transcription.” The researcher records what he imagines as relevant, with such pointers to source material as to make its deepening at need a matter of course. As his analysis leads to improved theory, that theory will suggest the need for deeper analysis of parts of the corpus and their more extended transcription. The extended database will then suggest enhancements of the theory. A positive feedback loop is possible. Hypertext facilities now existing and under development permit such an approach. They need to be applied to two problems: recording important details and their interconnections in on-line databases, and developing functioning models of cognitive structures and their changes, based on the empirical material of the corpus. These are the objectives of the CASE project.
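
To make the strategy concrete, the sketch below (in Python) shows one way a variable depth transcription record might be represented. It is a minimal illustration under stated assumptions, not part of the CASE implementation; the class name, fields, tape identifier, and timecodes are all hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class TranscriptSegment:
        """A unit of variable depth transcription: partial text plus a
        pointer back to the source footage, so it can be deepened at need."""
        tape_id: str        # which videotape in the corpus (hypothetical ID)
        start: str          # timecode where the segment begins, e.g. "00:12:30"
        end: str            # timecode where the segment ends
        depth: int = 1      # 1 = summary gloss; higher = fuller transcription
        text: str = ""      # the transcription made so far
        passes: list = field(default_factory=list)  # earlier, shallower passes

        def deepen(self, fuller_text: str) -> None:
            """Replace the current transcription with a more detailed one,
            keeping the shallower pass as part of the record."""
            self.passes.append((self.depth, self.text))
            self.depth += 1
            self.text = fuller_text

    # A first pass records only what seems relevant at the time...
    segment = TranscriptSegment("P117", "00:12:30", "00:14:05",
                                text="Climbs to tea table; pushes it across the floor.")
    # ...and improved theory later motivates a return to the source tape.
    segment.deepen("Grips table edge with both hands, leans forward, steps; "
                   "table slides; she pauses, laughs, and repeats the push.")

The essential point is only that every transcription, however shallow, carries pointers into its source medium; deepening then becomes a matter of course rather than a fresh search.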

Progress to date with the CASE (Case Analysis Support Environment) project has been extensive but limited in kind. The effort has focused on establishing the overall structure into which the case material will be fitted over time. Significant segments of the corpus of naturalistic observations have been entered into the on-line database. A beginning has been made in the analysis of videotape materials, but only at the top level of observation. The current phase is best described as corpus administration. It is becoming clear that the effort will go forward in three waves which, although they will overlap, will follow a natural sequence: corpus administration, corpus exploration, theory construction. The primary feedback loop ultimately will range between theory construction and corpus exploration, but before that can begin there must be a critical mass of material under review and at least partially on-line. Achieving that critical mass is the heart of the current effort.

The Psychology of the Particular

Many social scientists stand in awe of general theories. They typically seek an abstract correspondence which will permit predictions covering many of the specific events that interest them. For me, the primary value of a general theory is more down to earth, more like what an engineer needs; it is the aid a theory offers in understanding and solving particular problems, such as what enabled a specific person to learn some particular knowledge in a given context. Why are case studies, focused on a single person, worth paying attention to? I believe these methods and objectives will help us approach a new way of doing psychology.

Kurt Lewin argued (1935) that psychology is now an Aristotelian science
and will become a modern or Galilean science only when researchers
shift their focus from finding cross classificatory correspondences to
developing explicit explanations for series of events in concrete cases. In
short, human psychology will become a science only when it begins solving
problems in concrete cases, as one does in reading computer memory dumps or
exploring machine learning. Lewin’s specific proposals failed to engender such
a transformation (see chapter 2 in Langer, 1967), yet there remains the sense
that his attempt was profoundly right — to move studies of mind from seeking
correspondences to solving important problems in very specific and concrete
cases.

The New Opportunity

If we can construct what Lewin refers to as “the pure case” (a corpus with a sufficiency of information to address adequately all questions on which it might bear) and extend the modeling successes of function-oriented psychology, this should affect both theory formation and how one teaches psychology. The CASE project is one experiment in this spirit. We are trying to:

    – capture a detailed body of information
    – convert that corpus to an on-line database via variable depth transcription
    – link related events and model development within the corpus (a sketch of such linking follows this list)
    – offer that linked database, with access to the corpus materials, for scrutiny and further development by colleagues, in order to enhance
      * development of alternative theories
      * application of our own theories to other cases/corpora.
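
A minimal sketch of the linking objective appears below, again in Python. It illustrates the idea of typed hypertext links over corpus events; it is not the CASE software, and the segment keys and link types shown are hypothetical.

    from collections import defaultdict
    from typing import Optional

    class CorpusLinks:
        """A minimal hypertext layer over the corpus: typed, bidirectional
        links between transcribed events."""

        def __init__(self):
            # segment key -> list of (link_type, other segment key)
            self._links = defaultdict(list)

        def link(self, a: str, b: str, link_type: str) -> None:
            """Record that two events are related, e.g. an early behavior
            and a later development it anticipates."""
            self._links[a].append((link_type, b))
            self._links[b].append((link_type, a))

        def related(self, key: str, link_type: Optional[str] = None) -> list:
            """Follow links out of one event, optionally filtered by type."""
            return [other for t, other in self._links[key]
                    if link_type is None or t == link_type]

    # Hypothetical keys: two tape segments and one naturalistic-observation note.
    links = CorpusLinks()
    links.link("P117:00:12:30", "P122:00:03:10", "precursor")
    links.link("P117:00:12:30", "note-1980-06-14", "observation")
    print(links.related("P117:00:12:30"))   # lists both linked events

Once events carry stable keys into the corpus, typed links make the web of related incidents navigable; that navigability is what the feedback loop between corpus exploration and theory construction requires.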

This method will also enhance the acceptability of the case study method by discriminating between the idiographic focus of the content of case studies and idiosyncratic interpretations of such studies. Such facilities will provide an experimental workbench where students may undertake, as it were, an apprenticeship in case study analysis under the tutelage of the case database developer.

Some may want to argue that such efforts are not scientific in the sense of permitting replicable experiments in other circumstances, but the effort is scientific in Peirce’s broader sense: an attempt to approach some imperfectly understood but well-defined reality through seeking the convergence of opinion based on serious and extended inquiry. This is enough for me.

There is no magic in either cognitive modeling or the use of on-line tools for
managing data, but their synergy will permit us to address and solve some
long-standing, important problems in cognitive psychology. It is the
problems which give the tools their importance. It is the new tools which give
us some hope of coping with the problems by sharing our information, analyses,
and ideas.

References:
R. Feynman. Comments in Surely You’re Joking, Mr. Feynman! 1985.
K. Lewin. Aristotelian and Galilean Modes of Thought in Contemporary Psychology. In A Dynamic Theory of Personality: Selected Papers of Kurt Lewin. McGraw-Hill, 1935.
S. Langer. Idols of the Laboratory. Chapter 2 in Mind: An Essay on Human Feeling, Vol. 1. Johns Hopkins Press, 1967.
R. W. Lawler. CASE: a Case Analysis Support Environment. Concept paper for a proposal funded through a National Research Council associateship. An early, unpublished version of this text was included among position papers for the 1987 Hypertext workshop at Chapel Hill, N.C. The final version of the paper appears in Hypertext: The State of the Art (McAleese and Green, eds.). Intellect Press, Ltd., 1989.
C. S. Peirce. The Fixation of Belief. In Chance, Love, and Logic (M. R. Cohen, ed.). Harcourt, Brace, and Co., 1923.
C. S. Peirce. Lessons from the History of Science. In Essays in the Philosophy of Science (V. Tomas, ed.). Liberal Arts Press, 1957.
Publication notes:

  • Written in 1988. Unpublished.

Text notes:
[1]. According to Warren McCulloch, von Foerster was founder of the first Artificial Intelligence Laboratory (at the University of Illinois, in Urbana).
