Co-adaptation and
The Development of Cognitive Structures


Things developed for one purpose often can be used for something else. Systems, individuals, and even their component parts that evolved under one set of environmental pressures may function well, and with significantly different impact, in changed circumstances. The general name for this circumstance is the coadaptation of structures [2]. The idea of coadaptation is extremely useful in explaining saltations in performance. I aim to produce a more articulate description of the role of coadaptation in the development of structures for thinking.

Within a function-oriented structuralist view of human learning, a central challenge is explaining the transition from naivete to mastery. This is likewise a major issue for machine learning. We report here progress on that theme with programming experiments taking guidance from a human case study [3]. The domain is tictactoe (or noughts and crosses). The human case serves as the developmental prototype; it answers the question “why this way?” The machine case serves as an experimental laboratory for asking “how hard or simple might the development be?” The overall strategy has been to start with quite limited programs, reflecting specific important characteristics of immature thought, and have them get smarter by escaping from their original limitations. The performance objectives are to develop programs that will achieve primitive forms of abstraction, create internal reflections of external objects and processes, and learn without instruction.

From Anterior Structures to Mature Performance
Piaget’s “conservation” experiments are strong evidence that knowledge in the naive mind leads to reasoning surprisingly different from that in expert minds [4]. Such studies lead us to focus on the questions of what the precursors of mature performances are and what processes lead to them. I have argued that in the human case mature skills can arise from small but significant changes in the organization of pre-existing, fragmentary bodies of common-sense knowledge [5] which represent the things of everyday experience and operations on them. If only one could specify the character and function of antecedent structures, one could explain large-scale behavior changes as saltations emergent from minimal internal organizational changes.

The Neophyte: particularity and egocentricity
Children’s early cognition is usually described as “concrete”, a term which has two significant dimensions of meaning. The broader meaning is that the child’s knowledge is based upon personal experience. It is in this sense that concrete knowledge is very particular, that is, depending on the specific details of the learner’s interaction with people and things. Lawler’s subject was observed beginning to play tictactoe strategically by imitating a three-move plan for establishing a fork which another child had performed. The characteristics of her knowledge at that time were particularity and egocentricity. Particularity: when her sole plan was blocked, she was unable to develop any alternative [6]. Egocentricity: she did not attend to the moves of her opponent unless they directly interfered with her single plan. She was committed to her own objectives and unconcerned to the point of indifference about the plans of her antagonist [7]. In the setting of a competitive game, this was bound to change. But how, if a mind constructs itself from such beginnings, is it possible to escape the particularity and egocentricity characteristic of early experiences? The journey from neophyte to master is a long one. One hope of the human study was tracing the path of such development. One objective of the machine study is constructing such a path.

Representation of Knowledge
The representation used to model Lawler’s subject’s naive knowledge, presented in detail in “Learning Concrete Strategies through Interaction” by Lawler and Selfridge, 1985, has the parts necessary for adaptive functioning. Learning what to do is essential: GOALS are explicitly represented. Knowing how to achieve a goal is essential: ACTION PLANS are explicitly represented. Knowing when a planned action will work and when it won’t is essential: CONSTRAINTS limiting application of actions are represented explicitly. The structure composed of this triad, a GAC (Goal, Action, Constraints), is our representation of a strategy for achieving a fork in tictactoe. Goals are represented as a three-element set of the learner’s marks which take part in a fork; this is the first element of a strategy. Plans of three-step length, which add the order of achieving goal steps, are represented as lists. Constraints on plans are two-element sublists, the first element being the step of the plan to which the constraint attaches and the second being the set of cell numbers of the opponent’s moves which defeated the plan in a previous game. In our simulations, REO (a relatively expert opponent) can win, block, and apply various rules of cell choice — though it is ignorant of any strategies of the sort IT is learning. Within the execution of our simulation, the structure of GAC 1 below will lead to the three games shown, depending on the opponent’s moves (letters are for IT’s moves, numbers for REO’s):

   GAC 1           GOAL           ACTION        CONSTRAINT
                  {1 3 9}        [1 9 3]       <[3 {2 5 8}]>

 win by plan    plan defeated   constrained     cell numbers
                                   draw
 A | 3 | C       A |   | C       A | C | 3       1 | 2 | 3 
   | 1 | D       2 | 1 | 3       4 | 1 | E       4 | 5 | 6 
 2 |   | B         |   | B       D | 2 | B       7 | 8 | 9 
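
To make the triad concrete, here is a minimal sketch of a GAC as a data structure. It is written in Python for compactness (the original programs were Lisp), and the field names are illustrative assumptions rather than the original identifiers:

    # A sketch of the GAC (Goal, Action, Constraints) triad; names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class GAC:
        goal: frozenset                  # the three cells of the intended fork
        action: list                     # the ordered plan for achieving them
        constraints: list = field(default_factory=list)
        # each constraint pairs a plan step with the set of opponent cells
        # which defeated the plan at that step in a previous game

    # GAC 1 from the table above:
    gac1 = GAC(goal=frozenset({1, 3, 9}),
               action=[1, 9, 3],
               constraints=[(3, frozenset({2, 5, 8}))])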



The representations and learning mechanisms are committed to cell-specificity; they are also self-centered, focussing on the learner’s own plans and knowledge (as they must since, by principle, IT begins not knowing what the opponent will do; IT does not have the ability to model or predict an opponent’s moves in any abstract way) [8]. The result of learning simulations is a descent network which specifies all the goals and plans learned as modifications of the generating precursors of each. The structure of IT and how it fits in its virtual universe are sketched below in Figure 1.


Figure 1

NOTES: Solid lines represent invocation; dashes show control return. GEN represents the possibility of various experiences, and is thus part of the world rather than an experimental tool. REO is a “reasonably expert opponent.” THINGS are external tokens perceptible by both REO and IT; NOTIONS are things of the internal world available to IT for both playing and learning.

Escaping from Particularity
If we ask where symmetry comes from in a world of highly particular descriptions, the answer MUST involve abstraction, but which of the possible kinds? Abstraction by feature-based classification is the most commonly recognized form, but there are others. Piaget emphasizes a kind of abstraction focussed more on what one does than on what one attributes to external things as a quality. This reflexive abstraction is a functional analysis of the genesis of some knowledge [9], as presented in Bourbaki’s description of the generality of axiomatic systems:

“A mathematician who tries to carry out a proof thinks of a well-defined mathematical object, which he is studying just at this moment. If he now believes that he has found a proof, he notices then, as he carefully examines all the sequences of inference, that only very few of the special properties in the object at issue have really played any significant role in the proof. It is consequently possible to carry out the same proof also for other objects possessing only those properties which had to be used. Here lies the simple idea of the axiomatic method: instead of explaining which objects should be examined, one has to specify only the properties of the objects which are to be used. These properties are placed as axioms at the start. It is no longer necessary to explain what the objects that should be studied really are….”
N. Bourbaki, in Fang, p. 69.

Robust data argue that well-articulated, reflexive forms of thought are less accessible to children than to adults. The possibility that mature, reflexive abstraction is unavailable to naive minds raises this theoretical question: what process of functional abstraction precedes such fully articulated reflexive abstraction, and could such a precursor be the kernel from which the mature form of functional abstraction may grow?

The Multi-modal Mind
Let us discriminate among the major components of the sensori-motor system and their cognitive descendents, even while assuming the preeminence of that system as the basis of mind. Imagine the entire sensori-motor system of the body as made up of a few large, related, but distinct sub-systems, each characterized by the special states and motions of the major body parts, thus:

 Body Parts     S-M Subsystem     Major Operations
 Trunk          Somatic           Being here
 Legs           Locomotive        Moving from here to there
 Head-eyes      Capital/visual    Looking at that there
 Arms-hands     Manipulative      Changing that there
 Tongue/ears    Linguistic        Saying/hearing whatever


We will assume that the representations of mind remain profoundly affected by the modality of the interactions with experience through which they were developed. One implication is that the representations built through experience will involve different objects and relations, among themselves and with externals of the world, depending upon the particular mode of experience. Even if the atomic units of description (e.g. condition-action rules) are shared between modes, the entities which are the salient objects of concern and action are different, and stand in relation to each other only through learned correspondences. This general description of mind contrasts with the more uniformitarian visions which dominate psychology today. These major modal groupings of information structures are imagined to be populated with clusters of related cognitive structures, called “microviews”, with two distinct characters. Some are “task-based” and developed through prior experiences with the external world; others, with a primary character of controlling elements, develop from the relationships and interactions of these disparate, internal microviews. The issue of cognitive development is cast into a framework of developing control structure within a system of originally competing microviews [10].

Redescriptive Abstraction
I propose that the multi-modal structure of the human mind permits development of a significant precursor to reflexive abstraction. The interaction of different modes of the mind in processes of explaining unanticipated outcomes of behavior can alter the operational interpretation and solution of a problem. Eventually, a change of balance can effectively substitute an alternative representation for the original; this could occur if the alternative representation is the more effective in formulating and coping with the encountered problem. In terms of the domain of our explorations and our representations, there is no escape from the particularity of the GAC representation unless some other description is engaged. A description of the same circumstance, rooted in a different mode of experience, would surely have both enough commonality and difference to provide an alternative, applicable description. I identify the GAC absolute grid as one capturing important characteristics of the visual mode [11]; other descriptions based on the somatic or locomotive subsystems of mind could provide alternative descriptions which would by their very nature permit escape from the particularity of the former.

Why should explanation be involved? Peirce argues that “doubt is the motor of thought” and that mental activity ceases when no unanswered questions remain [12]. Circumstances requiring explanation typically involve surprises; the immediate implication is that the result was neither intuitively obvious nor were there adequate processes of inference available beforehand to predict the outcome (at least none such were invoked).

We propose that a different set of functional descriptions, in another modal system, can provide explanation for a set of structures controlling ongoing activity. The initial purpose served by alternative representations is explanation. Symmetry, however, is a salient characteristic of body centered descriptions; this is the basis of their explanatory power when applied where other descriptions are inadequate. Going beyond explanation, when such an alternative description is applied to circumvent frustrations encountered in play, one will have the alternate structure applied with an emergent purpose. Through such a sequence of events, the interaction of multiple representations permits a concrete form of abstraction to develop, an abstraction emergent from the application of alternative descriptions. In the following scenario, I will trace the interaction of different modes of mind as an example of how this early form of functional abstraction, a possible precursor to any consciously articulated reflexive abstraction because it involves “external interpretation” more than reflexive analysis, permits breaking out of the original description’s concreteness with its limitations of particularity. To do so, I need to establish the basic kinds of alternative descriptions to be involved.

Alternative Descriptions in Tictactoe
I begin with the assumptions that the GAC formulation is primarily visual in character and that one should seek familiar schemes for representing things, relations, and actions that are from a different mode of experience. Descriptions based on activity lead to the somatic and locomotive body-part systems as the two obvious, primary candidates. I offer two suggestions for concretizing this search: let’s consider first an “imaginary body-projection” onto the tictactoe grid as the somatic candidate description; and second, an “imaginary walk” through the tictactoe grid as the locomotive candidate description [13]. How would this work in practice?

Somatic Symmetries
Let’s consider two essentially different types of symmetry for the tictactoe grid. Flipping symmetry will name the relation between a pair of forks (or more complex structures) when they are congruent after the grid is rotated around some axis lying in the plane of the grid. Examples of symmetrical forks might be {139} and {179} [14]. An example of an explanation for this fork symmetry based upon an alternative, somatic description would be the following:

If I sat in the center of the grid and lay down with my head in cell 1 and my feet in cell 9, then cell 3 would be at my left hand. The forks {139} and {179} are the same in the way that my right and left hands are the same, for cell 7 would be at my right hand.

Such an explanation focusses on symmetry with respect to the body axis. A similar argument can be made for plan symmetry in the common fork {137} achieved by two different plans [1 3 7] and [1 7 3].

If I sit in the center of the grid and lie down with my head in cell 1, the cell 3 is at my left hand and cell 7 at my right. If the plan is to move first at the head, next at the left hand, then at the right [1 3 7] then the other plan is the same to the same extent that it doesn’t matter if I lie there with my face up or my face down.

It is harder to argue that such flipping forms of description are as natural for symmetries such as those of forks {139} and {137} because the axis of symmetry lies where no ego-owned markers are placed (along the cells {258}) and because other body parts have to be invoked as placeholders, as in the following:

If I sat in the center of the grid, with my head going up between cells 1 and 3, my shoulders would be there at 1 and 3 and the other parts of the forks would be the same as are my right and left hands.

As this elaboration departs from the explanatory simplicity of the former, one should consider contrasting another model, and thus turn to explanations based on walking around.
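
Before turning to that model, a hedged sketch of how such flipping symmetries can be checked mechanically: each flip is a permutation of the nine cells (numbered as in the rightmost grid of the GAC 1 table), and two forks are flip-symmetric when one maps onto the other under some flip. The table and function names below are illustrative assumptions:

    # Two flips of the grid, written as cell permutations (cells 1-9,
    # numbered left to right, top to bottom).
    FLIP_BODY_AXIS = {1: 1, 2: 4, 3: 7, 4: 2, 5: 5, 6: 8, 7: 3, 8: 6, 9: 9}   # axis 1-5-9
    FLIP_EMPTY_AXIS = {1: 3, 2: 2, 3: 1, 4: 6, 5: 5, 6: 4, 7: 9, 8: 8, 9: 7}  # axis 2-5-8

    def flip_fork(fork, flip):
        return frozenset(flip[c] for c in fork)

    # Lying along cells 1-5-9 relates the forks {139} and {179}:
    assert flip_fork({1, 3, 9}, FLIP_BODY_AXIS) == frozenset({1, 7, 9})
    # Face up versus face down: the same flip carries plan [1 3 7] into [1 7 3]:
    assert [FLIP_BODY_AXIS[c] for c in [1, 3, 7]] == [1, 7, 3]
    # The flip whose axis lies along the empty cells {258} relates {139} and {137}:
    assert flip_fork({1, 3, 9}, FLIP_EMPTY_AXIS) == frozenset({1, 3, 7})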

Locomotive Symmetry
In contrast with the last explanation which placed a body axis along a line of empty cells, the locomotive symmetries involve moving from one ego-occupied cell to another. Consider now the type of locomotive description that could be used to explain the equivalence of these same forks {139} and {137} [15].

Suppose I start at cell 1, walk to cell 9, then turn and walk to cell 3. Facing center in place leaves me with occupied cells at my right and left hands. For the fork {137}, if I stood at cell 1, I would also have other occupied cells at my right and left hand. The forks are the same if nothing is changed by my jumping from one corner to the next and swinging around to the center.

This Jump-and-Swing model of symmetry does more than explain a surprising win; the outcome is creative, as can be seen in the following scenario where it enables breaking out of the particularity of the GAC representation.
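
A similar sketch for the jump-and-swing symmetry treats it as a quarter-turn of the grid; applied to a whole plan rather than just a goal set, it yields exactly the derived plan exploited in the scenario that follows. The names are again illustrative:

    # The jump-and-swing symmetry sketched as a quarter-turn of the grid
    # (90 degrees counterclockwise), written as a cell permutation.
    ROTATE = {1: 7, 2: 4, 3: 1, 4: 8, 5: 5, 6: 2, 7: 9, 8: 6, 9: 3}

    def rotate_plan(plan):
        return [ROTATE[c] for c in plan]

    # The rotation relates the goals {139} and {137} ...
    assert frozenset(rotate_plan([1, 3, 9])) == frozenset({1, 3, 7})
    # ... and carries the known plan [1 9 3] into the plan [7 3 1] of Scenario 1:
    assert rotate_plan([1, 9, 3]) == [7, 3, 1]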

SCENARIO 1: From one corner to another:
After describing different types of symmetries, and justifying their activation to explain surprising serendipitous victories in play, we now ask whether they can have more than explanatory value. The conclusion is that the “flipping symmetries” do not generate novelties through interactions in this model even though they are natural explanations of surprises. The rotational or jump-and-swing symmetries can do so, however, through the kind of tortuous but feasible path presented in the following scenario.

Generating a Second Descent Network
Let’s suppose that IT plays with minimal look ahead. Remember also that IT knows nothing of opening advantage. IT has played successfully to victories even when the second and third steps of its known plans were foiled, but never so when the first step was blocked. Suppose now that REO begins a game with a move to cell 1. All of the existing plans in IT’s repertoire are useless. But IT knows that the GOAL {137} is the same as {139} by rotational symmetry; therefore it can try to generate the alternative plan for that symmetrical goal. The attempt to create and use the plan, based on “jumping” from the pivot of cell 3 to a new pivot at cell 1, will fail on a later move, but IT doesn’t know that [16].

That game establishes the plan [7 3 1] in IT’s repertoire. When IT once again has the first move, should it choose to begin a game in cell 7, it has a decent chance of winning either the game [7 5 3 1 9 …] or [7 5 3 9 1 …]. Such a victory will establish a new prototypical game, comparable in status to [1 9 3]; from it a second descent network can be derived. This does NOT argue that such a second descent network will actually be developed in all its fullness (though it may). What it DOES show is one plausible scenario for how the incredibly particular descriptions of GACs can break away from one element of their fixity — commitment to opening in cell 1. The alternative description has served as a bridge to permit developing a second set of equally particular goals and plans.

Emergent Abstraction

If alternative representations can serve as explanation for surprises developed through play, and if they can serve as a bridge to break away from the rigid formulation of the GAC representation, it is not impossible to believe they may begin to provide dynamic guidance as well — exactly of the sort found useful by adults in their play. When this occurs, the alternative description, useful initially as an explanation for the more particular system of primary experiences, will become the dominant system for play. Then the symmetry implicit in the body-centric imagery will become a salient characteristic of the player’s thinking about tictactoe as the highly specific formulations of early experience recede into the background. Abstraction has taken place — because the descriptions of the body mode are implicitly less absolute in respect of space than are those supposed to operate with the GAC representation. But the abstraction is not by features, nor is it by the articulate analysis of reflexive abstraction, as described by Bourbaki. This is an emergent abstraction via REDESCRIPTION, a new kind of functional abstraction. Redescriptive abstraction is a primary example of the coadaptive development of cognitive structures. As a kind of functional abstraction which does not yet require reflexive analysis of actions taken within the same mode of representation, but merely the interpretation of actions in one mode in terms of possible, familiar actions in another mode [17], it need bear less of an inferential burden than would the more analytic reflexive abstraction described by Bourbaki.

Redescriptive Abstraction and Analogy
One might say that emergent abstraction via redescription is “merely analogy”. I propose an antithetical view: emergent abstraction explains why analogy is so natural and so important in human cognition. Redescriptive abstraction is a primary operation of the multi-modal mind; it is the way we must think to explain surprises to ourselves. We judge analogy and metaphor important because redescriptive abstraction is subsumed under those names.
Further, I speculate it is THE essential general developmental mechanism. This process can be the bootstrap for ego-centric cognitive development because it is accomplished without reference to moves or actions of the other agent of play.

Escaping from Egocentricity

“…The internalization of socially rooted and historically developed activities is the distinguishing feature of human psychology, the basis of the qualitative leap from animal to human psychology. As yet, the barest outline of this process is known….”
L. S. Vygotsky

If the higher psychological processes to which Vygotsky refers are characteristic of productive intelligence in all forms, the progressive development of self-control and the internalization of exterior agents and context are profound transformations which need to be understood in both natural and artificial intelligence. The general objective of this section is to describe how it is possible for an egocentric system to transcend its limited focus. The central idea is that the system will adapt to an environmental change because of an insistent purpose; it will do so by interpreting the actions of its antagonist in terms of its own possibilities of play. Two essential milestones on the path of intelligent behavior in interactive circumstances are first, simulation of the activity of an opponent, and second, the internalization of some control elements from the context of play.

In the human case, learning sometimes goes forward by homely binding, an instruction by people or things in what this or that means or how it works. Another kind of learning, which I call “lonely discovery,” is the consequence of commitment to continuation of an interaction, despite the loss of the external partner. Such a desire, which can definitionally permit only vicarious satisfaction, is the motor of that internalization of “the world and the other” which is the quintessence of higher psychological processes [18]. We use the case study experiences in respect of these issues to guide the development of two examples/scenarios of how a machine can confront such challenges. We will consider how a system can develop through interaction in such a way that when the environment becomes impoverished, the system can begin to function more richly, and therefore become generally more capable. The particular problems through which I will approach these issues are the inception of multi-role play (one player as both protagonist and antagonist) and the inception of guarded (or mental) play. I do not want to impute to IT the motive of understanding the play of an opponent to whom it initially pays little attention. Therefore, we grant the system an initial purpose of continuing play even under such limitations as to amount to a crippling of the environment. From this initial purpose emerges another, that of the proper understanding of an antagonist’s game. A major side effect of the solution I propose to this problem is creativity, in the specific sense of enabling the discovery of strategies of play not known beforehand nor learned by another’s instruction. The ultimate achievement of such developmental mechanisms as I propose here is to learn new strategies through analysis of games played by others, i.e. learning by observation.

SCENARIO 2: The Beginning of Multi-role play:
The Human Case
After many sessions of her playing tictactoe with me, in one experiment I asked the subject to play against her brother so that I might better observe her play with another person. She surprised her brother by her significant progress at play (she beat him honestly and knew she would do so in specific games). When I was called away to answer a knock at the door, I asked the children not to play any more games together until I returned. Coming back, I found the game below on the chalk board. When I asked if she had let herself win, she explained that she had been ‘making smart moves for me and the other guy.’

 A | 3 | C 
   | 1 | D 
 2 |   | B 


My formulation of this episode is as follows. She wanted to continue playing tictactoe. Her ability to do so was hindered by my specific prohibition: the normal environment was crippled. She adapted her earlier developed skills, partitioning them so that strategic play remained her prerogative while tactical play was assigned to her newly effective internal antagonist, ‘the other guy’. Could such a process be made effective in a machine?

The Form of the Solution for Machines
If the deprivation of interaction in the social milieu is one motor of human cognitive development, within the world of machine intelligence the corresponding circumstance would be the crippling of some function of other programmed modules of the system. The desired consequence of this crippling should be one where continuing in the well-worn path is an easily detectable, losing manoeuvre, thus necessitating changes in the functions of existing structures. Further, there should exist some alternative which is the marginally different application of an already existing structure capable of providing a functional solution to the problem which the “social” vagary creates. This paper offers two examples of such challenges and possible outcomes in the reorganization of this system of game simulation functions.

The deprivation of interaction leads to the introjection of “the other” within the “self” through the assignment of one of the alternative functions (strategic play) to the “ego” (IT) and another (tactical play) to the “alter-ego” (let’s call this agent REO-sim). What forces this reassignment is crippling the environment so that a decision needs to be taken on an issue which was immanent in but transactionally insignificant in the interactive context [19]. What makes this introjection possible is the successful application of established structures for a new function. Obviously, not every attempt to apply an old structure for a new function would be successful [20]; consequently, the character of structures which permits such successful re-application, their functional lability, needs to be established through some sort of experience, either of actual or imaginative interaction. In a system within which such imaginative experience is not yet possible, actual interchanges are needed.

The question raised by the simulations was how extensive the changes required would be to permit the system of programs to mimic the kind of behavior Lawler’s subject showed in this incident. For IT, the situation equivalent to having no opponent is this: whenever IT returns its latest move, IT receives control again with no move made by REO. There are three possible responses to this situation:

  1. IT could make its next planned move (without even noticing something novel had happened); the consequence of continuing to play with no responses from the antagonist is a sort of rehearsal of IT’s plan.
  2. IT could respond making moves for the antagonist but do so in an imperfectly discriminated manner (for example, using the moves of its own plan for both its own moves and those of REO-sim); when IT attempts to assign moves without making a strategy/tactic division of moves, the play appears random, but is best characterized as confused tactical play by both (that is, IT’s first move for REO-sim blocks IT’s own plan, after which both agents play tactically).
  3. IT could partition its own capabilities so that IT alone made strategic moves and REO-sim made tactical moves; when IT’s own internal structure is respected in the allocation of roles, play proceeds in the normal fashion. This is the articulation of complementary roles.

I have programmed IT to function in each of these three different manners under control of global switches. The question remains of how one should view the transition from the states of rehearsing, to confusion, to articulated multi-role play [21].
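
A minimal sketch of that switch-controlled dispatch follows, with illustrative names throughout (the original used global switches in Lisp); the tactical helper here is a stub that simply takes the first free cell:

    # Sketch of IT's three possible responses on regaining control with no
    # move made by the absent opponent. MODE stands in for the global switch.
    MODE = "articulated"   # "rehearse" | "confused" | "articulated"

    def strategic_move(plan, board):
        """Next unplayed step of IT's own plan."""
        return next(c for c in plan if c not in board)

    def tactical_move(board):
        """A merely tactical choice: here, a stub taking the first free cell."""
        return next(c for c in range(1, 10) if c not in board)

    def respond_out_of_turn(plan, board):
        if MODE == "rehearse":
            # 1. Continue IT's own plan as if nothing novel had happened.
            return ("IT", strategic_move(plan, board))
        if MODE == "confused":
            # 2. Move for the antagonist, but from IT's own plan, so that
            #    REO-sim's first move blocks IT's plan.
            return ("REO-sim", strategic_move(plan, board))
        # 3. Articulated roles: the internal antagonist plays only tactically.
        return ("REO-sim", tactical_move(board))

    print(respond_out_of_turn([1, 9, 3], {1}))   # -> ('REO-sim', 2)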

For the transition from one mode of response to another I offer no general, theoretical justification. There are reasons. Very little change was required to the original code because of the modular separation of strategic and tactical play. This is an important observation if and only if the modularity of the code for tactical and strategic play is justified by psychological data or epistemological argument.

The assumption of the modularity of cognitive structures and IT’s pervasive use of modularity is based on the empirical witness of Lawler’s case study. If the human mind is organized as that study suggests, then it should be easy for the kinds of developments described here to occur. Further, if the transition is representable by no more than the insertion of a control element, choosing between formerly competing or serialized subfunctions; and if the transition is driven by events in the environment upsetting ongoing processes which “want” to continue, the only “theory” possible is one about the characteristics of structure which permit this adaptivity. My structural assertion in this context is that the coadaptation of disparate cognitive structures is the key element of mind enabling the “internalization” of external agents and objects [22].

SCENARIO 3: The Beginning of Guarded play:
The Human Case
When she was already quite adept at playing tictactoe against an internalized opponent, Lawler’s subject was confronted with a new challenge: given the first two moves of a game, to tell whether she could certainly win, might possibly win, or would certainly lose. When she was refused her request for materials on which to represent possible games graphically, she proceeded to play out mentally the sequences of moves which led to determinate games. This is the quintessence of mental play.

In this example, as in the former, constraints upon interaction with the external world — in a framework committed to continuing the activity — led to the application of existing structures to the satisfaction of new ends [23]; the ends are new in the specific sense that knowledge and know-how developed for playing games against an opponent, worked out with graphical tokens, were applied to answer speculative questions about the possible outcomes of games worked out in the mind. This functional lability of structure is the key to adaptive behavior and thus to learning.

The Machine Case
In the inception of multi-role play, the prohibition of the antagonist role was the stimulus for the reorganization of functioning knowledge. In the machine case, this was achieved through a “crippling” of the output function of the opponent, REO. The next extension asks what function should be crippled to impel the development of guarded play.

Tree generation within the module GEN is the primary function which creates all the possibilities of play; thus it is the candidate program from whose internalization mental play might emerge. GEN contains a mixture of interrelated LOOP macros and recursive invocations. Note, however, that these programs were created as experimental tools, as mechanisms to explore the learning of IT through experiencing particular games. Consequently, the mechanisms have no grounded epistemic status; their functions need be replicated but their mechanisms may be replaced freely by some alternative if that seems more natural.

Because IT does not contain any such tree-generation modules, rebuilding the GEN module structures within IT would require creating such structure from nothing. Because subfunction invocation with arguments is the primary mechanism within IT for transferring control, an invocation oriented solution is the preferred one: this is doing something already given within the module.
The essential insight IT needs for an invocation solution is that if it can be called with an argument by GEN, it can call itself successively with a series of arguments drawn from a list [24].

The remaining issue is how the outcomes of these generated executions of games are handled; that is, the record keeping function is affected as well as the tree generation function. Two alternatives appear to be first, the (unjustified) rebuilding within IT of the list-manipulation aspects of record keeping, or second, the acceptance of an imperfect result in the following specific sense. If the aim of the game is to win, the desired outcome of play is a specific string of cell numbers which comprise a valid win for the first player. If such a single game is the result of the recursive internalization of the GEN module’s tree-generation function, the result is an impoverished one (as compared to a list of all possible outcomes) but nonetheless one that will serve an everyday function of winning a game [25].
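
A hedged sketch of this invocation-oriented internalization, in Python rather than the original Lisp: where GEN’s tree generator returned lists of games, the function below re-invokes itself once per candidate move and, following the impoverished-but-useful alternative just described, returns only the first valid won game for the first player (or None if there is none). The helper names are illustrative:

    # Sketch of guarded (mental) play by recursive self-invocation.
    LINES = [{1,2,3},{4,5,6},{7,8,9},{1,4,7},{2,5,8},{3,6,9},{1,5,9},{3,5,7}]

    def winner(marks):
        return any(line <= set(marks) for line in LINES)

    def mental_play(game):
        """game: moves so far, first player moving first. Return a won game or None."""
        ego, alter = game[0::2], game[1::2]
        if winner(ego):
            return game               # return only the won game, as with a throw
        if winner(alter) or len(game) == 9:
            return None
        for cell in (c for c in range(1, 10) if c not in game):
            result = mental_play(game + [cell])
            if result:
                return result
        return None

    # Given the first two moves, might the first player possibly win?
    print(mental_play([1, 5]))        # a string of cells ending in a win, or None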

Conclusions 1
The immediate cause for internalizing some exterior function is a constriction of the surrounding context. Given the objective of continuing activity despite this constriction, a person or a programming module can proceed by simulating the crippled functions of the environment with components of its own function. The functional lability of existing structures in response to a changed external circumstance is the key to internalization of exterior agents and context elements. In the very simple cases presented here, a machine learning system can internalize portions of the outer world as people do. There is no guarantee that any structure will work when applied in some non-intended function. On the other hand, setting up systems of programs to employ this technique in coping with an uncomprehended environment is surely worth considering for any mechanized learning system.

The test of the value of such a capability is creativity. If learning from one’s own experience is a criterion of intelligence, is it not smarter to learn from another’s experience? Such a capability is an emergent, with a few simple programming changes, of the facilities for multi-role and guarded play.

Learning Without Instruction
With the developments sketched so far, all the capabilities needed for learning from another by observation are in place. The most dramatic evidence for the accuracy of this claim in the human case comes from Lawler’s subject’s invention of a new strategy of play based upon her later analysis of a game played against herself at an earlier time [26]. In summary, reviewing a game played only to the point where she believed a draw would follow, Lawler’s subject recognized that she had abandoned the game while a single further move would have led to her winning. She then worked through the moves she had made, both as protagonist and antagonist, and convinced herself that she had created a new strategy with which to win on condition that her opponent made any one of four responses to her opening corner selection. The kinds of abilities employed in her analysis were those of multi-role play, guarded play, and specific knowledge of three sorts: of the particular game, of her own habits (starting in cell 1), and of procedures of play (she knew SHE would have made forced moves at need) [27].

SCENARIO 4: Analysis through synthesis
The Machine Case
What then need be added for IT to perform a similar feat of creative analysis? When presented with an externally generated game, nothing would be easier for IT to analyze IF the order of moves were preserved. HERE the challenge is different: the set of moves to be made is prescribed, but the order is to be determined. Lawler’s subject’s game is shown below; the tree of possible games follows after. When a string is forced into a forbidden move (one not part of the presented pattern), the branch is pruned [28].

 X | O | O 
 X | X |   
 X |   | O 


The tree of possible games



Given as prerequisite a system that is capable of multi-role play and guarded play (the latter implies the former), the following changes need be made to existing code:

FUNCTIONS IMPLEMENTED:

    Limiting Proposed Moves to those used:
    – by intersecting possible moves with “visible” markers.
    Pruning Strings requiring forbidden moves:
    – this requirement is satisfied when a failure occurs, not by look ahead.
    Exit from model based learning to example based learning:
    – this is a hook to an additional learning routine.
    Example Based Learning:
    – a routine fixing actual ego moves of the reconstructed game as a plan.
    Quitting when done:
    – a test for exhaustion of the “visible” set of tokens.

If these seem like extensive changes, note that two of the five are control transfers on a single condition (EXIT and QUITTING), one is a control transfer to toplevel on a set membership condition (PRUNING), one is a set intersection of normally available possible moves with the given set of actual moves (LIMITING). The final change (EXAMPLE BASED LEARNING) extracts the ego-owned moves from the selected game in order (a subfunction common to all playing routines in the programs) and installs them as a list with other known plans. The basic mechanism is no more than is required to learn by instruction when shown an example — but now the instructor is no longer needed.
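
A hedged sketch of how these five functions might combine, reusing the style of the mental_play sketch above; the visible-marker sets are read off Lawler’s subject’s game as shown earlier, and the helper names are illustrative. In particular, pruning is collapsed here into plain backtracking on failure, omitting the full system’s tactical forcing rules:

    # Sketch of analysis through synthesis: reconstruct a move order for a
    # finished game given only its visible markers, then keep the ego moves.
    LINES = [{1,2,3},{4,5,6},{7,8,9},{1,4,7},{2,5,8},{3,6,9},{1,5,9},{3,5,7}]

    def winner(marks):
        return any(line <= set(marks) for line in LINES)

    def reconstruct(game, ego_cells, alter_cells):
        if winner(game[0::2]):
            return game                            # EXIT to example-based learning
        if len(game) == len(ego_cells) + len(alter_cells):
            return None                            # QUITTING: visible set exhausted
        visible = ego_cells if len(game) % 2 == 0 else alter_cells
        for cell in sorted(visible - set(game)):   # LIMITING moves to those used
            result = reconstruct(game + [cell], ego_cells, alter_cells)
            if result:
                return result
        return None                                # PRUNING on failure, no look ahead

    # The subject's game above: X holds {1, 4, 5, 7}, O holds {2, 3, 9}.
    game = reconstruct([], {1, 4, 5, 7}, {2, 3, 9})
    plan = game[0::2] if game else None            # EXAMPLE-BASED LEARNING: the new plan
    print(game, plan)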

As a player becomes more adventurous with guarded play — willing to start in the center and various corner cells, willing to move to side cells as well — the number of winnable games possible becomes quite large. This explosion of possible won games, the fact that there is too much to remember and all the games are superficially similar, introduces the need to impose a more abstract order on the experience. Answering that need demands feature based abstraction and conceptualization, the focus of work still ongoing.

Conclusions 2

Coadaptive development of cognitive structures is central to human learning. Human studies can provide valuable guidance for machine learning work to the extent that one both can analyze mature performance and can uncover anterior structures whose reorganization permits the emergence of that mature performance. Specifically, Lawler’s case study permitted a characterization of the cognitive state of a young child (one quite congenial with the literature on young children) and more, a trace of the particular child’s path of development to relatively more mature performances. This developmental path provided significant guidance for constructing programs that model the learning behavior of the individual child. More generally, the constructed model illuminates in a computational form the elements and processes that enter into coadaptive development. The programs pass from learning by prototype modification to learning from experience by analysis without instruction.

Going beyond earlier conclusions in the human study, the discovery remarked here is the role of the multi-modal mind in creating the potential for abstraction emerging from redescription. This is an example of the functionality of coadaptation in cognitive development. The conjecture is advanced that the multi-modal structure is central to understanding the possibility of human cognitive development. Further, emerging abstraction through redescription can be appreciated as a primitive form of functional abstraction, of which reflexive abstraction is a more mature form. Redescriptive abstraction helps explain the importance of analogy and metaphor in human thinking and learning.

In this research, we have focussed only on the interaction between visual and kinesthetic systems. The other modes of mind, related to the linguistic system and the touch-salient manipulative system, add significant further dimensions of possible complexity to this non-uniformitarian model of mind. Such models, although basically simple, are complex enough to permit interesting development through plausible, internal interactions; that is, they permit the possibility of learning through thinking — a desirable outcome for any view of human minds, and one that may prove of some value with machines as well.

References
Caple, Balda, and Willis. Work reported in “How did Vertebrates take to the air?” by Roger Lewin, Science, July 1, 1983. See also American Naturalist, 1983.
Fang, J. Towards a Philosophy of Modern Mathematics. Hauppauge, New York: Paideia series in modern mathematics, vol.1, 1970.
Fann, K. T. Peirce’s Theory of Abduction. The Hague: Martinus Nijhoff, 1970.
Jacob, F. “Evolution and Tinkering” in Science, June 10, 1977, and The Possible and the Actual. New York, Pantheon Books, 1982.
Lawler, R. Computer Experience and Cognitive Development. Chichester, England, and New York: Ellis Horwood, Ltd. and John Wiley Inc., 1985.
Lawler, R. and Selfridge, O. “Learning Concrete Strategies through Interaction”. Proceedings of the Cognitive Science Society Annual Conference, 1985.
Piaget, J. The Child’s Conception of Number. New York: Norton and Co., 1952.
Piaget, J. Biology and Knowledge. Chicago: University of Chicago Press, 1971.
Piaget, J. The Language and Thought of the Child. New York: New American Library.
Peirce, C.S. “The Fixation of Belief” in Chance, Love and Logic. M. Cohen, ed. New York: George Braziller, Inc., 1956.
Peirce, C.S. “Deduction, Induction, and Hypothesis” in Chance, Love and Logic.
Satinoff, E. “Neural Organization and the Evolution of Thermal Regulation in Mammals”, Science, July 7, 1978.
Selfridge, M.G.R. and Selfridge, O.G. “How Children Learn to Count: a computer model”, 1985.
Vygotsky, L.S. Mind in Society. Eds. Michael Cole, Vera John-Steiner, Sylvia Scribner, and Ellen Souberman. Cambridge, Mass: Harvard University, 1978.

Acknowledgements
This paper began in a collaboration with Oliver Selfridge to extend work in “How Children Learn to Count” (Selfridge and Selfridge) with ideas of CECD. With Oliver’s genial prodding, I have carried forward that effort to confront the issue of abstraction from highly particular descriptions. Special thanks are due to Sheldon White, who first pointed out the similarity of my conclusions to those of Vygotsky. He has repeatedly emphasized the importance of ideas about the internalization of external processes and urged me to develop them.

Publication notes:

  • Written in 1985.
  • Published in Proceedings of the European Conference on Artificial Intelligence, 1986.
  • Re-published in Advances in Artificial Intelligence, DuBoulay, Steels, and Hogg, eds. (Elsevier), 1987.

Text notes:

  1. See Publication notes above.
  2. This is a technical use of the term, following Satinoff in “Neural Organization and the Evolution of Thermal Regulation in Mammals”, 1978, and Jacob in “Evolution and Tinkering”, 1977/1982. Satinoff notes “…[M]ost, if not all, thermoregulatory reflexes evolved out of systems that were originally used for other purposes. To give just two examples of this, Cowles has argued that the peripheral vasomotor system, the basic system for changing blood flow at the surface, first served as a supplemental respiratory organ in amphibia. It then became a heat collector and disperser in reptiles (regulating the flow of heat from outside the body to inside) and finally a temperature regulatory mechanism for endotherms (regulating heat flow from inside the body to outside). Heath has argued that the change in posture from the sprawling stance of a reptile to the limb-supported posture of the therapsids, the mammal-like reptiles, and subsequently the mammals, and the consequent changes in muscular organization and muscle tension provided the basis for a high internal heat production. This illustrates the principle of evolutionary coadaptation: a mechanism evolved for one purpose has as a side benefit an adaptive value in an entirely different system.” For a profound and more general discussion of related views, see Jacob.
  3. The case material used here is presented in detail in Chapter 4 of Computer Experience and Cognitive Development (CECD).
  4. See, for example, The Child’s Conception of Number, Piaget 1952.
  5. Such an argument is detailed in Chapter 2 of Computer Experience and Cognitive Development, (CECD). The subject of that detailed case study will be referred to as “Lawler’s subject”.
  6. The detailed background for the subject and the detail of this incident are presented at pp. 120-122 in CECD.
  7. Piaget introduced this emphasis in the characterization of children’s early thought in his early book The Language and Thought of the Child.
  8. The general commitment to egocentric knowledge representation has psychological justification in this specific case. Lawler’s subject suffered the defeat above trying to achieve the victory of GAC 1 (the only strategy she knew), not attending to her opponent’s move nor anticipating any threat to her intended fork.
  9. Piaget contrasts reflexive abstraction with classificatory or Aristotelian abstraction (p.320 in Biology and Knowledge), demeaning the latter somewhat by referring to it as “simple”.
  10. This view of mind is presented and applied in “Cognitive Organization”, Chapter 5 of CECD. A more extensive discussion of microviews appears in Chapter 7.
  11. The GAC description is cast in terms of an external thing seen by the person referring to it, with no hint of an imaginary homunculus in view. Further, the absolute reference assigning numbers to specific cells preserves a top-down, left to right organization. Notice however, that even if a specific person’s internal representation were different — based perhaps on a manipulative mode of thought and representation — the essential points of following arguments remain sound.
  12. Peirce’s position (presented lucidly in “The Fixation of Belief” but ubiquitous in his writing) was the primary observation leading me to focus on this theme. He uses the term doubt because his discussion is cast in terms of belief; mine, cast in terms of goals, finds its equivalent expression as surprise. Doubts require evidence for elimination (but see Peirce on this); surprises require explanations. Surprise is accessible to mechanical minds as the divergence between expectation and outcome under a specific framework of interpretation.
  13. The following descriptions are rather like imputing thought experiments to subjects, but such with a decidedly personal and everyday content; the “dramatic style” seems natural enough for people. If it seems unnatural for machines, the reason is that we do not yet provide our machines with so rich and powerfully various a collection of interacting descriptions as humans are fortunate enough to inherit from the long history of life’s evolution.
  14. Referring only to the set of markers here, we need not distinguish between the forks achieved by various plans such as [1 9 3] or [3 1 9].
  15. Under IT’s learning mechanisms, the plan [1 9 3] will generate the goal {137} via the game [1974325] or [1932745]. These goals are essentially related. REO’s move directly blocking IT’s plan leads directly to the other determinate games.
  16. IT does not look ahead, therefore IT doesn’t notice that the use of cell 1 is relevant to plan [7 3 1]. Nor does moving second inhibit the attempt to escape the frustration of cell 1 being taken because IT does not understand opening advantage; but then, neither did Lawler’s subject at age six years.
  17. The point here is that the process is more like Peirce’s abduction than any inductive process of learning. See “Deduction, Induction, and Hypothesis” for Peirce’s introduction to this distinction or K.T.Fann’s “Peirce’s Theory of Abduction” for an analysis of Peirce’s developing ideas on abduction.
  18. The episode dealt with here is neither singular nor domain-specific in character. The original observations on which this view is based were about the behavior of a newly verbal infant. See CECD pp.113-115 in Chpt. 4. This issue became prominent for me through its advocacy by Minsky and through its manifest importance in empirical observations on the learning of my children. The ideas can be cast in a Freudian framework for relative simplicity of explanation. The essential idea I advance for developing self-control can be read into Freud’s description of the tripartite mind — Id, Ego, and Super-ego — which depends for development, first, on the introjection of authority figures by the child. After this introjection of an ‘other’, which we can take to be an adoption of goals of the Super-ego not compatible with existing goals of the Id, the Ego, by mediating interior conflicts between the Id and Super-ego, can develop control over both through virtual experiences; this permits the system of the self to become somewhat better able to deal with the disparity between the desires of the Id and the constraints of the external world.
  19. Chapter 4 of CECD argues that in the human case “whose-turn?” at play was one issue upon which judgments were made at each move to prohibit or permit the effecting of intentions in behavior. Lawler’s subject knew what she wanted to do, and when she knew also that the turn was not hers she suppressed her next intended move until it was her turn. Further, one of the ways the child cheated when she feared her plan might be frustrated was to make multiple moves in a single turn.
  20. Because my simulations, in fact, share tactical code, the internalization of REO as REO-sim is perfect. Such need not have been the case. REO could have been any arbitrarily baroque system of decisions; IT’s simulation of such an alternative REO would still be the same as described here. Allocating a part of itself to represent the other is all that IT can do. When it is successful, however, this functional re-application of existing structure is very powerful.
  21. The path is straightforward. Here is how the program works. IT can tell when it receives control out of turn. The manifest failure of rehearsal need only require that IT do something different from the next step of its own plan, which could be nothing else but making some move for the non-existent antagonist. The manifest failure of IT’s own plan application for both roles requires refined discrimination; again, a single decision to route control to either one or the other of the strategic and tactical functions based on turn taking is all that’s required for the more precise articulation of roles.
  22. The animism of the young child is not at all bizarre if his only means of understanding “the other” is through self knowledge. Like Descartes, he knows he has a mind because he thinks; he believes in his own past because of memory; and he imputes will to things because he feels the meaning of wanting.
  23. Here, guarded play began because of my experimental intervention. However, to the extent that children believe keeping their plans secret will help them win (they surely learn that by the age of seven), the development of mental play with the initial purpose of guarding plans is to be expected in general.
  24. This is done in the simplest fashion, by tail recursion of IT with the cdr of the candidate moves-list until it is empty. The branching condition for entry into IT’s handler is a data anomaly: a list of atoms is expected; when the previous game move is determined to be a list of atoms itself, “something different” must be done. If IT’s handler for this condition takes the first element of the end-list of the game and invokes itself with the game made of the prior state and the first member of that list it received, calling itself with the residue of that list will either route a path of execution in a second instantiation through normal IT processing or through the same condition handler, thus leading to further recursions and instantiations of IT. The lists created by GEN’s tree generator will be replaced by the recursively generated structure of IT’s multiple instantiations. (I do not claim people do this naturally.)
  25. This is implemented by choosing to return only a won game to the primary instantiation of IT with a throw. If only nil is returned, then no game can be won from the given initial moves. The objective implicit in GEN to fill the space of possible games is the experimenter’s objective; the objective of program IT is to return the next move for any game presented to it.
  26. The detail of this story and its analysis are presented in pp. 139-141 of CECD.
  27. Lawler’s subject’s discovery or invention of a new strategy is obviously a creative application of her knowledge, but is it appropriate to claim that it represents learning from another? The sense in which the answer is “yes” to this question is the following: Lawler’s subject used all the relevant knowledge she had. If she encountered a game by some other person, she would have been incapable of interpreting it by any other means than this very analysis. The claim then is that this is what people do when they analyze the thought of another, that this is all they can do.
  28. The process depends upon forcing as an operation of tactical play but it does not require a concept of forcing. Such a concept could however come as an explanation from mental trials such as this. Forcing is important initially less because it leads to a win than because it is easier to think about a string than a tree.