THE ANALOG/DIGITAL DISTINCTION IN THE PHILOSOPHY OF MIND


V. Languages of Description

The issue is within what class of systems should a description of intelligent systems be sought. (Newell, 1983, 198)

A digital computer can be described as a physical system and as a logical system. Two modes of discourse are involved: the modes of discourse we have developed to speak about the behaviors of dynamical systems and the modes of discourse we have developed to speak about operations in formal systems. The computer metaphor of intelligent function brings with it the possibility of both kinds of description. Classical cognitivism chooses the languages of programs with their rules and symbols. Connectionism chooses dynamical description. There are two kinds of questions we can ask about these choices. One is this: does it matter what class of system we use to describe intelligent behavior? The other is this: does it matter what class of description we say the intelligent system itself is using? These two questions are obviously quite different but they are not always discriminated in polemical practice.

Our answer to the first question can hinge on several kinds of motivation. We may say it matters for methodological reasons - one class of system has greater explanatory adequacy to the data, stimulates more interesting research. Or we may say it matters for reasons of disciplinary alliance - it is always pleasant if research funds and ideas do not migrate away from the formalisms in which we are already competent.

What are the alliances and competencies associated with both sorts of descriptive frame? Logical functionalism has historical links with the movements to axiomatize mathematics and logic: its style and explanatory kinds are those of set theory, formal language theory, predicate calculus. Programming languages like LISP, which are procedural languages, are still thought of as strings of conditionals naming conditions and operations. Connectionists and connectionist neuroscientists are looking for explanatory possibilities in many language systems - electrical engineering, general systems theory, nonlinear systems theory, thermodynamics, high-dimensional geometry, topology, Fourier analysis, holographic theory. There is an evident desire to recast the terms of computational psychology so they will be more plausibly relevant to biological and phenomenological conversations, and to do it without losing the possibilities of modeling precision offered by mathematical languages.

We have seen some of the ways mathematical description has been useful to connectionists and neuroscientists. When Pribram says "It is my conviction that it is only through the use of these formalisms that the psychological and neurological levels of inquiry regarding perception can become related" (1991, xvi), what he has in mind may be something like Marr's sense that figural perception - a psychological-level task - is constrained by optical facts - facts for which we have a developed mathematical theory. If we manage a mathematical theory of neural function, we may be able to link our theoretical levels.

Marr, Pribram, and connectionists like Paul Churchland tend to speak as if the nervous system is 'doing math' - computing a difference-of-Gaussians, arriving at Fourier coefficients, performing matrix multiplication. Are they implying that brains are using equations, and is this akin to Pylyshyn's saying brains use sentences? Not necessarily.

There is no more contradiction between a functional description of an ensemble of cortical cells as performing a spatial frequency analysis and their having receptive fields with certain physiological properties than there is between a functional description of some electronic component as being a multiplier and its being made up of transistors and resistors wired in a certain fashion. The one level functionally describes the process, the other states the mechanism. (DeValois and DeValois, 1988, 288; cited in Pribram, 1991, 269)

"Functionally describes the process" seems to me to be the right way to say it: we have a physical process; we are describing it; and our description is given in the mathematical form of a function. If, instead, we were talking about 'implementing' or 'instantiating' a function, we would inherit some of the ambiguity attendant on terms that do not discriminate between a relation between two linguistic terms and a relation between a thing and a term. 'Implementation', for instance, covers many sorts of situation. We can implement an idea, a design, an algorithm, an equation, a language, a program, or a computer type; and we can implement it into a more detailed design, a class of physical devices, a methodology, a programming language, an algorithm, an equation. We 'instantiate' types into tokens, but we do not distinguish between the physical token and its linguistic function. So, if we were to talk about cortical cells 'implementing' or 'instantiating' equation E, we could mean either that E is a good mathematical description of their physical behavior, or that they physically embody the syntactic elements of equation E, the way bistable switches in a digital computer can be seen as physically embodying O's and l's.

There is a further difficulty with 'implementation'. The term is at home in hardware and software design contexts, and there it is used when we have a design allowing more than one alternative in how a function is to be performed. We implement a multiplier either by successive additions and shifts, or by table look-up, for instance. One of the inevitable implications of the term is top-down designation: we implement one programming language into a lower-level one, or we implement a design specification into a hardware design. We do not have an equivalent term for the relation of lower- to higher-level descriptions where computational systems organize themselves from the bottom up. 'Implementation' won't do, because it imports a sense of top-down organization which may additionally be seen as the syntactic organization of the formalism expressing that organization.
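The multiplier example lends itself to a short sketch. Assuming nothing beyond ordinary Python, here are the two implementations just mentioned; the function implemented is the same, and only the 'how' differs:

```python
def mult_shift_add(a, b):
    """Multiply non-negative integers by successive shifts and additions."""
    result = 0
    while b:
        if b & 1:            # low bit of b set: add the shifted a
            result += a
        a <<= 1              # shift
        b >>= 1
    return result

# The alternative: table look-up (here for operands 0-15).
TABLE = {(a, b): a * b for a in range(16) for b in range(16)}

def mult_lookup(a, b):
    return TABLE[(a, b)]

# Same function, two implementations.
assert mult_shift_add(7, 9) == mult_lookup(7, 9) == 63
```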

If we hold on to our sense of the way a mathematical description "functionally describes a process", we can look again at theories that describe intelligent behavior in terms of logical systems. Pylyshyn and theorists like him have drawn on Chomskian notions for their sense of intelligence as rule-using. Boden makes the point that Chomsky himself is not making their sort of claim.

Chomsky's formal rules were not programs, nor were they intended as specifications of actual psychological processes. Rather they were abstract descriptions of structural relationships. They were 'generative' in the timeless mathematical sense, whereby a formula is said to generate a series of numbers, not in the sense of being descriptions of a temporal process of generation. Similarly, his 'transformations' were abstract mappings from one structure to another as a square-root function transforms 9 into 3, not actual psychological changes or mental events. Likewise, his 'finite-state and non-finite machines' were mathematical definitions (as are Turing machines), not descriptions of any actual manufactured systems that might conform to those definitions. (Boden, 1988, 4)

"Abstract descriptions of structural relationships" in observed behavior are like descriptions of a neural cell ensemble as computing a difference-of-Gaussians. Both are functional descriptions of a properly functionalist kind:

The programmer attempts a general proof that results of this class can be computed by computational systems of this form, given certain specific constraints (which may apply to naturally evolved psychological systems). Indeed, there may not even be a program, but only an abstract analysis of the information-processing task in question. Such theorists agree with Chomsky in stressing the what of computation, rather than the how. Accordingly, they may use the term 'computational' to refer not (as is more usual) to computational processes, but to the abstract analysis of the task facing the psychological system - irrespective of how it is actually performed. (Boden, 1988, 7)

If we take logical functionalists as speaking of a what rather than a how of intelligence, then we lose the need to object to rules and symbols. A rule is just what we call a processing regularity if we are speaking the language of logic. A symbol, if we are speaking the language of logic, is just a causally-significant processing nexus. Thought of this way, we have no trouble giving connectionist computation a logical description. Symbols will be distributed excitation patterns. Rules will be information processing interdependencies deriving from connective topology and weighting. Algorithms will be descriptions of the progressive self-organization of a network of computational units. Computation will be non-sequential cooperative equilibrium-seeking alterations of patterns of activity.
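A toy illustration may help, with the obvious caveat that it is only a sketch: the threshold unit below contains nothing answering to a stored rule-string, yet its processing regularity, deriving wholly from its weights and bias, is perfectly well described in the language of logic as the rule for AND.

```python
import numpy as np

# A two-input threshold unit whose weights happen to realize logical AND.
weights = np.array([1.0, 1.0])
bias = -1.5

def unit(inputs):
    # fires just in case the weighted sum exceeds the threshold
    return int(np.dot(weights, inputs) + bias > 0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", unit(np.array([a, b])))
# The printed truth table is that of AND: a logical description of a
# regularity the unit's physics produces directly, with no rule stored.
```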

Two characteristics of connectionist computation help us make the transition to logical description that does not imply the use of digital-style symbols. One is the lack of storage in connectionist machines. Digital machines store copies of digital symbols in storage locations; but connectionist representation is re-evoked, not copied, stored, shunted, or retrieved. The second is a view of data-intake that is not limited to one superficial array. A cascade through layers of a net can be seen as continuing to extract higher-level input features all the way along its progress. This gives us Gibson's sense of the richness of environmental information - a richness it has in conjunction with the complexity of the structure that responds to it. If we are not positing impoverished input, then we do not need superficial transducers supplying elementary symbols from which a description will be deduced.
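The point about storage can be illustrated with a Hopfield-style sketch (a deliberately tiny example, not a claim about any particular connectionist model). Recall here is relaxation into a pattern - re-evocation - not retrieval of a copy from a storage location:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two patterns laid down in Hebbian weights. No copy of either pattern
# sits in a storage location; recall re-evokes a pattern by relaxation.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

state = patterns[0].copy()
state[:2] = -state[:2]                    # a degraded cue
for _ in range(5):                        # asynchronous relaxation
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, patterns[0]))  # True: the pattern is re-evoked
```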

Connectionists have been more nervous than they need to be about the possibility that connectionist processing might be describable in logical language. They are wary of the likelihood that, if connectionist computation can be described in a logical language, classical cognitivists who equate rule-describable systems with rule-using systems will describe connectionist processes as implementing symbols and procedures of the sort implemented in digital machines. As a result they have put quite a lot of effort into arguing that connectionist systems have no elements that can be supplied with a semantic interpretation, or that they have no rules, only "simultaneous soft constraints". Dreyfus provides an argument of the former kind:

Most nodes, however, cannot be interpreted semantically at all. A feature used in a symbolic representation is either present or not. In a net, however, although certain nodes are more active when a certain feature is present in the domain, the amount of activity not only varies with the presence or absence of this feature but is affected by the presence or absence of other features as well. (Dreyfus, 1988, 328)

And Tienson and Horgan of the latter kind:

Models capable of solving these problems will not even be describable by cognitive level rules. For if the system underlying, say, basketball skills could be described by such rules, then a classical rules and representations model of the same system could be produced in which these rules (or purely formal isomorphic ones) are explicitly represented as a stored program. (Tienson and Horgan, 198-, 104)

What is curious about these arguments is that they seem implicitly to accede to the strange assumption that if a system is rule-describable then it must be rule-using. If they try to defeat the classical cognitivists by demonstrating that connectionist computation is not rule-describable - or not simulable in some fashion on a digital computer, which amounts to the same thing - then they are also committing themselves to saying it can have no systematic description at all. And this would have to include mathematical description.

They may want to say, as Robert Rosen (1991) does, that our present forms of dynamical description are not adequate to the functional complexities of systems that not only organize themselves but build themselves. Rosen argues that both logical and Newtonian dynamical descriptions are fit only to model closed systems, systems in which static elements are pushed around by unchanging forces. Biological systems, though, are high-level open systems, less dependent on substantive or energy linkages with the world and more dependent on their own energy organization, which provides large stores of potential energy ready to be triggered by input that in energy terms is insignificant.

(There is a distinction in logical type, Rosen says, between mechanistic, energetic-logical, closed system explanation, and open-system explanation in terms of self-organization and self-construction. Biological organisms are more complex than mechanical systems and, although there can always be simple models of complex systems, the category of all functional models of organisms is larger than, and includes as a subcategory, the category of all models of mechanisms, including machines. Human cognition, as a biological function, will also be more complex than any sort of machine function. Rosen's position, which emerges from a radical reenvisioning of the epistemology of science, and which would see Newtonian physics subsumed within a more comprehensive physics of organisms, supplies an illuminating broadly-based objection to the computational hypothesis as it has been imagined up to now.)

I have hinted throughout this essay that 'analog' and 'digital' are not logically symmetrical, that digital processes are a subclass of analog processes. There is a sense in which this is not true. Digital computers make up one of three classes (the other two are analog computers and connection machines) of contemporary computational technologies: digital processes may be a subclass of analog processes, but digital technologies are not a subclass of analog technologies - as technologies they have stood as equal alternatives.

But there are other senses in which a hierarchical relation is plain. The elements of discrete mathematics are subsets of the elements of continuous mathematics. Linguistic behaviors of organisms are a subclass of intelligent behaviors. Rule-governed behaviors of computational machines are a subset of law-governed behaviors of computational machines. Logical systems are a subclass of dynamical systems. Cultural behaviors are a subclass of natural behaviors. Wilden makes the point in the following way:

In considering further the communicational and socioeconomic categories of relationship often obscured by the term 'opposition', one can discern a developmental and dialectical sequence of possibilities, beginning in (analog) DIFFERENCE, moving to digital DISTINCTION (which may or may not involve levels), and thence to the BILATERAL OPPOSITION in which each item is either actually of the same logical type as the other, or treated as if it were. (Wilden, 1980, 509)

What are the analog differences that make possible digital distinctions? How do they become digital distinctions? I have described codes - which are subclasses of languages - as being made possible by an output transduction of linguistic decisions over a depth of non-linguistic organization. In the connectionist picture, it is the partitioning of cognitive phase space - the significant organization of the net - that provides the semantic reservoir upon which digital/symbolic code elements are floated:

A specific image or belief is just an arbitrary projection or slice of a deeper set of data structures, and the collective coherence of such sample slices is a simple consequence of the manner in which the global information is stored at the deeper level. (P.M. Churchland, 1989, 109)
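One way to picture this partitioning, as a bare sketch and nothing more, is with a one-dimensional gradient system: the state space is continuous, but the flow partitions it into basins of attraction, and 'which basin' is the digital distinction floated on the underlying analog differences.

```python
# dx/dt = x - x**3 has stable attractors at +1 and -1 and an unstable
# point at 0. Continuous initial conditions sort themselves into two
# discrete outcomes: an analog difference becoming a digital distinction.
def settle(x, dt=0.01, steps=2000):
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

for x0 in (-0.9, -0.1, 0.05, 0.8):
    print(f"start {x0:+.2f} -> attractor {settle(x0):+.2f}")
```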

If 'digital' is, in general, a subclass of 'analog', how do we come to symmetrize what in fact is a hierarchical relation? Political and social motivations have something to do with it, but how does it escape notice? Wilden says dyadic contrasts which result in paradox or puzzlement do so because a third term remains unseen.

Dyadic oppositions of the same logical type are products of mediated - triadic - relations in which the third term commonly remains unrecognized. This is all the more true when the purported 'opposition' is between terms of distinct logical types. (Wilden, 1980, 510)

If this is true for the analog/digital distinction, what third term have we been neglecting? My guess is that we have been misdescribing transducers. It is as if the value of connectionist models is that they demonstrate a way to think about transduction as a passage from physical pattern to physical pattern. There can be cognitive behavior - we can be intelligent - not because transducers supply us with an exit from the physical, but because the physical itself, in virtue of its origin and structure, its complex function in complex creatures, organizes us into intelligence.

Conclusion

Computational psychologies which base themselves on some form of the computer metaphor of cognitive function have had access to two classes of explanatory system: a linguistic functionalism associated with digital computation and based on the modes of discourse we use to describe operations in formal systems, and various mathematics-based functionalisms originally associated with analog computation and based on the modes of discourse we have developed to describe the behaviors of dynamical systems.

I have argued that what I have called linguistic functionalism is compatible with two possible understandings of what we might mean by linguistic kinds as realized in brains. One is a construal which sees the physical states of the computer as realizing a function, just in the sense that their input-output transformations may be accurately described by means of that function. In a generous construal of this sort we may, if our explanatory entities are linguistic entities, identify causally significant computational substates with symbols, and processing regularities with rules. This is a construal that would see brain events as rule- and symbol-describable without carrying its functionalist metaphor into speculation about neural realization. A psychologist offering a computer simulation of some sequence of psychological causation would, on this construal, take the program as offering a linguistic description of a physical event and not a linguistic description of a linguistic event.

The other sort of understanding we might have of the relation of linguistic functionalism to brain function is what I could call a hard construal, which is based on a similar construal of code-processing in digital computers. This construal takes the brain to be realizing the formula as well as the function given in the program's model of psychological causation. In other words, the hard construal takes brains as rule-using as well as rule-describable; processing would depend on rules being present in the brain as program strings - as prescriptive inscriptions of some sort.

Analog computation is computation which clearly realizes functions, but which as clearly cannot be seen as doing so by realizing formulae, just because it lacks the syntactic requirements for notational systems. Because computational results are achieved without any involvement of the sorts of time-sampling, quantization or thresholding that allow digital computers to be seen as code-using systems, the existence of analog computers allows us to separate the notion of computation from the notion of code-using computation. Non-code-using computation is transparently computation achieved by means of the causal systematicities of the physical machine. Code is involved in analog processing, but in a way that is easily seen to be external to the operation of the machine itself - it is involved just in the sense that an analog computer set-up cannot be used either as a functional analogy or as a working analogy of some other physical system, except by means of a description which is common to both systems, which we provide, and which is usually expressed by means of some code.
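For instance, here is a simulation of an idealized analog integrator (component values hypothetical, and the sign inversion of a real op-amp stage ignored). The set-up computes an integral simply by obeying its physics; the differential equation is our description of what it does, supplied from outside the machine:

```python
import numpy as np

R, C = 1e3, 1e-3                     # hypothetical component values
dt, T = 1e-4, 0.25
t = np.arange(0.0, T, dt)
v_in = np.sin(2 * np.pi * t)         # input voltage waveform

# The physics: charge accumulating on the capacitor, step by step.
V = 0.0
for v in v_in:
    V += dt * v / (R * C)

# The mathematical description we supply from outside the machine.
described = (1 - np.cos(2 * np.pi * T)) / (2 * np.pi * R * C)
print(f"circuit: {V:.4f}   description: {described:.4f}")
```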

A computational system cannot be seen as a code-using system unless it provides discrete signals that may be identified with the discrete, nonfractionable, context-independent syntactic elements necessary to notational systems. But the existence of discrete signal elements also does not imply that a computational system must be a code-using system. As Lewis observes, there may be discretized systems that operate in ways which are not significantly different from the operation of analog computers - in both, the computational work may be done straightforwardly and transparently by means of the given physical configuration of the machine, so that positing the operation of an internal code would be redundant. Pribram suggests that discrete signal elements may also have kinds of computational relevance other than suitability to the assignment of code values; they may for instance be used to linearize a dynamical system. Discrete signal elements, then, are a necessary but not a sufficient condition for code-using systems.

Connection machines may have either continuous or discrete processing nodes, but because the thresholds of two-state nodes have statistical rather than logical effect, connection machines, like analog computers, are usually given non-linguistic descriptions. Since the non-linguistic functionalisms do not mention code at all, we avoid both the generous and the hard construal of linguistic functionalism. But if we do wish to invoke linguistic functions - and we are certainly free to do so, since we can give functionalist components any sort of name we like - we can call connectionist activation patterns 'symbols' and processing regularities 'rules'. But doing so does not license us to assume that these functionalist 'symbols' are processed with the intermediacy of the sort of structure-sensitive 'rules' that in a digital computer are entered as data strings that instruct a central processor. Because there is no central processor in a connectionist system, we can see connectionist 'symbols' as processes but not as processed: they are active patterns with computational significance. Their activity embodies the 'rules', but they are not submitted to rules conceptually separable from their activity. Since this is so, we have no particular reason to want to describe the system as code-using. As is true of analog computers, the computational work of the system is accomplished transparently and directly by means of the organization of the materials of the physical machine.
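The remark about thresholds having statistical rather than logical effect can be given a minimal sketch, using a Boltzmann-machine-style unit as the stock example: the two-state node's net input fixes a probability of firing, not a truth-value.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_unit(net_input, temperature=1.0):
    # The threshold acts statistically: net input sets a firing
    # probability via a smooth sigmoid, rather than a logical cutoff.
    p_fire = 1.0 / (1.0 + np.exp(-net_input / temperature))
    return 1 if rng.random() < p_fire else 0

samples = [stochastic_unit(0.5) for _ in range(10_000)]
print(sum(samples) / len(samples))   # close to sigmoid(0.5) ≈ 0.62
```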

We do not know whether or to what extent brain processing is controlled by something like a central processor or processors, nor do we know very much about how linguistic input can function to organize brain events in a top-down manner. We do know that we can respond to verbal instructions given by other people, and that we sometimes remember verbal instructions ("Now I'm supposed to add two beaten eggs") in the course of performing some task. Like digital computers, we can be programmed by linguistic means. So it does sometimes make sense to say that we are rule-using systems. It can be argued that this sort of rule-following behavior indicates a general capacity for storing rules and for using them to guide centrally-initiated behaviors. But we are also able to 'store' non-linguistic sequences and use them to guide complicated non-linguistic behaviors. A trained dancer, for instance, can take in a long sequence of complex movement demonstrated once, and reproduce it either on the spot or sometime later. This argues for a general capacity not so much to follow rules as to register a perceived circumstance and to re-evoke it at will. Linguistic instructions are, after all, perceived circumstances like any other, and if we can re-evoke the sight of the Downs at sunrise it is not surprising that we can also re-evoke a sentence in a recipe.

A digital computer is described as rule-using because the physical instantiations of strings of code reset switches in the central processor and in memory registers. Because digital machines are general-purpose machines, these programmed switch-settings are essential to the computational viability of the machine. They in effect provide the machine with the physical organization that gives it the causal systematicity necessary to computational relevance. The computational organization of the machine is thus code-mediated in an obvious and thorough-going way. I have argued that the computational states of the digital computer have extrinsic content supplied by our provision of this interface with a code whose interpretation is arbitrary with respect to the machine itself.
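A toy stored-program machine makes the point vivid (an illustrative sketch, not a model of any actual architecture): the program is itself data, and running it just is letting the coded strings reset the machine's 'switches', here a register file and an instruction pointer.

```python
def run(program, registers):
    """Execute a stored program: coded strings configure the machine."""
    ip = 0                             # instruction pointer
    while ip < len(program):
        op, a, b = program[ip]
        if op == "ADD":                # registers[a] += registers[b]
            registers[a] += registers[b]
        elif op == "JNZ" and registers[a] != 0:
            ip = b                     # jump while registers[a] nonzero
            continue
        ip += 1
    return registers

# Multiply 3 by 4 through repeated addition: the machine has no
# multiplier; the program's switch-settings give it that organization.
prog = [("ADD", 0, 1),   # accumulator += 3
        ("ADD", 2, 3),   # counter += -1
        ("JNZ", 2, 0)]   # loop while counter nonzero
print(run(prog, {0: 0, 1: 3, 2: 4, 3: -1}))  # register 0 ends at 12
```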

Brains are devices whose computational organization is created not by externally-supplied strings of instantiated code, but by their intrinsic phylogenetic and ontogenetic calibration to a structured environment. Where the primary computational organization of a device is intrinsic to the device and is not externally supplied, there is no explanatory role for an internal code. Explanations in terms of code would be generally redundant, except perhaps in the specific instances where code is involved as perceptual data.

I will emphasize this point: we require explanations in terms of code when we are dealing with general-purpose digital computers whose physical organization lacks the systematicity required for computational relevance until we input certain program strings of instantiated code. We do not require explanations in terms of code when we are speaking of analog computers or connection machines whose computational organization is either hard-wired or practice-related. We also will not require explanations in terms of code when we are speaking of brain processing, just to the extent that the brain's basic computational organization is either hard-wired or practice-based.

We can of course still posit code if we intend what I have called the generous construal, by which 'symbols' are equated with representationally relevant activation patterns, 'rules' with typical computational sequences, and 'algorithms' or 'programs' with high-level descriptions of computational events. But there are, I think, good reasons to be cautious in our use of even this softer construal. I will list them briefly.

(1) General talk of an inner code tempts us to conflate a description with the dynamical process it describes - a category error leading to puzzlement and forms of mind-body split.

(2) Describing intelligence in terms of a formal language imports a political bias in favor of the sorts of intelligence which are skills with formal language activities. It leads us to leave out of the account large areas of important and interesting cognitive ability.

(3) Describing cognition as rule-governed supports a hierarchical picture of personal and social order, whereas the connectionist paradigm supplies an image of autonomous, cumulative self-organization.

(4) A logic-based description of cognition tends to misdescribe human cognition as organized primarily in relation to the principle of non-contradiction. A dynamical picture of intelligence makes it easier to notice that cognition is variable, context-dependent, multidimensional, multifunctional, possibly contradictory and inherently biological.
