Sunday, January 25, 2015

Where does the mind begin and end?

The following is a (very slightly) edited version of an essay I wrote 6 or 7 years ago. If you find the ecological mind of interest (it goes quite a bit beyond social questions, which I address here), the work of Andy Clark is well worth checking out.



"Where does the mind begin and end?"

Daughter:     Daddy, why do things have outlines?
Father:         Do they? I don’t know. What sort of things do you mean?
D:                 I mean when I draw things, why do they have outlines?
F:                 Well, what about other sorts of things-a flock of sheep? or a conversation? Do they have outlines?
D:                 Don’t be silly. I can’t draw a conversation. I mean things.

(Bateson, 1972, p. 27)

Introduction
By taking a functionalist view of the mind, this essay aims to challenge the restriction of mind to individual humans. The phrase “boundaries of the mind” could be read as asking whether we should draw the line at individual people, or perhaps at families or groups. Psychology has certainly been accused of individualism (e.g. Tuffin, 2005). However, framing the debate in this way still assumes that the study of mind should be restricted to humans, when mental phenomena can in fact be studied in other species and even in non-organisms.
Throughout the history of psychology, thinkers and trends (within the discipline, in other fields and in society at large) have at different times narrowed or broadened the study of the mind.

The Mind
Pinker defined the mind simply, calling it “a system of organs of computation” (1997, p. xx). Defining the mind is difficult, however, and various concepts have been conflated with it over time: Reber and Reber (2001) pointed out that treating “mind” as interchangeable with concepts such as the soul, the self or identity only compounds the confusion. The mind should be viewed as a set of processes, not as some further process that stands apart from its various constituent processes (Ryle, 1949).
Wilson (2004) suggested that for something to have a mind, or at least a minimal mind, it must possess at least one psychological property, and there must be some physical structure that realises this property.

Functionalism
William James, in his Principles of Psychology, espoused a functionalist approach to the mind:

The pursuance of future ends and the choice of means for their attainment are thus the mark and criterion of the presence of mentality in a phenomenon (1950, p.8).

James was influenced by evolutionary theory, and functionalism is concerned with what’s adaptive about the mind, i.e. how the mind helps thinkers adapt to and exploit their environment (Gazzaniga & Heatherton, 2003). Functionalism can be contrasted with structuralism, which is concerned with identifying the constituent elements of experience in the mind. While structuralism relied heavily on introspection and focused on the study of “normal” humans, functionalism used a wider range of methodologies and studied the behaviour of animals and the mentally ill (Hergenhahn, 1997).
Searle (1992) challenged functionalism by arguing that the study of the mind is always and everywhere concerned with consciousness (even if few studies refer to it explicitly), and that consciousness need not play any functional role. While the study of consciousness is a worthwhile endeavour, Searle’s focus on the non-functional aspects of consciousness (which may exist nowhere but within our personal “selves”; note the risk of solipsism here) positions the individual human as the sole owner of a mind.

The Individual
The roots of Western individuality, that is, the focus on the individual, can be traced back as far as the Renaissance, the introduction of private property in the 13th century, or even Ancient Greek philosophy. However, the emergence of psychology as an autonomous discipline coincided with a great increase in Western society’s emphasis on the individual (Jansz, 2004). The belief that the “inner self” was truer than the public self became popular between 1500 and 1800, as increasing mobility made it harder to ascertain a person’s identity just by looking at them (Tice & Baumeister, 2001).
Wundt is often portrayed as a founding father of psychology, given his establishment of the laboratory at Leipzig (e.g. Gazzaniga & Heatherton, 2003). For a long time Wundt’s “Physiological Psychology” was emphasised to the point of ignoring his “Psychology of the People”.  The latter work discussed the importance of social and cultural phenomena that were not suitable for laboratory study, and thus did nothing to further psychology’s status as a natural science (Jones & Elcock, 2001). The lack of attention paid to “Psychology of the People” helped to position the individual as the central subject of psychology.
As a discipline, psychology’s treatment of the research participant as an isolated individual (as opposed to an individual in a particular situation) has been a methodological symptom of an individualised society, but has also in turn lent empirical credibility to this individualism (Danziger, 1990).

Mind, Body and Soul, Man and Beast
For the ancient Hebrews, the soul was whatever it was that made the body alive. The Pythagoreans argued that the soul could move from its original body to other bodies following death. Plato granted the soul further autonomy by suggesting it could exist in a disembodied state (cf. Shaffer, 1968).
Robert Whytt, with his discovery of spinal reflexes in the middle of the eighteenth century, suggested that the soul or mind was located in the spine as well as the brain. For many philosophers this implied that soul and body were closely enmeshed, and that simpler animals could also possess minds or souls (Reed, 1997). William James took this idea further with his proposal that the psychological realm evolves as part of the natural world (Reed, 1997).
Descartes, an early proponent of dualism, is sometimes portrayed as viewing animals as lifeless machines. The Cartesian evidence for having a mind is not communication as such (which many species are capable of), but communication based on “pure” thought, i.e. thought grounded in abstract symbols (Alanen, 2003). However, while Descartes argued that only humans were capable of pure thought, he was willing to admit that clocks keep time better than humans do, and that other species may outwit us on occasion (Clarke, 2003). This suggests that Descartes denied minds to non-humans only inasmuch as he took a relatively structural approach to the mind.
Nonetheless, animals and even early humans have been described quite recently as zombies, as they apparently lack(ed) language and culture (Malik, 2000). Animals may not have cultures, but animal behaviour can be predicted by noticing what an animal notices and finding out what the animal wants (Dennett, 1998).
Indeed, as the cognitive revolution in human psychology took place, there was also a renewal of interest in the study of cognitive phenomena in other species (although a strong emphasis on behaviour understandably remained) (Roberts, 1998). Although there is a lack of evidence that any other species can use language in the recursive way that humans can, the cognitive abilities of chimpanzees and young children are not very different (Harley, 2001). Where there are differences, it is worth considering the observation of Reed (1996), who argued that philosophers frequently compare humans to other animals in terms of internal properties, when they should instead be comparing how our environments differ.

The Ecological and Social Mind
In defining the boundaries of something, it seems like common sense to ask what is inside and outside the object of study. What is inside or outside the mind? If we take away what is outside the mind, then it has nothing to act on and no subject matter for its thoughts (Gergen, 1999). From a functionalist perspective, this would negate the mind’s very existence.
Gergen (1999) argues that the mind is inseparable from relationships if we firstly treat psychological discourse as performative and secondly embed performance in relationships. From this point of view thinking is just a public act carried out in private. Like Malik (2000), Gergen implies that thought is a linguistic act, and that mental activity is driven by interpersonal relationships.
Many critical social psychologists argue that identity is context-dependent. They reject the essentialist view and advance the idea that identity is not possessed but rather negotiated, and should be studied by investigating discourse (Tuffin, 2005). What sets the (relatively new) practice of discursive psychology apart from psychology as natural science is that it does not treat the individual “subject” as being representative of universal human truths (Tuffin, 2005). Psychology as natural science implicitly assumes that the mind is located within the individual, and that findings on the mind of an individual can be generalised to the minds of other individuals.
Discursive psychology focuses on how language and speech acts behave, and does not offer a window onto the workings of the mind (Parker, 2002). A comparison could be drawn with Behaviourism, in that both practices avoid the use of cognitive concepts in describing human interaction. Thus, an ecology of identity does not necessarily support an ecology of mind, but rather can undermine examination of the mind altogether.
Mind may extend to our ecological environments, but what about our social groups? Emphasising the needs or importance of groups is not the same thing as saying that they have minds.
Wilson (2004) suggests that the group mind hypothesis, as he calls it, can either be taken as the literal proposition that groups can have minds or as a cognitive metaphor. The cognitive metaphor stance suggests that although groups don’t actually have minds, they act sufficiently like individuals to be treated as though they did. (Note that this framework still treats individuals as the paradigmatic boundary of the mind).
Communities of insects are considered to be “super-organisms”. The individual organisms do not understand the aims of the group. The term “super-organism” cannot easily be applied to human groups, however: the individuals within the group are typically aware of the aims of the group, even though they may only play a minute role in achieving such aims (Wilson, 2004).
Other disciplines show a similar concern with whether the decisions of a group can validly be reduced to the aggregate of the decisions of the individuals who make it up. In macroeconomics, for example, the Keynesian school argues that investors are prone to “herd behaviour”, with the herd acting as a metaphorical group mind. In contrast, the New Classical school proposes that people react to prevailing economic conditions as autonomous individuals, so that the unemployment rate, GDP and so on can be explained solely in terms of the thoughts and actions of individuals (Snowdon & Vane, 2005).

The Artificial Mind
The “mind as computer” metaphor has pervaded psychology for at least four centuries (Reed, 1996). Conversely, do computers have minds? Functionalism holds that functionally identical acts can be carried out by different physical entities (Bem & de Jong, 1997), so in principle there is no reason why silicon cannot do the work of flesh and blood.
Strong artificial intelligence is the hypothesis that a computer which carries out a cognitive task is, in fact, thinking (Kukla & Walmsley, 2006). Structurally, the brain is in many ways different from an artificial computer (cf. Jaki, 1969). However, the Turing machine demonstrates that thought can be carried out computationally (Pinker, 1997).
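
As a rough illustration of what a Turing machine is, here is a short Python sketch; the particular machine and its rule table are invented for this essay and come from none of the sources cited. It simulates a one-tape machine whose only job is to invert a string of binary digits, and the point is simply that a device defined by nothing more than a tape, a read/write head and a finite table of rules can carry out a computation.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table for a trivial machine that inverts a binary string:
# (current state, symbol read) -> (symbol to write, head move, next state)
invert_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", invert_rules))  # prints "01001_"

Anything that is computable at all can, in principle, be computed by some machine of this kind, however different its physical make-up is from that of a brain.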
Kukla and Walmsley (2006) argue that, from a functionalist perspective, strong artificial intelligence is supported if a computer can perfectly simulate human cognition. This argument is quite anthropocentric, in that it assumes that carrying out a cognitive task is an insufficient demonstration of mind unless it is done in exactly the way a human would do it. Surely a truly functionalist approach should support the existence of some kind of mind in anything that can carry out a given cognitive task, regardless of how it goes about it? Turing machines do not have the same structure as the brain (Dawson, 1998), and Turing’s point was not to demonstrate that a machine can think in exactly the same way humans do. Rather, it was that if a machine can fool a human into thinking it is a human mind in a blind test, then it has a mind (Dennett, 1998).
Indeed, if we are concerned with what is adaptive about mind, we should focus on what constitutes adaptiveness for beings with artificial intelligence. Meyer (1996) argues that a behaviour is adaptive for a robot if it allows the robot to continue performing the function for which it was designed. The workings of swarms of social insects have influenced robot designers to produce systems of robots that work together without centralised control or explicit communication (Kube & Zhang, 1993).
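
A toy simulation may make the idea of decentralised control more concrete. The Python sketch below is an invented example under my own simplifying assumptions, not the architecture described by Kube and Zhang: eight “robots” are scattered along a line, each repeatedly steps toward the midpoint of its two immediate neighbours using only local information, and the group ends up clustered together without any central controller or explicit messaging.

import random

def step(positions):
    """Each agent moves halfway toward the midpoint of its two neighbours."""
    new_positions = []
    for i, x in enumerate(positions):
        # At the ends of the line, the missing neighbour is the agent itself.
        left = positions[i - 1] if i > 0 else x
        right = positions[i + 1] if i < len(positions) - 1 else x
        target = (left + right) / 2          # a purely local computation
        new_positions.append(x + 0.5 * (target - x))
    return new_positions

random.seed(0)
swarm = [random.uniform(0, 100) for _ in range(8)]   # scattered starting positions
for _ in range(200):
    swarm = step(swarm)

# After enough purely local steps the agents sit almost on top of one another,
# even though no agent ever saw the whole group or received a global instruction.
print([round(x, 1) for x in swarm])

Nothing in the program represents the group’s goal; the gathering behaviour is a property of the collection of agents rather than of any one of them.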
A classic argument against strong artificial intelligence is attributed to Lady Lovelace, who observed that Charles Babbage’s “analytical engine” could only generate ideas that it was programmed to generate. Turing (1950) countered this view with two points. Firstly, humans require teaching in order to produce “novel” ideas, so it is inconsistent to set a higher criterion for mind in computers, given that humans are assumed to have minds. Secondly, according to Turing, producing novel ideas is simply a matter of engineering a computer to produce more than one idea for a given input. This may be setting the bar somewhat low, as the programmer may be able to predict the two or more outputs perfectly, but it does not require a great stretch of imagination to see how a computer could generate random outputs and be programmed with somewhat “woolly” criteria for selecting whichever of these outputs is most aesthetically appealing.
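
To make that second point concrete, here is a toy Python sketch, again a purely invented example rather than anything Turing himself proposed. It generates a handful of random candidate “words” and then applies a loose, hand-tuned “aesthetic” criterion to pick one, so the programmer who wrote the scoring rule still cannot say in advance which word any given run will produce.

import random

def aesthetic_score(word):
    """A deliberately woolly criterion: prefer words whose letters alternate
    between vowels and consonants as much as possible."""
    vowels = set("aeiou")
    changes = sum((a in vowels) != (b in vowels) for a, b in zip(word, word[1:]))
    return changes / max(len(word) - 1, 1)

def invent_word(length=6, candidates=20):
    """Generate random candidate 'words' and keep the most appealing one."""
    pool = ["".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(length))
            for _ in range(candidates)]
    return max(pool, key=aesthetic_score)

print(invent_word())  # the output varies from run to run

Whether this counts as the machine “originating” anything is, of course, exactly what the Lovelace objection disputes.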

The Man with Two Brains
Perhaps the human brain contains more than one mind. While one feels a cohesive state of consciousness at any given time, there is no single brain structure that is responsible for consciousness (Dennett, 1991; Greenfield, 1999). It is possible that there is no central organiser within the brain that creates a “unified” mind, but rather individual subsystems acting together as a sort of superorganism. As the environment impinges on the person, a “majority” view is reached among the subsystems, and this view is expressed in overt behaviour (Dennett & Humphrey, 1998). This is perhaps a more structuralist argument against the idea of one person, one mind.
Following callosotomy (severing of the corpus callosum), split-brain patients can only name pictures presented to the right visual field, as the left hemisphere controls speech (Gazzaniga & Heatherton, 2003). It would appear that the two halves of the brain have thoughts of their own. For the split-brain patient, the brain does not produce a mind, but two minds!
Of course, one should try to avoid excessive reductionism. An individual neuron either fires or doesn’t, and so lacks psychological properties in the sense of Wilson’s (2004) “minimal mindedness” criterion. Nonetheless, it has been argued that the neuron should be considered a computer, given the variety of inputs it may have to process (Jaki, 1969).

Discussion
The discipline of psychology has emerged and developed alongside other emerging and established disciplines such as computing, theology, philosophy, economics and ethology, and these disciplines have influenced one another. In the Western world, individualism in society, as well as in its sciences and philosophies, has risked restricting where we are willing to locate mind.
Although thinkers such as Block (1978) have suggested that functionalism may lead us to too liberal a view of mind, I believe that most people’s definition of mind is excessively illiberal. Functionalism cannot give a complete account of the human mind, or one that will satisfy our desire for an account of consciousness. However, if one accepts that pursuing some aim is a sign of mind, then it becomes clear that restricting the concept of mind to individuals, humans or even organisms is misguided.

References
1.      Alanen, L. (2003). Descartes’s Concept of Mind, Cambridge: Harvard University Press.
2.      Bateson, G. (1972).  Steps to an Ecology of Mind, Aylesbury: International Textbook Company Limited.
3.      Bem, S. & de Jong, H.L. (1997) Theoretical Issues in Psychology, London: Sage.
4.      Block, N. (1978). Troubles with functionalism. In Savage, C.W. (Ed.) Minnesota Studies in the Philosophy of Science. Minneapolis: University of Minnesota Press.
5.      Burge, T. (1986). Individualism and psychology. Philosophical Review, 95 (1), 3-45.
6.      Clarke, D.M. (2003). Descartes’s Theory of Mind, Oxford: Oxford University Press.
7.      Danziger, K. (1990). Constructing the Subject, Cambridge: Cambridge University Press.
8.      Dawson, M.R.W. (1998). Understanding Cognitive Science, Malden: Blackwell.
9.      Dennett, D.C. (1991). Consciousness Explained, Boston: Little, Brown.
10.  Dennett, D.C. (1998). Brainchildren, London: Penguin.
11.  Dennett, D.C. & Humphrey, N. (1998). Speaking for our selves. In Dennett, D.C. (Ed.) Brainchildren, London: Penguin.
12.  Gazzaniga, M.S. & Heatherton, T.F. (2003). Psychological Science: Mind, Brain and Behaviour. New York: W.W. Norton.
13.  Gergen, K.J. (1999). An Invitation to Social Construction, London: Sage.
14.  Greenfield, S. (1999). Soul, brain and mind. In M.J.C. Crabbe (Ed.) From Soul to Self (pp.108-125). London: Routledge.
15.  Harley, T. (2001). The Psychology of Language, Sussex: Psychology Press.
16.  Hergenhahn, B.R. (1997). An Introduction to the History of Psychology (3rd Edition), California: Brooks/Cole.
17.  Jaki, S.L. (1969). Brain, Mind and Computers, Indiana: Gateway Editions.
18.  James, W. (1950). Principles of Psychology, New York: Dover Publications Inc.
19.  Jansz, J. (2004) Psychology and society: an overview. In J. Jansz & P. Van Drunen (Eds.) A Social History of Psychology (pp. 12-44). Malden: Blackwell.
20.  Jones, D. & Elcock, J. (2001). History and Theories of Psychology: A Critical Perspective, London: Arnold.
21.  Kube, C.R. & Zhang, H. (1993). Collective robotics: from social insects to robots. Adaptive Behaviour, 2 (2) 189-218.
22.  Kukla, A. & Walmsley, J. (2006). Mind: A Historical and Philosophical Introduction to the Major Theories, Cambridge: Hackett.
23.  Malik, K. (2000). Man, Beast and Zombie, London: Weidenfeld & Nicolson.
24.  Meyer, J. (1996). Artificial life and the animat approach to artificial intelligence. In M.A. Boden (Ed.) Artificial Intelligence (pp. 325-354). San Diego: Academic Press.
25.  Parker, I. (2002). Critical Discursive Psychology, Hampshire: Palgrave Macmillan.
26.  Pinker, S. (1997). How the Mind Works, London: Allen Lane The Penguin Press.
27.  Reber, A.S. & Reber, E.S. (2001). The Penguin Dictionary of Psychology, London: Penguin.
28.  Reed, E.S. (1996). Encountering the World, Oxford: Oxford University Press.
29.  Reed, E.S. (1997). From Soul to Mind, New Haven: Yale University Press.
30.  Roberts, W.A. (1998). Principles of Animal Cognition, Boston: McGraw-Hill.
31.  Ryle, G. (1949). The Concept of Mind, London: Hutchinson.
32.  Searle, J.R. (1992). The Rediscovery of Mind, Cambridge: MIT Press.
33.  Shaffer, J.A. (1968). Philosophy of Mind, New Jersey: Prentice-Hall.
34.  Snowdon, B. & Vane, H.R. (2005). Modern Macroeconomics: Its Origins, Development and Current State. Cheltenham: Edward Elgar.
35.  Tice, D.M. & Baumeister, R.F. (2001). The primacy of the interpersonal self. In C. Sedikides & M.B. Brewer (Eds.) Individual Self, Relational Self, Collective Self (pp.71-88). Philadelphia: Psychology Press.
36.  Tuffin, K. (2005). Understanding Critical Social Psychology. London: Sage.
37.  Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59 (236), 433-460.
38.  Wilson, R.A. (2004). Boundaries of the Mind: the Individual in the Fragile Sciences: Cognition. Cambridge: Cambridge University Press.  


