The heuristic of decomposition and localization
Petri Ylikoski, Philosophy of Science Group, Department of Philosophy, University of Helsinki
Localization has an important, but often misunderstood, role in brain research. Drawing on an account given by William Bechtel and Robert Richardson, this paper shows how localization and decomposition serve as research heuristics in biological research. The aim is to give a mechanistic explanation of the behavior of a biological system in terms of the functions performed by its parts and their interaction. The research can proceed in either a bottom-up or a top-down direction, but in either case it assumes that the system under study is nearly decomposable: the causal interactions within subsystems are more important in determining component properties than the causal interactions between subsystems. Decomposition allows the subdivision of the explanatory task so that it becomes manageable and the system intelligible. Like all heuristic assumptions, this assumption can be wrong. However, it has an integral role in a research strategy that has served biological research well so far. Furthermore, this research strategy involves much more than just making hypotheses about the localization of brain functions. In fact, the often ridiculed direct localizations - à la phrenology - are not representative of this research strategy: from the point of view of mechanistic explanation, a failure of direct localization is often more informative than its success.
Localization has an important, but often misunderstood, role in brain research. The recent surge in brain research employing various imaging techniques (especially PET (Positron Emission Tomography) and fMRI (Functional Magnetic Resonance Imaging)) has also brought about much critical discussion of these studies (Uttal 2001, Henson 2005, Coltheart 2006). Much of this criticism is justified: too many studies draw hasty conclusions from very thin evidence, and some are burdened by methodological problems related to statistical analysis or replicability. Such problems are to be expected when a research field is young and going through a phase of rapid expansion. However, some of the criticism is based on the idea that the whole project of localizing cognitive functions is misconceived. This paper addresses these arguments and aims to show that they rest on a misunderstanding. In the following, I will describe localization as part of a plausible research heuristic for biological research. My account draws heavily on William Bechtel and Robert C. Richardson's book Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research (1993).
Let us start with the notion of research heuristics, which originally comes from Herbert Simon. Scientific research can be regarded as human problem solving. Scientists try to answer questions about the phenomena they study. Some of the questions are descriptive (what, where, when, and how much questions) and others are explanation-seeking (why and how questions). Researchers attempt to find answers to these questions, but they face two fundamental challenges. First, their information-processing capacities are limited. They cannot simply go through all logically possible alternatives and pick the right one: they have to use various shortcuts in their research. Second, especially in explanatory tasks, the search space is not well defined; neither the criteria for a correct solution nor the means for attaining it are clear. Scientists need some guidance to tell them which questions to ask first and what the answers to these questions would look like. This guidance comes in the form of heuristics (Bechtel & Richardson 1993:11-14).
Heuristics are strategies and principles used in research that are characterized by three properties: they are fallible, systematically biased, and efficient. Fallibility means that using the heuristic guarantees neither a result nor that the result one obtains is the correct one. Systematic bias means that the heuristic involves substantial domain-specific assumptions. These assumptions limit the search space radically, but they also place a limit on the applicability of the heuristic: it will produce systematically wrong answers when the underlying assumptions do not hold. Stronger assumptions about the phenomenon make the heuristic faster and more efficient, but they also make its area of application more local (Bechtel & Richardson 1993:14-16).
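These three properties can be made concrete with a deliberately artificial example from outside biology: greedy route-finding. The following Python sketch, with invented coordinates, contrasts a nearest-neighbour heuristic with exhaustive search; it is my own illustration of the general idea, not anything drawn from Bechtel and Richardson.

```python
import itertools
import math

# Toy illustration of a heuristic that is efficient, systematically
# biased, and fallible: greedy nearest-neighbour route-finding versus
# exhaustive search. The coordinates are invented for illustration.
cities = {"A": (0, 0), "B": (1, 0), "C": (-1.5, 0), "D": (2.5, 0)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

def route_length(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

def nearest_neighbour(start="A"):
    # Efficient (quadratic in the number of cities) but biased: it
    # always takes the shortest next hop, which can lock it into a
    # poor overall route.
    route, unvisited = [start], set(cities) - {start}
    while unvisited:
        route.append(min(unvisited, key=lambda c: dist(route[-1], c)))
        unvisited.remove(route[-1])
    return route

def exhaustive(start="A"):
    # Guaranteed optimal, but factorial in the number of cities.
    rest = [c for c in cities if c != start]
    return min(([start] + list(p) for p in itertools.permutations(rest)),
               key=route_length)

greedy, best = nearest_neighbour(), exhaustive()
print(greedy, route_length(greedy))  # ['A', 'B', 'D', 'C'] 6.5
print(best, route_length(best))      # ['A', 'C', 'B', 'D'] 5.5
```

On this instance the greedy route is measurably worse than the optimum: the heuristic trades correctness for speed, and its bias shows exactly when its working assumption (that locally short hops add up to a short route) fails.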
According to Bechtel and Richardson, localization is an important part of a successful research heuristic for biological research. This heuristic has been used successfully in biochemistry, cell biology, genetics, and neuroscience in the search for mechanistic explanations of systemic behavior, and it is now employed in the study of cognition. Earlier success does not guarantee that the heuristic will work in a new area; it is possible that it has reached the limits of its working assumptions. However, the earlier success of a research strategy is better than nothing, especially when a concrete alternative is difficult to conceive. This matters because the essential ingredient of a heuristic is a set of suggestions for doing research: it tells the researcher what questions to ask and how to deal with the answers. (Bechtel & Richardson (1993:147, 172, 195) present this advice in the form of flow charts.) Without such advice, scientists would be at a loss.
The next step in understanding localization is the concept of mechanistic explanation. This notion does not refer to explanations that derive the behavior of an object from the principles of classical mechanics, nor to the idea that all things are essentially mechanical devices. Rather, it is an idea about causal explanation: a mechanistic explanation accounts for the behavior of a system in terms of the functions performed by its parts and the organization of their interaction. According to a recent definition by Bechtel and Abrahamsen (2005:423):
A mechanism is a structure performing a function in virtue of its component parts, component operations and their organization. The orchestrated functioning of the mechanism is responsible for one or more phenomena.
A crucial feature of a mechanistic explanation is that it often involves more than one level of organization. This makes it possible to use two complementary research strategies. In a bottom-up approach, one tries to identify the components of the system, determine how they operate, and build an overall account of the mechanism based on this knowledge. In a top-down approach, one starts with the capacities of the system and tries to envision how the system might be organized so as to carry out a particular task. This approach makes use of functional analysis: it analyzes the tasks performed by the system into subtasks and their organization. The common challenge is to find a way to identify and describe the components and their organization in a manner that allows one to see how the system manages to generate the observed behavior. The more complex the system, the more complex the interactions between its components, and the more difficult it is to find the right way to describe the process (Bechtel & Richardson 1993:17-23).
The crucial assumption in this research strategy is the idea that the system is nearly decomposable. Again, this concept comes from Herbert Simon. According to him, a system is nearly decomposable when the causal interactions within subsystems are more important in determining component properties than the causal interactions between subsystems. Near decomposability is important because it allows the subdivision of the explanatory task so that the task becomes manageable. According to Bechtel and Richardson (1993:27):
Humans cannot use information involving large numbers of components or complex interactions of components, and even when the problem tasks are computationally tractable, human beings do not approach them in this way. Complex systems are computationally as well as psychologically unmanageable for humans.
Near decomposability makes the system understandable and the way it works intelligible. Researchers can concentrate on one subsystem at a time and then figure out how the interaction of the subsystems brings about the behavior to be explained. Like all heuristic assumptions, this assumption can be wrong. If the system fails to be nearly decomposable, it is an open question whether limited beings such as humans can understand how it works. Luckily, most biological systems have turned out to be sufficiently decomposable, and it has been possible to achieve an ever deeper understanding of their workings (Bechtel & Richardson 1993:23-31).
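Simon's notion can be given a minimal quantitative illustration. In the Python sketch below, the interaction-strength matrix and the two-block grouping are invented for the purpose of the example; the point is only what near decomposability looks like in the simplest case: within-subsystem couplings that dominate between-subsystem couplings.

```python
import numpy as np

# Invented interaction strengths for six components grouped into two
# subsystems, {0,1,2} and {3,4,5}: strong couplings inside each block,
# weak couplings between the blocks.
W = np.array([
    [0.0,  0.9,  0.8,  0.05, 0.0,  0.1 ],
    [0.9,  0.0,  0.7,  0.0,  0.1,  0.05],
    [0.8,  0.7,  0.0,  0.1,  0.05, 0.0 ],
    [0.05, 0.0,  0.1,  0.0,  0.9,  0.8 ],
    [0.0,  0.1,  0.05, 0.9,  0.0,  0.7 ],
    [0.1,  0.05, 0.0,  0.8,  0.7,  0.0 ],
])
blocks = [range(0, 3), range(3, 6)]

# Near decomposability in Simon's sense: the mean within-block coupling
# dominates the mean between-block coupling.
within = np.mean([W[i, j] for b in blocks for i in b for j in b if i != j])
between = np.mean([W[i, j] for i in blocks[0] for j in blocks[1]])
print(f"mean within-block coupling:  {within:.2f}")   # 0.80
print(f"mean between-block coupling: {between:.2f}")  # 0.05
```

As long as the second number is small relative to the first, each subsystem can, to a first approximation, be studied in isolation and the weak couplings between subsystems reintroduced later; as the two numbers approach each other, the decomposition loses its point.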
The big question is whether cognition can be understood in this same way. This issue cannot be settled by abstract theoretical arguments. The only way to find the answer is to attempt to give mechanistic explanations of cognitive phenomena. If these attempts fail again and again, we should start to worry about whether the assumption of near decomposability is justified. However, we are not at that point yet. Cognitive neuroscience is such a young research field that frequent failures are to be expected. Near decomposability does not mean that understanding the system is easy; it only means that it is possible.
After these preliminaries, we finally arrive at the idea of localization (Mundale 2002). The bottom-up approach aims to produce a structural decomposition of a system: it analyzes the system into its component parts. The top-down approach aims to give a functional decomposition of the system: it analyzes the task performed by the system into its component operations (subtasks). Localization links a component operation with a component part. It is a hypothetical identification, saying that a certain component part is responsible for carrying out a certain component operation.
The simplest localization hypothesis is a direct localization. It analyzes the system into a set of components, each responsible for a specific capacity. Quite often a direct localization is based on the observed behavior of the system. The infamous localization hypotheses of Franz Joseph Gall and other phrenologists were of this kind, as are theories that posit various 'centers' in the brain. As Bechtel and Richardson point out, direct localization does not provide much explanatory understanding. In essence, it merely locates a subsystem within a complex system. If the localization is successful, it tells us what produces the effect, not how the effect is produced. At the level of the subsystem we face the original explanatory challenge again (Bechtel & Richardson 1993:63-72).
Talk of 'centers' is still common in popularizations of neuroscientific research (e.g. Camerer, Loewenstein & Prelec 2005), but direct localization is not the essence of the heuristic of decomposition and localization. The real localization hypotheses are cases of complex localization. A complex localization decomposes the systemic task into subtasks, the subtasks into component operations, and finally localizes each of these in a distinct component of the system. The idea is that the systemic processes are decomposed until one reaches a level at which the localization hypotheses are finally justified. Until then, a failure of localization tells the researcher either that her decomposition has been done wrong or that she has not decomposed enough. The challenge is to find the right level of functional analysis that can be fitted to an appropriate structural decomposition (Bechtel & Richardson 1993:125-148).
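The structural difference between direct and complex localization can be pictured as a difference in where the hypothesized part-operation pairings sit in a task decomposition. The following Python sketch represents a complex localization hypothesis as a tree; the decomposition of 'reading aloud' and the region labels (X, Y, Z, W) are simplified placeholders of my own, not serious neuroscientific claims.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    """A task in a functional decomposition. A localization hypothesis
    pairs a leaf-level component operation with a component part."""
    name: str
    subtasks: list["Task"] = field(default_factory=list)
    component: Optional[str] = None  # hypothesized part, leaves only

# A complex localization: the systemic task is decomposed into
# subtasks and component operations, and only the latter are paired
# with (placeholder) brain components.
reading_aloud = Task("reading aloud", [
    Task("visual word recognition", [
        Task("letter identification", component="region X"),
        Task("word-form matching", component="region Y"),
    ]),
    Task("phonological encoding", component="region Z"),
    Task("articulation", component="region W"),
])

def show(task: Task, depth: int = 0) -> None:
    # A direct localization would stop at depth 0; a complex
    # localization pushes the hypotheses down to the leaves.
    tag = f" -> {task.component}" if task.component else ""
    print("  " * depth + task.name + tag)
    for sub in task.subtasks:
        show(sub, depth + 1)

show(reading_aloud)
```

On this picture, a failed localization at a single leaf is informative in just the way described below: it points to a specific place where the functional decomposition may be wrong or not yet fine-grained enough.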
It is important to understand that failed localizations are not a reason to abandon the research strategy of localization, as critics might suggest. In fact, they have a crucial role in this research strategy. The idea is to start with simple hypotheses and make them more complex as failures accumulate. When a simple initial hypothesis fails, one can try to learn from the failure. This gives a focus to the research. By asking why the localization fails, the researcher can learn a great deal about the organization of the system. For this reason a failure of localization might be more informative than its success. This is not possible if one starts with a complex hypothesis. There are many ways in which a complex hypothesis can fail, so figuring out how it fails is much more difficult (Bechtel 2002:235).
Neuroscientific research is often portrayed as having reductionist ambitions. This is another misunderstanding. The project of understanding cognition can only succeed if different disciplines work together, and this interdisciplinary mix of research is the complete opposite of the usual picture of reductionism. The heuristic of decomposition and localization does not attempt to reduce everything to its elementary parts and to eliminate disciplines working at higher levels of organization (e.g. psychology or cognitive science). On the contrary, the idea is to bring disciplines working at different levels of organization together. The explanatory tasks of cognitive neuroscience are genuinely multi-level. The researchers must simultaneously employ evidence from different levels of organization.
In order to have a clear picture of the capacities to be explained, one needs experimental and observational psychology. In order to decompose these capacities into their component operations, one needs psychological and computational theories. Without them, neuroscientific research that attempts to do structural decomposition would be completely blind: it would not know what to localize. The neuroscientist faces the task of finding the right decomposition of the system, but this cannot be done without a clear account of the explanatory task. Without it, she would be at a loss with respect to the relevance of various brain parts and activities. In cognitive neuroscience, a detailed psychological theory is essential.
On the other hand, psychological theories cannot do the job by themselves. Contrary to the opinions of some cognitive scientists, the details of implementation do matter. In principle there are scores of different ways to accomplish the same (computational) task. Behavioral studies can rule out many of them, but many would still remain. The bottom-up constraints provided by the neurosciences are crucial for sorting out the rest. The psychological disciplines doing functional analysis face, in essence, a problem of underdetermination: many different mechanisms can, in principle, produce the phenomenon to be explained. Here the neurosciences can help: neuroscientific evidence can constrain the search space by ruling out some alternative functional decompositions. It can also inspire new hypotheses about the functional organization. Developmental and evolutionary considerations play a similar role: they can both rule out some hypotheses and suggest alternative ways to conceive of the functional organization. Only by employing evidence from all possible sources can the project of cognitive neuroscience succeed. A successful complex localization is a multi-level, multidisciplinary affair.
It is useful to look at what complex localization requires and what it does not. First, it does not require a single continuous, anatomically defined brain region, as direct localization does. In fact, a dispersed localization might make research easier by allowing the localization of subtasks. Second, it does not require that an anatomically defined brain region be exclusively dedicated to one function. On the contrary, various different processes can utilize the same subtask realized by the region. Finally, it does not require that processing is linear. Linear processing is just easier to analyze: parallel processing and top-down processes increase the complexity of the interactions. However, localization does presuppose that the system has a modular organization in which the components can be subjected to separate study. This means that the intrinsic functions of the components should be intelligible in isolation. Of course, as the complexity of the system increases and its parts become more integrated, the role of the organization of tasks and components becomes more important. This means that the system becomes less and less decomposable.
The continual failure of localization can also lead to a rethinking of the phenomenon itself (Bechtel & Richardson 1993:173-194). The system to be analyzed is identified on the basis of common sense and earlier theories. There is nothing sacred about these earlier accounts; they might need reconsideration. If the boundaries or the activities of the system are misidentified, the failure of localization is understandable, and the situation can be corrected by reconceiving the system. A new identification of the relevant system and its activities can revitalize the search for the right kind of complex localization. Cummins, Poirier, and Roth (2004:319) provide a clear example of this kind of reconsideration:
'The visual system' may not be a system at all, but a motley crew of various autonomous systems (modules), the vast majority of which are not involved in the construction of a conscious 3-D representation but in quick-and-dirty perception-action loops, or 'intentional arcs' [...] these visual modules are not components of a visual system since they all work more or less independently, making the whole an aggregate more than a system as such.
A more radical response to a continual failure of localization is to give up the whole idea of modular organization. Clearly the evidence points toward the idea that the spatial, temporal, and functional organization of brain processes is important in cognition. However, this is not enough to justify the idea that the brain is a holistic, equipotential system that cannot be decomposed. The first problem with this idea is that we have ample evidence that its extreme form is wrong. The second problem is that we really have no idea how to study such systems. We can describe how they behave, but we would have great difficulty understanding how and why they behave as they do. Our best hope is still to stick with the heuristic of decomposition and localization and see how far it will take us. As documented by Bechtel and Richardson, it has proved to be a flexible research strategy in biological research, one that has accommodated many of the challenges posed by increasing complexity. We learn more about complexity from the failures of our relatively simple hypotheses than by worshipping the mysteries of complexity. [1]
[1] I am grateful to Jaakko Kuorikoski and Anna-Mari Rusanen for their comments.
Bechtel, William. 2002. "Decomposing the mind-brain: A long-term pursuit". Brain and Mind 3: 229-242. doi:10.1023/A:1019980423053
Bechtel, William & Adele Abrahamsen. 2005. "Explanation: A mechanist alternative". Studies in History and Philosophy of Biological and Biomedical Sciences 36: 421-441. doi:10.1016/j.shpsc.2005.03.010
Bechtel, William & Robert C. Richardson. 1993. Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. Princeton: Princeton University Press. http://mechanism.ucsd.edu/~bill/discoveringcomplexity.html
Camerer, Colin, George Loewenstein & Drazen Prelec. 2005. "Neuroeconomics: How neuroscience can inform economics". Journal of Economic Literature XLIII (March 2005): 9-64. doi:10.1257/0022051053737843
Coltheart, Max. 2006. "What has functional neuroimaging told us about the mind (so far)?". Cortex 42: 323-331. doi:10.1016/S0010-9452(08)70358-7
Cummins, Robert, Pierre Poirier & Martin Roth. 2004. "Epistemological strata and the rules of right reason". Synthese 141: 287-331. doi:10.1023/B:SYNT.0000044992.91717.aa
Henson, Richard. 2005. "What can functional neuroimaging tell the experimental psychologist?". The Quarterly Journal of Experimental Psychology 58A(2): 193-233. doi:10.1080/02724980443000502
Mundale, Jennifer. 2002. "Concepts of localization: Balkanization in the brain". Brain and Mind 3: 313-330. doi:10.1023/A:1022912227833
Uttal, William R. 2001. The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain. Cambridge, Mass.: The MIT Press.