
Reductionism, emergence, and burden of proof (part I)



(Originally posted on Scientia Salon)

14 January 2015

Introduction

Every now and then, the question of reductionism is raised in philosophy of science: whether or not various sciences can be theoretically reduced to lower-level sciences. The answer to this question can have far-reaching consequences for our understanding of science both as a human activity and as our vehicle to gain knowledge about reality. From both ontological and epistemological perspectives, the crucial question is: are all real-world phenomena that we can observe "ultimately explainable" in terms of fundamental physics? What one typically imagines is something like a tower of science, where each high-level discipline can be reduced to a lower-level one: economics to sociology to psychology to neurology to biology to biochemistry to chemistry to molecular physics to fundamental physics. Is such a chain of reductions possible, or desirable, or necessary, or important, or obvious, or tautological, or implicit in our very concept of science?

The opposing ideas of reductionism and emergence lie at the core of these questions. The first thing to do, then, is to clear up what is actually meant by reductionism and emergence in science. Given that fundamental physics is usually located at the bottom of any proposed chain of reduction, it holds a privileged position --- not only for being the "most fundamental", but because its mathematical rigor can be employed to make the meaning of reductionism and emergence clearer. The purpose of this article is to shed some light on this subject matter from the perspective of theoretical physics, hopefully answering some of the above questions. The essay is split into two parts [1]: the first mainly deals with epistemological reductionism, while the second tackles ontological reductionism.

Preliminaries

Let me start with some definitions. One of the crucial concepts for reductionism is that of "theory", as reductionism will be understood here as a particular relation between two theories. For the purpose of this article, I will define the notion of a theory in a rather loose descriptive fashion --- as a set of mathematical equations over certain quantities, whose solutions are in quantitative agreement with the experimentally observable phenomena which the theory aims at describing, to some given degree of precision, in some specified domain of applicability. This is a reasonable, generic description of the kind of theories that we typically deal with in physics.

There are several important points to note about this definition. First, if the solutions of a theory are not in quantitative agreement with experiment, the theory is considered wrong and should be either discarded or modified so that it does fit the experiment. Second, the requirements of mathematically rigorous formulation and quantitative (as opposed to qualitative) agreement with experimental data might appear too restrictive --- indeed, our definition rules out everything but physics and certain parts of chemistry and biology. For example, the theory of evolution is not really a theory according to such a definition (although its population genetics rendition is). Nevertheless, there is a very important reason for both of these requirements, which will be one of the main points of this essay, discussed in the next section. Finally, the phrase "a set of mathematical equations" is a loose description with a number of underlying assumptions. I will mostly appeal to a reader's intuition regarding this, although I will provide a few comments on the axiomatic structure of a theory in part II of the essay.

In order to introduce reductionism, let us consider two scientific theories, and a relation of "being reducible to" between those theories. In order to simplify the terminology, the "high-level" theory will be called the effective theory, while the "low-level" one will be called the structure theory. These names stem from the general notion that every physical system is constructed out of some "pieces" --- so while the effective theory describes the laws governing some system, the structure theory describes the laws governing only one "piece" of the system. Of course, if each such piece can be divided into even smaller pieces, the structure theory can in turn be viewed as effective, and it may have its own corresponding structure theory, thus establishing a chain of theories, based on the size and type of phenomena that they describe. This chain always has a bottom --- a theory which does not have a corresponding structure theory, to the best of our current knowledge. I will call that theory fundamental. Note that this definition of a fundamental theory is, obviously, epistemological [2].

It is important to point out one particular relationship between an effective theory and its corresponding structure theory: given that the physical system (described by the effective theory) consists of pieces (each of which is described by the structure theory), it follows that the domain of applicability of the effective theory is a subset of the domain of the structure theory. That is, wherever the effective theory can be applied to the system, the structure theory can also be applied to the same system --- simply by applying it to every piece in turn (of course, taking into account the interactions among the pieces). Put differently, the domain of applicability of a structure theory is typically a superset of the domain of applicability of the effective theory. Thus, the structure theory is said to be more general than the effective theory.

Finally, we are ready to define the relation of "being reducible to" between the effective and the structure theory. The effective theory is said to be reducible to the structure theory if one can prove that all solutions of the effective theory are also approximate solutions of the structure theory, in a certain consistent sense of the approximation, and given a vocabulary that translates all quantities of the effective theory into quantities of the structure theory.

The procedure for establishing reductionism, then, goes as follows. First, the effective and structure theories are often expressed in terms of conceptually different quantities (i.e., variables). Therefore one needs to establish a consistent vocabulary that translates every variable of the effective theory into some combination of structure theory variables, in order to be able to compare the two theories. As an example, think "temperature" in thermodynamics (effective theory) versus "average kinetic energy" in kinetic theory of gases (structure theory). Second, once the vocabulary has been established, one needs to specify certain parameters in the structure theory so that a particular solution of the latter can be expanded into an asymptotic series over those parameters. If the effective theory is to be reducible to the structure theory, such parameters must exist, and they often do --- they typically are the ratios between the quantities of the large system and the quantities of each of its pieces. Finally, once the asymptotic parameters have been identified and the solution of the structure theory expanded into the corresponding series, the dominant term in this series must coincide with the solution of the effective theory, and so on for all quantities and all possible solutions of the effective theory, always using the same vocabulary and the same set of asymptotic parameters [3].
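To make these two steps a bit more concrete, here is a schematic sketch (my own illustration, using only standard textbook relations, not taken from the original text): the vocabulary entry relating the temperature of a monatomic ideal gas to the average kinetic energy of its molecules, and the generic form of the asymptotic expansion of a structure-theory quantity in a small parameter ε, say the ratio of microscopic to macroscopic scales:

\[
\frac{3}{2}\, k_B T \;=\; \left\langle \frac{m v^2}{2} \right\rangle ,
\qquad\qquad
Q_{\rm structure}(\epsilon) \;=\; Q_0 \;+\; \epsilon\, Q_1 \;+\; \epsilon^2 Q_2 \;+\; \dots , \qquad \epsilon \ll 1 .
\]

Reducibility is then the statement that the dominant term Q_0 coincides with the corresponding quantity of the effective theory, for every quantity and every solution, always using the same vocabulary and the same choice of ε.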

If the above procedure is successful, one says that the effective theory is reducible to the structure theory, that phenomena described by the effective theory are explained (as opposed to being re-described) by the structure theory, and that these phenomena are weakly emergent from the structure theory. Conversely, if the above procedure fails for some subset of solutions, one says that the effective theory is not reducible to the structure theory, that phenomena described by those solutions are not explainable by the structure theory, and that these phenomena are strongly emergent with respect to the structure theory. In the next section I will provide examples of both situations.

Examples

Arguably the most well-known example of reductionism is the reduction of fluid mechanics to Newtonian mechanics [4]. As an effective theory, fluid mechanics is a nonrelativistic field theory, whose basic variables are the mass density and the velocity fields of the fluid, along with the pressure and stress fields that act on the fluid. The equations that define the theory are a set of partial differential equations that involve all those fields. As a structure theory, Newtonian mechanics deals with positions and momenta of a set of particles, along with forces that act on each of them. The reduction of fluid mechanics to Newtonian mechanics then follows the procedure outlined in the previous section. We consider the fluid as a collection of a large number of "pieces" where each piece consists of some number of molecules of the fluid, contained in some "elementary" volume. We establish a vocabulary roughly as follows: the mass density field is the ratio of the mass and the volume of each piece, at the position of that piece, in the limit where the volume of the piece is much smaller than the typical scale of the motion of the fluid. The ratio of the two sizes is a small parameter, convenient for the asymptotic expansion. Also, the velocity field is the velocity of each piece of the fluid, at the position of that piece, in the same limit. Similarly, the pressure and stresses are described in terms of average forces acting on every particular piece of the fluid. Finally, we apply the Newtonian laws of mechanics for each piece, and expand them into an asymptotic series in the small parameter. The dominant terms of this expansion can be cast into the form of partial differential equations for the fluid, which can then be compared to the equations of fluid mechanics. It turns out that the two sets of equations are equivalent, which means that fluid mechanics (effective theory) is reducible to Newtonian mechanics (structure theory).
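A minimal computational sketch of the vocabulary step in this reduction might look as follows (this is my own illustration with made-up particle data and grid sizes, not a derivation of the fluid equations): take the microscopic Newtonian state of a large number of particles in a box and coarse-grain it into the density and velocity fields of the effective theory.

```python
import numpy as np

# Hypothetical microscopic state: N Newtonian particles in a 1D box of length L.
# Positions, velocities and masses are made-up illustration data.
rng = np.random.default_rng(0)
N, L, m = 100_000, 1.0, 1.0e-3
x = rng.uniform(0.0, L, N)          # particle positions
v = rng.normal(0.0, 1.0, N) + 0.5   # particle velocities (bulk drift + thermal spread)

# Vocabulary: coarse-grain into cells much smaller than L but containing many particles.
n_cells = 50
edges = np.linspace(0.0, L, n_cells + 1)
cell_volume = L / n_cells
idx = np.digitize(x, edges) - 1     # cell index of each particle

# Effective-theory fields, defined cell by cell:
#   mass density  rho = (total mass in cell) / (cell volume)
#   velocity field  u = mass-weighted mean velocity of the particles in the cell
mass_per_cell = np.bincount(idx, weights=np.full(N, m), minlength=n_cells)
mom_per_cell = np.bincount(idx, weights=m * v, minlength=n_cells)
rho = mass_per_cell / cell_volume
u = np.divide(mom_per_cell, mass_per_cell, out=np.zeros(n_cells), where=mass_per_cell > 0)

print(rho[:5])  # samples of the density field
print(u[:5])    # samples of the velocity field
```

The ratio of the cell size to the scale of the flow plays the role of the small asymptotic parameter: the fields are only well defined when every cell is much smaller than the scale of the fluid motion yet still contains many particles.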

In this sense, the motion of a fluid is described by fluid mechanics, explained by Newtonian mechanics, and all properties of a fluid are weakly emergent from the laws of Newtonian mechanics. Of course, all this works only for phenomena for which the approximation scheme holds.

There are many other similar examples, such as the reduction of the first law of thermodynamics to statistical mechanics, reduction of Maxwell electrodynamics to the Standard Model of elementary particles, reduction of quantum mechanics to quantum field theory, or reduction of Newton's law of gravity to general relativity. The essence here is that in each case the former can be reconstructed as a specific approximation of the latter.
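As a worked illustration of the last item in this list (a standard textbook sketch, not part of the original essay), Newton's law of gravity is recovered from general relativity by expanding the metric around flat spacetime in the weak-field, slow-motion regime:

\[
g_{\mu\nu} \;=\; \eta_{\mu\nu} + h_{\mu\nu} , \qquad |h_{\mu\nu}| \ll 1 , \qquad g_{00} \;\approx\; -\left( 1 + \frac{2\Phi}{c^2} \right) ,
\]

where the small asymptotic parameter is Φ/c², the ratio of the gravitational potential to the square of the speed of light. At the dominant order in this parameter, Einstein's equations reduce to the Poisson equation of Newtonian gravity,

\[
\nabla^2 \Phi \;=\; 4\pi G \rho ,
\]

with ρ the mass density, which is precisely the "dominant term coincides with the effective theory" step described in the previous section.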

In contrast to the above, the situations in which reductionism fails are much more interesting. In fact, these are nothing less than spectacular, since they often point to new discoveries in science. For the purpose of this article, I will focus on three pedagogical examples, called the dark matter problem, the Solar neutrino problem and the arrow of time problem. Each of these examples illustrates a different way in which reductionism can (and does) fail. Of course, other examples can be found as well, but the analysis of these three should be sufficient for subsequent discussion.

The first example is the failure to reduce the Standard Model of cosmology (SMC), as the effective theory, to the Standard Model of elementary particles (SMEP), as the corresponding structure theory. Aside from the fact that SMEP does not describe any of the gravitational phenomena that SMC contains, SMC describes the presence of so-called dark matter, in addition to the usual matter. The presence of dark matter particles cannot be accounted for by any of the matter particles in SMEP. Therefore SMC cannot be reduced to SMEP even at the qualitative level. In order to make SMC reducible to some structure theory, SMEP needs to be modified (in a non-obvious way) to account for dark matter particles. In other words, the mere presence of dark matter in cosmology requires us to rewrite the fundamental laws of physics. Here SMEP is considered fundamental because we do not yet have any structure theory for SMEP [5].

According to the terminology defined in the previous section, then, in this example the evolution and properties of the Universe (at large scales) are described by SMC, are not explainable by SMEP, and the existence of dark matter is strongly emergent.

The second example of failure of reductionism is even more interesting. The effective theory that describes our Sun, sometimes called the Standard Solar Model (SSM), also fails to reduce to SMEP. As far as we know, the Sun is composed of ordinary particles that SMEP successfully describes. So both SSM and SMEP can be used to describe the Sun, and qualitatively they in fact do agree. Moreover, they also agree quantitatively, except for a simple factor of three in one of the observables: the fusion process in the core of the Sun generates an outgoing flux of neutrinos, some of which reach the Earth and are successfully measured; all else being equal, the measured flux of neutrinos (as described by SSM) is roughly one third of the flux predicted by SMEP. In the beginning, physicists looked for various ways to account for this discrepancy (essentially by checking and re-checking the error bars of everything involved in both SSM and SMEP), but the discrepancy persisted, and became known as the Solar neutrino problem. Over time, it became increasingly obvious that the Solar neutrino problem is nontrivial, and eventually all mathematical possibilities to reduce SSM to SMEP were exhausted. This generated a whole lot more interest, and subsequent experiments finally showed that the neutrino sector of SMEP needs to be modified (again in a non-obvious way) in order to account for that factor of three. So again, the missing factor of three in one of the observables of one effective theory required us to rewrite the fundamental laws of physics.

According to the adopted terminology, in this example the properties of the Sun are described by SSM, are not explainable by SMEP, and the amount of neutrino flux is strongly emergent.

There is a very important difference between the above two examples that needs to be emphasized. While SMC is not reducible to SMEP even at the qualitative level, SSM and SMEP do agree qualitatively, but not quantitatively. There is an important lesson to be learned here: qualitative agreement between the effective and structure theory is not enough for reductionism. The consequences of this are rather grave, and I will discuss them in the next section.

Finally, the third example of failure of reductionism is what is popularly called the arrow of time problem. It is essentially equivalent to the statement that thermodynamics (as the effective theory) cannot be reduced to any time-symmetric structure theory, nor to SMEP. The second law of thermodynamics implies that the entropy of an isolated system cannot decrease in time, which means that thermodynamics has a preferred time direction, and is not time-reversible. Moreover, the amount of this irreversibility is copious: every physical system with a large number of particles displays time-irreversible behavior. This property makes thermodynamics automatically non-reducible to any time-symmetric structure theory, due to something called Loschmidt's paradox. As for SMEP, its equations are not completely time-symmetric --- technically (pardon the jargon), K-mesons violate CP symmetry, which implies that they also violate the T symmetry due to the exactness of the combined CPT symmetry. However, the amount of time-irreversibility in K-meson processes is extremely small, and nowhere near enough to quantitatively account for the irreversibility of thermodynamics. Moreover, the particles that we most often discuss in thermodynamics (protons, neutrons and electrons) are not the ones that violate time-symmetry in SMEP, so the incompatibility is actually qualitative. Finally, in order to be able to reduce thermodynamics to a viable structure theory, we need to rewrite the fundamental laws of physics, again in a completely non-obvious way.
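To see Loschmidt's point in the simplest possible setting, here is a toy simulation (my own illustration, not from the original text) of non-interacting particles bouncing in a box: the coarse-grained Boltzmann entropy grows as an initially concentrated gas spreads out, yet reversing all velocities and running the same time-symmetric dynamics drives the entropy right back down.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, n_bins, dt = 20_000, 1.0, 20, 0.01

# Initial low-entropy state: all particles in the left tenth of a 1D box.
x = rng.uniform(0.0, 0.1 * L, N)
v = rng.normal(0.0, 1.0, N)

def entropy(x):
    """Coarse-grained Boltzmann entropy S = -sum_i p_i ln p_i over spatial bins."""
    counts, _ = np.histogram(x, bins=n_bins, range=(0.0, L))
    p = counts / N
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def step(x, v, dt):
    """Free flight plus specular reflection off the walls (time-reversible dynamics)."""
    x = x + v * dt
    over, under = x > L, x < 0.0
    x[over], v[over] = 2 * L - x[over], -v[over]
    x[under], v[under] = -x[under], -v[under]
    return x, v

print("initial entropy           :", entropy(x))

# Forward evolution: entropy climbs towards its maximum as the gas spreads out.
for t in range(200):
    x, v = step(x, v, dt)
print("entropy after forward run :", entropy(x))

# Loschmidt reversal: flip all velocities and run the same reversible dynamics again.
v = -v
for t in range(200):
    x, v = step(x, v, dt)
print("entropy after reversed run:", entropy(x), "(back near the initial low value)")
```

In this toy model every microscopic trajectory is perfectly time-reversible, yet for every entropy-increasing solution there is a velocity-reversed, entropy-decreasing one; this is exactly the tension that Loschmidt's paradox exposes.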

As in previous examples, we say that the entropy increase law is described by thermodynamics, is not explainable by any time-symmetric structure theory nor SMEP, and that the consequent "arrow of time" is strongly emergent.

The lesson to be learned from this example is that "complexity" can be a source of strongly emergent phenomena. Despite the fact that every particle in a gas can be described by, say, Newtonian mechanics, the gas as a collective displays behavior that is special, and explicitly not a consequence of the laws of Newtonian mechanics. And going further down to SMEP does not help either. The complexity can be manifested via a large number of particles, or because of strong/nonlinear/nonlocal interactions between them. As the level of complexity of a physical system increases, "more" becomes "qualitatively different" and stops being more of the same, so to speak.

Analysis

The examples discussed in the previous section guide us towards a set of criteria one must meet in order to establish reductionism between two theories. In particular, one must have:

- a mathematically well-defined formulation of both the effective theory and the structure theory;
- quantitative (not merely qualitative) agreement of both theories with experiment, in their respective domains of applicability;
- a consistent vocabulary that translates every quantity of the effective theory into quantities of the structure theory;
- a set of asymptotic parameters over which the solutions of the structure theory can be expanded;
- a matching of the dominant terms of those expansions with all solutions of the effective theory.

If any of these desiderata is missing, one cannot sensibly talk of reductionism as defined at the beginning.

Perhaps the main point here is the necessity of quantitative formulation of a theory, and the example of Solar neutrinos is a sharp reminder that qualitative analysis may not be good enough. In order to stress this more vividly, let us consider a highly speculative thought experiment.

Imagine that we have managed to construct some hypothetical fundamental theory of elementary particles (one that is more fundamental and better than SMEP). Moreover, suppose that we have also managed to establish reductionism, in the above rigorous quantitative sense, of all physics, chemistry, etc. to this fundamental elementary particle theory. Reductionism all the way up to neurochemistry. Further, suppose that we have even constructed some effective quantitative theory of consciousness, that describes well all relevant observations. The natural idea is to reduce that effective theory of consciousness to the structure theory of neurochemistry, and consequently to our fundamental theory of elementary particles. Suppose that we attempt to do that, and find that neurochemistry, when applied to a human brain, qualitatively predicts all aspects of our effective theory of consciousness, but that there is a missing factor of two somewhere in the quantitative comparison. For example, suppose that the structure theory predicts a certain minimum total number of synapses in a brain in order for it to manifest consciousness. However, the effective theory tells us that consciousness can appear with half as many synapses. All else being equal (and rigorous), this single observation of the number of synapses in a conscious brain would falsify our theory of elementary particle physics!

As ludicrous as this scenario might seem to be, there are no a priori guarantees that it will not actually happen. In fact, a similar scenario has already happened, in the case of the Solar neutrino problem. It should then be clear that nothing short of rigorous quantitative agreement between two theories could ever be enough to establish reductionism.

There are many places in physics (let alone chemistry and other sciences) where this quantitative agreement has not yet been established, some of those places being pretty fundamental. For example, the mass of the proton has not yet been calculated ab initio from SMEP [6]. Now imagine that someone gets a flash of inspiration and finds a way to calculate it. What will happen if this value turns out to be two times as big as the experimentally measured proton mass? It would mean that the periodic table of elements, the whole of chemistry, and numerous other things are not reducible to SMEP. While nobody in the physics community believes that this is likely to happen, the actual proof is still missing. Thus, strongly emergent phenomena may lurk literally anywhere. Another example of a phenomenon that resists attempts at reductionism is high-temperature superconductivity. The jury is still out, but it might yet turn out to be a strongly emergent phenomenon, due to the complexity of the physical systems under consideration, in analogy to the strong emergence of the arrow of time.

Regarding the analysis of the examples of the previous section, one more point needs to be raised. From the epistemological point of view, whenever we are faced with the failure of reductionism, we can try to modify the structure theory in order to make the effective theory reducible. This approach is reminiscent of the idea of parsimony --- do not assume any additional fundamental laws until you are forced to introduce them. However, the three examples of reductionist failure above are a sharp reminder of the level of rigor necessary to claim that we are not forced to introduce a new fundamental law when faced with a complicated phenomenon. This means that we must be wary of applying parsimony too liberally, and it raises the question of where the burden of proof actually lies: is it on the person claiming that an emergent phenomenon is strongly emergent, or on the person claiming it is merely weakly emergent? All discussion so far points to the latter, in spite of the "parsimonious intuition" common to many scientists and philosophers. To this end, I often like to point to the following illustrative quote [7]:

"When playing with the Occam's razor, one must make sure not to get cut."

My conclusion regarding the burden of proof seems obvious from the examples discussed so far. Nevertheless, it is certainly instructive to discuss it from a more formal point of view, detailing the axiomatic structure of theories and the logic of establishing reductionism. Part II of this essay is devoted to this analysis, as well as to the issue of ontological reductionism.

_____

[1] The article is split in two parts mainly due to size constraints and to facilitate overall readability. However, the two parts should be considered an organic unit, since the arguments given in one are fundamentally intertwined with the arguments given in the other.

[2] The definition of a fundamental theory is epistemological since we may yet discover that the most elementary "pieces" we currently know of can be described in terms of even smaller entities, and thus give rise to another structure theory. From the ontological perspective, the existence of a fundamental theory is dubious, since there is always a logical possibility that the "most elementary" particles do not exist. There are also other issues regarding an ontologically fundamental theory, and I will discuss some of them in part II of the essay.

[3] Any typical effective theory has infinitely many solutions, and we cannot efficiently establish reductionism by comparing solutions one by one. Instead, this is done in practice by comparing the actual defining equations of two theories. Namely, one uses the equations of the structure theory, the vocabulary and the set of asymptotic parameters to "derive" all equations of the effective theory through a single consistent approximation procedure. This ensures that all solutions of the effective equations are simultaneously the approximate solutions of the structure equations, in the given approximation regime.

[4] It is usually the first example of reductionism that an undergraduate student in physics gets to learn about in a typical university course.

[5] There are many speculative proposals for such a structure theory, but so far none of them can be considered experimentally successful.

[6] And before someone starts to complain --- no, really, it has not been calculated, despite what you might read about it in the popular literature. If you dig deep enough into the actual research papers, you will find that the only thing established in the numerical simulations of lattice QCD is the ratio of the proton mass to the masses of other hadrons. These ratios are in good agreement with experimental data, but the masses themselves are determined only up to an overall unknown multiplicative constant, which cancels when one calculates the mass ratios. And this constant has not yet been calculated from the theory. For further information, look up the "mass gap in Yang-Mills theories", one of the Clay Mathematics Institute's Millennium Prize Problems.

[7] I always fail to find an appropriate reference for this statement. If anyone has any, please let me know!

_____

Marko Vojinovic holds a PhD in theoretical physics (with a dissertation on general relativity) from the University of Belgrade, Serbia. He is currently a postdoc with the Group of Mathematical Physics at the University of Lisbon, Portugal, though his home institution is the Institute of Physics at the University of Belgrade, where he is a member of the Group for Gravitation, Particles and Fields.