Science without Validation in a World without Meaning

Physicist Richard Feynman had the following advice for those interested in science: “So I hope you can accept Nature as She is—absurd.”1 Here Feynman captures in stark terms the most basic insight of modern science: nature is not understandable in terms of ordinary physical concepts and is, therefore, absurd.

The unintelligibility of nature has huge consequences when it comes to determining the validity of a scientific theory. On this question, Feynman also had a concise answer: “It is whether or not the theory gives predictions that agree with experiment. It is not a question of whether a theory is philosophically delightful, or easy to understand, or perfectly reasonable from the point of view of common sense.”2 So put reasonableness and common sense aside when judging a scientific theory. Put your conceptual models and visualizations away. They might help you formulate a theory, or they might not. They might help to explain a theory, or they might obfuscate it. But they cannot validate it, nor can they give it meaning.

Erwin Schrödinger made a similar critique of the simplified models widely used to explain scientific concepts in terms of everyday experience, such as those used to illustrate atomic theory:

A completely satisfactory model of this type is not only practically inaccessible, but not even thinkable. Or, to be more precise, we can, of course, think it, but however we think it, it is wrong; not perhaps quite as meaningless as a “triangular circle,” but much more so than a “winged lion.”3

“Do the electrons really exist on these orbits within the atom?” Schrödinger asks rhetorically. His answer: “A decisive No, unless we prefer to say that the putting of the question itself has absolutely no meaning.”4

Feynman and Schrödinger were concerned about the extremely small scale, but what about the extremely large scale? A single human cell has more than twenty thousand genes. Therefore, assuming one protein per gene, the number of different non-modified proteins exceeds twenty thousand. Add to that the many more different proteins resulting from alternative splicing, single nucleotide polymorphisms, and posttranslational modification. No conceptual model is conceivable for the interactions among all of these genes and proteins, or for even a tiny portion of them, when one considers the complex biochemistry involved in regulation. What is the meaning of the intricate and massive pathway models generated by computer algorithms? Is this even a meaningful question to ask? And the human body contains on average an estimated thirty-seven trillion cells!

Yet science has had great success dealing with the unthinkable and inconceivable. Hannah Arendt puts the matter succinctly: “Man can do, and successfully do, what he cannot comprehend and cannot express in everyday human language.”5 We have mathematically sophisticated scientific theories and daily operate with advanced engineering systems that are physically incomprehensible and whose principles cannot be communicated in everyday language. In Kantian terms, we are not limited by human categories of understanding.

This radical disconnect between scientific theory and everyday human understanding became impossible to ignore in the twentieth century, when the struggle with internal model randomness, as exemplified by quantum theory in physics, brought the problem to the fore.

Today, scientists are grappling with the problem of model uncertainty, as seen in areas like climate and medicine. These questions are increasingly challenging the basis of modern scientific knowledge itself, which is defined by a combination of mathematics and observation. Modern scientific knowledge, while rejecting commonsense conceptual models, has always depended upon mathematically expressed theories that could be validated by prediction and observation. But this approach is now under pressure from multiple sides, suggesting a deep crisis of scientific epistemology that has not been fully confronted. At the same time, political leaders find themselves increasingly impotent when faced with scientific issues. As we move further into the twenty-first century, humankind is presented with an existential paradox: man’s destiny is irrevocably tied to science, and yet knowledge of nature increasingly lies not only outside ordinary language but also outside the foundational epistemology of science itself.

Scientific Knowledge: Mind and Phenomena

Scientific knowledge can be defined in terms of a duality between mathematics (mind) and observation (phenomena). More precisely, it both requires and provides a specifically defined link between mind and phenomena.

Four conditions must be satisfied to have a valid scientific theory: (1) There is a mathematical model expressing the theory. (2) Precise relationships, known as “operational definitions,” are specified between terms in the theory and measurements of corresponding physical events. (3) There are validating data: there is a set of future quantitative predictions derived from the theory and measurements of corresponding physical events. (4) There is a statistical analysis that supports acceptance of the theory, that is, supports the concordance of the predictions with the physical measurements—including the mathematical theory justifying the application of the statistical methods.

The theory must be expressed in mathematics because science involves relations between measurable quantities and mathematics concerns such relations. There must also be precise relationships specified between a theory and corresponding observations; otherwise, the theory would not be rigorously connected to physical phenomena. Third, observations must confirm predictions made from the theory. Lastly, owing to randomness, concordance of theory and observation must be characterized statistically.

The meaning of a scientific theory lies in the connection between the mathematics and experience, and that connection occurs via the process of validation. The knowledge is functional and its meaning lies in its predictive capacity. Mathematics divorced from experience is simply a mental construct. It exists independently of any physical context. On the other hand, past experience, in and of itself, does not provide knowledge projecting into the future. Hans Reichenbach puts the matter as follows:

If the abstract relations are general truths, they hold not only for the observations made, but also for the observations not yet made; they include not only an account of past experiences, but also predictions of future experiences. That is the addition which reason makes to knowledge. Observation informs us about the past and the present, reason foretells the future.6

Nature is unintelligible; nevertheless, science requires that we be able to make accurate predictions of future events based on mathematical models in the mind. The crux of the problem is to connect the intelligibility of mathematics with the unintelligibility of nature. It would be naïve to believe that this problem has a simple solution or even one that is completely satisfactory for all human endeavors.

The epistemology consisting of a mathematical-observational duality was born in the seventeenth century. According to historian Morris Kline,

What science has done, then, is to sacrifice physical intelligibility for the sake of mathematical description and mathematical prediction. . . . The insurgent seventeenth century . . . bequeathed a mathematical, quantitative world that subsumed under its mathematical laws the concreteness of the physical world. . . . Our mental constructions have outrun our intuitive and sense perceptions.7

Kline, writing in the twentieth century, was looking back from a post–quantum theory world and saw the full import of Newton’s basic assumption: “For I here design only to give a mathematical notion of these forces, without considering their physical causes and seats.” As of 1687, scientific knowledge was constituted in mathematics; “hypotheses” such as causality were no longer part of science.

The full import was not seen at the time because, while the equations of classical physics did not explicitly require that their mathematical expressions directly correspond to physical terms in the human understanding (terms such as “particle” and “wave”), in fact they tended to have such correspondence. This all changed in the first half of the twentieth century when it was realized that so-called particles had wave properties and so-called waves had particle properties.

That a theory makes no sense to our ordinary understanding and cannot be expressed in ordinary language is irrelevant. In The Mysterious Universe, published in 1930, James Jeans wrote, “We need no longer discuss whether light consists of particles or waves; we know all there is to be known about it if we have found a mathematical formula which accurately describes its behavior, and we can think of it as either particles or waves according to our mood and the convenience of the moment.”8

The issue before us is not some rarified abstraction. Modern engineering applies mathematical analysis to scientific models in order to derive optimal ways in which to transform the physical systems represented by the models, such as the movement of traffic, the transmission of power, the search for an alloy with better stress tolerance, or the filtering of an image to reduce noise. If a model is not predictive, then the performance of the engineered system cannot be predicted, and therefore cannot be optimized. In Feynman’s words, it is not a question of whether a model is “perfectly reasonable from the point of view of common sense.” But unintelligibility of the physical system results in the unintelligibility of the engineering solution, which is a physical implementation of a(n intelligible) mathematical solution. The epistemology of science dictates the epistemology of engineering. Recall Arendt: “Man can do . . . what he cannot comprehend.”

Implications of Unintelligibility

Scientists have accepted the limitations of human understanding. They have formulated scientific knowledge strictly in terms of mathematics, and have laid the foundations of scientific “truth” in terms of the predictive capacity of their mathematical theories. But what are the implications of unthinkable, unintelligible, and unspeakable knowledge for a world whose political and economic decisions must inevitably be related to that knowledge, and whose everyday mode of intellect and communication lies within ordinary language? If our destiny is tied to knowledge that can be neither understood nor spoken outside of mathematical formalism, can human understanding and political dialogue helpfully shape that destiny?

In his book Einstein, History, and Other Passions, the historian and philosopher Gerald Holton makes the following comment on the relationship between the typical intellectual and the unintelligibility of modern science:

By having let the intellectuals remain in terrified ignorance of modern science, we have forced them into a position of tragic impotence; they are, as it were, blindfolded in a maze through which they feel they cannot traverse. They are caught between their irrepressible desire to understand the universe and, on the other hand, their clearly recognized inability to make any sense out of modern science.9

Thousands of decisions are made daily by politicians, managers, and bureaucrats that are either directly or indirectly related to science. These involve input from various sources, discussions among colleagues, and ultimately a decision relating to law, regulation, product development, research expenditures, medical treatment, or a host of other societal issues. If the meaning of the subject matter cannot be conceived in terms of ordinary categories of understanding, and cannot be expressed in ordinary human language, then how can leaders untrained in mathematics and scientific epistemology make informed decisions?

They must turn to advisers. A scientific adviser might provide a conceptual model—for instance, presentation slides with figures intended to provide some sort of physical intuition regarding the issue at hand. This might be useful if the decision regards the accuracy of a cannonball: a cartoon showing two cannons firing balls that land within different radii of the target. But it is completely unsatisfactory in the commonplace situation in which the system involves hundreds or thousands of variables connected by myriad relations dynamically changing over time. Here scientific knowledge means understanding the highly complex mathematical system being used to model the phenomena. Any simplification of that is bound to give one an erroneous impression of the scientific knowledge.

In this context—unless he has received an education that includes a strong emphasis on scientific epistemology, and the mathematical training to understand that epistemology—a political leader is impotent. He must decide on an action based upon a rudimentary, and necessarily erroneous, appreciation of the issue at hand. Practically speaking, a leader need not know the mathematical particulars of a theory, but he must understand the validation process: what predictions are derived from the theory and to what extent have those predictions agreed with observations?

This is not to argue that leadership be confined to scientists and engineers, only that education include serious scientific, mathematical, and statistical courses. Certainly, one cannot expect good political leadership from someone ignorant of political philosophy, history, or economics, or from someone lacking the political skill to work productively amid differing opinions. The basic point is that good decision-making in a technical civilization requires fundamental knowledge of scientific epistemology.

This challenge—never fully resolved—has been present since at least the twentieth century. But new and growing challenges are now on the horizon.

The Limitations of Knowledge

Scientific models can generally be broken down into two types: deterministic and stochastic (random). With a deterministic model, given an initial state of the system, over time the system will, in principle, evolve into a unique state. A stochastic system, by contrast, can evolve into any of a number of different states, with its evolution described in terms of probability. This difference has significant implications for validation.

To validate a deterministic model, one can align the model and experiment with various initial states and check to see if predictions and observations agree. There might be some experimental variation, but in principle this can be reduced arbitrarily and slight disagreements ignored.

The situation with stochastic models is completely different. For a single initial condition, there are many destination states and these are described via the model by a probability distribution giving the likelihoods of ending up in different states. An experiment consists of many observation trajectories from a single initial state and the construction of a histogram giving the distribution of the experimental outcomes relative to that state. Validation concerns the degree of agreement between the theoretical, model-derived probability distribution and the data-derived histogram. Acceptance or rejection of the theory depends on some statistical test measuring the agreement between the two curves—and here it should be recognized that there is no universally agreed upon test.

This situation is depicted in Figure 1, each part of which shows an initial state, five terminal states, the theoretical curve predicted by the model, and a histogram constructed by counting the results of a large number of trajectories from the initial state. Whether the theory is accepted or rejected, at least for this initial condition, depends on some statistical measure of the closeness of the model-based and data-based curves. In this particular figure, the histogram is close to the model distribution in part A, whereas in part B it is to the left of the model distribution. Nevertheless, the decision to accept or reject will be based on a statistical test, which itself will depend on the number of experimental trajectories. Note that this figure is only for a single initial condition. Moreover, whereas the figure is one-dimensional, in practice its dimension depends on the size of the system, which can be large.
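To make the procedure concrete, the following sketch carries out this kind of validation in Python. The Gaussian model, the parameter values, and the choice of the Kolmogorov–Smirnov test are all invented for illustration; as noted above, there is no universally agreed-upon test.

```python
# Minimal sketch of stochastic-model validation: compare a model-derived
# distribution of terminal states with the outcomes of repeated trajectories.
# The Gaussian model and all parameter values here are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stochastic model: from a fixed initial state x0, the model
# predicts terminal states normally distributed about a drifted mean.
x0, drift, sigma = 1.0, 0.5, 0.2
model = stats.norm(loc=x0 + drift, scale=sigma)

# "Experiment": many independent trajectories from the same initial state.
# Simulated here; in practice these would be physical measurements.
n_trials = 500
observed = x0 + drift + sigma * rng.standard_normal(n_trials)

# One possible statistical test (among many): Kolmogorov-Smirnov agreement
# between the empirical distribution and the model's predicted distribution.
statistic, p_value = stats.kstest(observed, model.cdf)

# Accept or reject the model, for this initial condition only, at some
# significance level; the verdict depends on the test chosen and on n_trials.
alpha = 0.05
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
print("model rejected" if p_value < alpha else "model not rejected")
```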

Complex physical settings often result in stochastic models simply because it is not feasible—experimentally, mathematically, or computationally—to model the full system. The latent (left-out) variables produce random model behavior because they affect the system without being incorporated into the model.

Consider climate modeling. The climate is always changing. The scientific question is whether climate change can be predicted. Climate models can at best capture a part of the behavior of the physical system, and therefore are necessarily stochastic. Recalling Figure 1, this means that validation involves comparison between predicted distributions and data histograms. Construction of a histogram requires many independent observations of model behavior, which in the case of climate would involve many observations of the earth’s behavior over many years. Even if we were only concerned with a single initial state, say the one in which we are living today, how would we obtain hundreds of fifty-year observations for a fifty-year prediction? And even a single observation is problematic because it would require aligning the initial conditions of the model with those of the current earth and solar system—an extremely difficult task.

In a 2007 paper published in the Philosophical Transactions of the Royal Society, climate scientists Claudia Tebaldi and Reto Knutti state the fundamental problem:

The predictive skill of a model is usually measured by comparing the predicted outcome with the observed one. Note that any forecast produced in the form of a confidence interval, or as a probability distribution, cannot be verified or disproved by a single observation or realization since there is always a non-zero probability for a single realization to be within or outside the forecast range just by chance. Skill and reliability are assessed by repeatedly comparing many independent realizations of the true system with the model predictions through some metric that quantifies agreement between model forecasts and observations (e.g. rank histograms). For projections of future climate change over decades and longer, there is no verification period, and in a strict sense there will never be any, even if we wait for a century. The reason is that the emission scenario assumed as a boundary condition is very likely not followed in detail, so the observations from the single climate realizations will never be fully compatible with the boundary conditions and scenario assumptions made by the models. And even if the scenario were to be followed, waiting decades for a single verification dataset is clearly not an effective verification strategy. This might sound obvious, but it is important to note that climate projections, decades or longer in the future by definition, cannot be validated directly through observed changes. Our confidence in climate models must therefore come from other sources.10

The last sentence opens a window on our dilemma: we would like to use conceptually appealing models lacking scientific validity, but this is a high-risk game. Think of the days when the earth was believed to be at the center of the solar system. Ptolemy’s geocentric theory fit the data of his day quite well, and it certainly seemed plausible to human beings looking up at the sky.

Tebaldi and Knutti give ample warning in the case of climate models:

Most models agree reasonably well with observations of the present-day mean climate and simulate a realistic warming over the twentieth century (of course, the specific performance depends on each model/metric combination), yet their predictions diverge substantially for the twenty-first century, even when forced with the same boundary conditions.11

Arguments about whether or not a prediction made thirty years ago validates or invalidates a model are meaningless. Models are stochastic and predictions take the form of probability distributions. Single predictions lack scientific meaning. This is the import of Tebaldi and Knutti’s comment that “any forecast produced in the form of a confidence interval, or as a probability distribution, cannot be verified or disproved by a single observation or realization.” Here we are at the nub of the problem: the meaning of a scientific theory lies in the process of its validation, because this is what connects the mental construct to the physical world. It is fallacious to reject or accept a climate model based on a single-trajectory prediction.

There can be no doubt that climate change is an issue of vital importance to humanity. Yet instead of sober confrontation with the issue in the context of scientific limitations, there is nothing but childish invective. How many politicians appreciate the epistemological conundrum? How many have advisers who understand?

Medicine and Engineering

Medicine is a form of engineering in which the physician takes some action (drug, surgery, etc.) to alter the behavior of a physical system, the patient. Its underlying science is biology. Hence our ability to characterize medical knowledge depends on our ability to represent and validate biological knowledge.

At the cellular level, biology concerns the operation of the cell in its pursuit of life, not simply the molecular infrastructure that forms the physiochemical underpinnings of life. The activity of a cell is like that of a factory, where machines manufacture products, energy is consumed, information is stored, information is processed, decisions are made, and signals are sent to maintain proper factory organization and operation.12 Once a factory exceeds a very small number of interconnected components, coordinating its operations goes beyond a commonsense, nonmathematical approach. Cells have massive numbers of interconnected components.

Analogous to a factory’s constituent parts (electrical, mechanical, and chemical), a cell has physical-chemical constituent parts. But a factory is not defined by its constituent parts; rather, its regulatory (operational) logic defines it as an operational system whose purpose is to consume energy, maintain operations, and produce an output. Similarly, a cell’s regulatory logic defines it as an operational system whose purpose is to consume energy, maintain its life, participate in the life of the organism, and propagate. For both factory and cell, the regulatory logic determines the relations between the physical structures within the system and between the system and its environment.

The extraordinary complexity of biological knowledge is a direct consequence of the complexity of cellular regulatory logic, the intra-cell operational organization of molecular structures, and the inter-cell organization. Owing to this complexity, biological systems are beyond everyday intelligibility and intuition. Consequently, conceptual models are bound to differ substantially from actual cellular function and only mathematics can provide knowledge representation. Moreover, model construction and validation require intricate experiments and sophisticated statistics. The general framework will be formed within the theory of stochastic multivariate dynamical processes. Validation involves operational predictions derived from the mathematical regulatory model.

This means that modern medical knowledge must be fully embedded within mathematics. Complex regulation can be understood in no other way. Moreover, efficient knowledge acquisition must be guided by rigorous experimental design. There is a universe of potential knowledge in a single cell, but only a portion of cell architecture significantly relates to any particular disease. Experimentation should be guided by the objective of patient care—not loosely, but with the aim of acquiring knowledge that most affects the ability to mathematically derive optimal treatment for a particular disorder, such as regulatory failure indicated by abnormal gene activity. This can only be accomplished by research teams in which there are excellent experimentalists working hand in hand with excellent mathematical engineers trained in the modeling and control of stochastic systems.

Norbert Wiener, the father of cybernetics, recognized the need for this kind of teamwork. In 1948, he wrote,

It is these boundary regions of science which offer the richest opportunities to the qualified investigator. They are at the same time the most refractory to the accepted techniques of mass attack and the division of labor. If the difficulty of a physiological problem is mathematical in essence, ten physiologists ignorant of mathematics will get precisely as far as one physiologist ignorant of mathematics, and no further. If a physiologist who knows no mathematics works together with a mathematician who knows no physiology, the one will be unable to state his problem in terms that the other can manipulate, and the second will be unable to put the answers in any form that the first can understand.13

Wiener’s point is apparent to anyone familiar with the epistemology of science and the manner in which modern engineering depends on scientific knowledge. Unfortunately, our political leaders, administrators, and the bureaucrats who control the financial resources for medical research often appear to be oblivious to the demands of science and engineering.

Future Possibilities

While stochastic models present us with numerous difficulties, an even more perplexing conundrum faces contemporary scientists and engineers who wish to model highly complex systems involving hundreds or thousands of variables and model parameters. Owing to their sheer number, many model parameters cannot be accurately estimated via experiment and are left uncertain. As a consequence of uncertainty, for each different set of possible values for the unknown parameters, there is a different model—possibly an infinite number of models.

A related complexity problem arises owing to computational limitations. Even our most powerful computers are impotent once we get beyond fairly simple physical systems. This means that different people include different variables in their models for the same phenomenon, so we end up having many models. Each will have a justification, but they will yield different predictions. Because they are partial, it is unlikely that any will yield validating predictions. Once again, we are faced with model uncertainty.

Uncertain models are sometimes averaged to provide a description of the phenomena; however, it is difficult to determine how this averaging should be done and difficult to quantify the level of uncertainty. This can all be done in what is known as a “Bayesian” framework, which provides mathematical rigor while incorporating uncertainty; yet that does not overcome the fact that the resulting average model does not model the actual physical situation, but rather serves as an approximation to the physical relations occurring across all models.
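For readers who want to see the mechanics, here is a minimal sketch of such averaging in a Bayesian framework. The three Gaussian candidate models, the uniform prior, and the data-generating values are all invented; the point is only that the averaged predictive distribution describes no single model.

```python
# Minimal sketch of Bayesian model averaging over uncertain models.
# All candidate models, priors, and data values are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Three candidate models of the same phenomenon, differing in an uncertain
# parameter (here, the mean of a Gaussian observation model).
candidate_means = np.array([0.0, 0.5, 1.0])
prior = np.full(3, 1 / 3)  # uncertainty across the candidate models

# Observations come from the actual physical system, which need not match
# any single candidate exactly.
data = rng.normal(loc=0.4, scale=1.0, size=50)

# Posterior model weights: prior times likelihood of the data under each
# candidate, normalized (computed in log space for numerical stability).
log_lik = np.array([stats.norm(m, 1.0).logpdf(data).sum() for m in candidate_means])
weights = prior * np.exp(log_lik - log_lik.max())
weights /= weights.sum()

# The averaged predictive density is a mixture across the candidates: an
# approximation spanning all models, not a model of the actual system.
x = 0.4
avg_density = sum(w * stats.norm(m, 1.0).pdf(x)
                  for w, m in zip(weights, candidate_means))
print("posterior weights:", np.round(weights, 3))
print(f"averaged predictive density at x = {x}: {avg_density:.3f}")
```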

In engineering, given a model, optimal Bayesian methods provide the best solution and a quantification of uncertainty. There is a cost to this approach, however, because the solution is not optimal for the actual physical system but only on average across the possible physical systems. Ignorance has a price. This kind of approach continues to receive attention because so many systems of importance, including climate and medicine, involve uncertain modeling. Validation in the usual sense is impossible because the overall modeling includes many individual models, and observations will come from the actual physical system, not those associated with possible models arising out of uncertainty. If this all sounds a bit arcane, that is because it is. Moreover, it requires substantial background in mathematical statistics to understand the formalism.
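A toy calculation, of my own construction, illustrates the price of ignorance: under a quadratic cost, the action that is optimal on average across candidate models pays a penalty under the actual system relative to the action that would be optimal if the true model were known.

```python
# Toy illustration of the cost of model uncertainty: the robust action is
# optimal on average across candidate models, not for the actual system.
# All parameter values and the quadratic cost are invented for illustration.
import numpy as np

# Candidate parameter values with posterior weights (as in the previous
# sketch), and a quadratic cost for taking action a when theta is true.
thetas = np.array([0.0, 0.5, 1.0])
weights = np.array([0.3, 0.5, 0.2])

def cost(a, theta):
    return (a - theta) ** 2

# Robust action: minimizes the weighted-average cost across the candidates.
# For quadratic cost this is the posterior mean of theta.
a_robust = float(np.dot(weights, thetas))

# Suppose the actual system has theta = 0.5. Knowing that, the optimal
# action would be a = 0.5; the robust action pays a penalty for ignorance.
theta_true = 0.5
print(f"robust action = {a_robust:.3f}")
print(f"cost under actual system, robust action:        {cost(a_robust, theta_true):.4f}")
print(f"cost under actual system, model-specific optimum: {cost(theta_true, theta_true):.4f}")
```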

The problem of the twentieth century was internal model randomness, as exemplified by quantum theory in physics. The problem of the twenty-first century is model uncertainty, which occurs due to our desire to study and apply engineering to complex systems across virtually all domains—a desire motivated by high-performance computing but, ironically, severely limited by our inability to compute.

Confronting the problems of complexity, validation, and model uncertainty, I have previously identified four options for moving ahead: (1) dispense with modeling complex systems that cannot be validated; (2) model complex systems and pretend they are validated; (3) model complex systems, admit that the models are not validated, use them pragmatically where possible, and be extremely cautious when interpreting them; (4) strive to develop a new and perhaps weaker scientific epistemology.14

The first option would entail not dealing with key problems facing humanity, and the second, which seems popular, at least implicitly, is a road to mindless and potentially dangerous tinkering. Option three is risky because it requires operating in the context of scientific ignorance; but used conservatively with serious thought, it may allow us to deal with critical problems. Moreover, option three may facilitate productive thinking in the direction of option four, a new epistemology that maintains a rigorous formal relationship between theory and phenomena.

Can we achieve a satisfactory new epistemology? We will know when it is achieved, but we should not expect a grand breakthrough anytime soon. These kinds of things move very slowly, if at all. Humanity waited two thousand years following Aristotle for the arrival of modern science in the seventeenth century. But today we are surely grateful to Bacon, Galileo, and Newton for the benefits received. On the more optimistic side, it took less than 250 years to move from Newton to Einstein and Schrödinger.

The Desire for Truth

Science is beset by two opposing anti-scientific epistemological forces. One aims to explain phenomena in terms that are intuitively understandable—for instance, causality or grand verbal explanations. This is Aristotelian science: explanation with no differentiation between science and metaphysics, in which there are no operational definitions connecting theory to experience. On the other side, there is radical empiricism, which collects data (experience) and fits mathematical models to the data, again without a connecting theory. In some sense, both opposing forces agree: there need not be an epistemology connecting mental constructions to physical measurements. Both represent a return to pre-seventeenth-century thinking.

These two anti-scientific approaches require less effort than science. In neither case does theory have to be supported by prediction. In the first case, the theory is a construct of the intellect and stops there. In the second, the theory is simply a mathematical construct that fits existing data, without any tie to the underlying physical processes, in the sense that we can say nothing rigorous about the relation between the mathematical construct and future observations.15 From an epistemological standpoint, neither provides knowledge or has any meaning. The fundamental part of scientific research—the generation of knowledge—is abandoned.
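A small numerical sketch, with a data-generating process invented for the purpose, shows why such curve-fitting yields no knowledge in the scientific sense: a polynomial of high enough degree fits past observations almost exactly yet says nothing rigorous about future ones.

```python
# Sketch of "radical empiricism": a model fit to existing data, with no
# validated tie to the underlying process, fails on future observations.
# The sine process and noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

# "Past experience": ten noisy observations of an underlying process.
x_past = np.linspace(0.0, 1.0, 10)
y_past = np.sin(2 * np.pi * x_past) + 0.1 * rng.standard_normal(10)

# A degree-9 polynomial interpolates the ten points almost exactly.
coeffs = np.polyfit(x_past, y_past, deg=9)
fit_error = np.abs(np.polyval(coeffs, x_past) - y_past).max()

# "Future experience": new points from the same process, between and just
# beyond the old ones. The unvalidated fit can fail badly there.
x_future = np.linspace(0.05, 1.05, 11)
y_future = np.sin(2 * np.pi * x_future)
prediction_error = np.abs(np.polyval(coeffs, x_future) - y_future).max()

print(f"max error on data already seen:   {fit_error:.2e}")
print(f"max error on future observations: {prediction_error:.2e}")
```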

Modern science is grounded in an epistemology that provides a framework for knowledge in the context of human experience. In the words of Niels Bohr, “In our description of nature the purpose is not to disclose the real essence of the phenomena but only to track down, so far as it is possible, relations between the manifold aspects of our experience.”16 Science works because we function in the context of our experience, and science characterizes our experience in a functional way. To know is to be able to predict future experience, and to do so in a precise way. Truth, in a scientific sense, exists within the scientific epistemology, which provides the rules of the game.

In The Revolt of the Masses, published in 1930, José Ortega y Gasset wrote,

Whoever wishes to have ideas must first prepare himself to desire truth and to accept the rules of the game imposed by it. It is no use speaking of ideas when there is no acceptance of a higher authority to regulate them, a series of standards to which it is possible to appeal in a discussion. . . . What I affirm is that there is no culture where there are no standards.17

Einstein, a kind of pantheist who identified God with the laws of nature, went so far as to say, “Science can be created only by those who are thoroughly imbued with the aspiration toward truth and understanding. This source of feeling, however, springs from the sphere of religion.”18 We may not be able to attain truth in this sense, but there needs to be an aspiration toward truth, as defined within a humanly meaningful epistemology. This, Einstein argued, is rooted in religion. The salient point is that Einstein possessed a drive to know, and that drive came from a sense of the sacred deep within.

Whether or not the desire to know arises from a religious instinct, the pursuit of scientific knowledge requires profound desire. One might engage in philosophic discussion concerning the existence of God or truth, or even of the existence of body and mind, as did David Hume. But when one is buried in strenuous meditation on science, one ceases to be a nihilist and becomes a person desirous of knowledge—and knowledge that has a truth value according to an epistemology that supplies standards.

Hume, the great skeptic, had some down-to-earth advice for those sophisticates who claim the self-contradictory axiom that all is relative, who postulate a logical concept of truth, and who then think it profound that it cannot be proven a posteriori. In his Treatise of Human Nature, Hume rejected this perspective with his usual wit:

I dine, I play backgammon, I converse, and am merry with my friends; and when, after three or four hours’ amusement, I would return to these speculations, they appear so cold, and strained, and ridiculous, that I cannot find in my heart to enter into them any further. . . . The skeptic must assent to the principle concerning the existence of body, though he cannot pretend, by any argument of philosophy, to maintain its veracity.19

In showing that strict empiricism makes it impossible to demonstrate the existence of mind, Hume did the world a great favor. He knows his mind and body exist. He knows it every time he makes merry with his friends. But he does not know it as Plato would have him know it, as if knowing existence were like knowing a theorem of geometry. Reading Hume is a delight because he puts to bed the notion of certainty outside of logic and mathematics. He recognizes that functional knowledge involves expectation, not certainty. Nihilism rejects metaphysical certainty and takes nothingness as its negation. What a bizarre (or should we say, childish) notion!

Decades after this “postmodern” relativism came to be celebrated, one might have expected science to retreat from political discourse. But the opposite has occurred: today we are inundated with claims of “science” from every corner. Huge amounts of data are being gathered, and models of ever-increasing complexity are being fit to them, seemingly limited only by the ability to compute. The value of science as a credential—something used to give credence to one’s opinion—seems stronger than ever.

Beneath the surface, however, the ubiquity of science in political debate may be a symptom of its decline, of a loss of confidence and authority. The success of science during the three centuries since Newton’s Principia of 1687 is unrivaled by any intellectual endeavor in human history. But the epistemology behind those scientific achievements is being disregarded on a grand scale. Science as knowledge is increasingly ignored. Data mining looks like science to the extent that it involves mathematical functions and data. What is missing, however, is the connection between mathematics and verifiable prediction—between mind and phenomena—that gives the whole thing meaning. But if meaning is denied a priori, then why care about science?

This article originally appeared in American Affairs Volume IV, Number 2 (Summer 2020): 90–106.

Notes

1 Richard P. Feynman, QED: The Strange Theory of Light and Matter (Princeton: Princeton University Press, 1985), 10.

2 Feynman.

3 Erwin Schrödinger, Science and Humanism: Physics in Our Time (Cambridge: Cambridge University Press, 1951), 25.

4 Erwin Schrödinger, Science and the Human Temperament, trans. James Murphy (London: Allen, 1935), 124.

5 Hannah Arendt, Between Past and Future (New York: Penguin, 2006), 264.

6 Hans Reichenbach, The Rise of Scientific Philosophy (Berkeley: University of California Press, 1963), 80.

7 Morris Kline, Mathematics and the Search for Knowledge (New York: Oxford University Press, 1985), 122–23, 146.

8 James Jeans, The Mysterious Universe (Cambridge: Cambridge University Press, 1931), 131.

9 Gerald Holton, Einstein, History, and Other Passions: The Rebellion against Science at the End of the Twentieth Century (Cambridge: Harvard University Press, 1996), 55.

10 Claudia Tebaldi and Reto Knutti, “The Use of the Multi-Model Ensemble in Probabilistic Climate Projections,” Philosophical Transactions of the Royal Society A 365, no. 1857 (June 14, 2007): 2065.

11 Tebaldi and Knutti, 2066.

12 Edward R. Dougherty, Epistemology of the Cell: A Systems Perspective on Biological Knowledge (Hoboken, N.J.: Wiley, 2011).

13 Norbert Wiener, Cybernetics; or, Control and Communication in the Animal and the Machine, 2nd ed. (Cambridge: MIT Press, 1961), 2–3.

14 Peter V. Coveney, Edward R. Dougherty, and Roger R. Highfield, “Big Data Need Big Theory Too,” Philosophical Transactions of the Royal Society A 374, no. 2080 (November 13, 2016).

15 Edward R. Dougherty, The Evolution of Scientific Knowledge from Certainty to Uncertainty (Bellingham, Wash.: SPIE, 2016).

16 Niels Bohr, Atomic Theory and the Description of Nature (Cambridge: Cambridge University Press, 1961), 18.

17 José Ortega y Gasset, The Revolt of the Masses (New York: Norton, 1932), 71–72.

18 Albert Einstein, Ideas and Opinions (New York: Three Rivers, 1954), 46.

19 David Hume, Treatise of Human Nature, vol. 1 of The Philosophical Works, 4 vols. (Boston: Little, Brown, 1854), 331, 237.

