Let us now discuss the question of geography's scientific status in earnest, adding substance to the sketch provided above. For the term 'science' has not just been a rhetorical weapon used by geographers; it has also described specific ways of investigating reality. It is these ways I want now to explain in this and the next two sections before moving on to an assessment of geography's scientific credentials.
I suggested above that the first substantive attempt to make geography a science was a mid-twentieth-century affair. It was arguably inaugurated - at least symbolically - by one of Richard Hartshorne's sternest critics, Fred K. Schaefer. Hartshorne's The Nature of Geography (1939) had defined the discipline as the study of 'areal differentiation'. For Hartshorne, geography was a synthesizing or idiographic discipline. Unlike the 'systematic' subjects (such as chemistry), geography looked at how multiple human and physical phenomena came together at the earth's surface. Hartshorne thus favoured the established idea that geography was the study of regional difference.
Hartshorne's vision was not attractive to all geographers, however. Schaefer, originally an economist, was a German émigré based in the geography department at the University of Iowa. In Europe he had been heavily influenced by the Vienna Circle, a group of philosophers, linguistic theorists and mathematicians dedicated to spelling out the exact nature of scientific inquiry. In 1953 Schaefer published an essay in a leading professional geography journal (the Annals of the Association of American Geographers) entitled 'Exceptionalism in geography'. The exceptionalism he criticized was the idea, expressed by Hartshorne, that geography was unlike the specialist sciences because it studied unique phenomena (exceptions to rules). Schaefer insisted that the world is not a mosaic of singular regions with little in common. Rather, he maintained that careful observation would reveal that human and physical phenomena were organized into regular spatial patterns. This meant that geography could be a nomothetic or law-seeking discipline, just like many of the physical sciences. Its role would be to discover the 'morphological laws' governing different geographical phenomena (e.g. river systems or people's choice of where to buy a house). Laws are regular associations, modes of behaviour or patterns that are relatively invariant and which apply to all the phenomena they describe. They can be deterministic or probabilistic. They are, in the physical sciences at least, usually valid across time and between places and regions. For Schaefer, where the natural and social sciences discover 'process laws' (like the one describing the temperature-pressure relationship in gases), geography's role would be to discover the spatial patterning of the visible phenomena that those process laws lie behind (morphological laws).
And, because such discovery could only proceed on the basis of meticulous observation of numerous instances of these visible phenomena, it followed that for Schaefer geographers would have to become specialists (geomorphologists, economic geographers, hydrologists, etc.) rather than the generalists that Hartshorne so admired.
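What a 'morphological law' of this kind looks like in practice can be illustrated with the rank-size rule for city populations, a well-known empirical regularity of the sort spatial scientists later pursued. The sketch below is my own illustration, not an example from the text, and the city populations in it are hypothetical: the rule predicts that the nth-largest city in a region has roughly 1/n the population of the largest.

```python
# Illustrative sketch (hypothetical data): the rank-size rule as an
# example of a probabilistic 'morphological law' - a regular spatial
# pattern that holds approximately across many city systems.

def rank_size_prediction(largest_population: float, rank: int) -> float:
    """Predicted population of the city at a given rank under the rule."""
    return largest_population / rank

# Toy data: city populations in millions, largest first.
observed = [8.0, 4.1, 2.6, 2.1, 1.5]

for rank, population in enumerate(observed, start=1):
    predicted = rank_size_prediction(observed[0], rank)
    print(f"rank {rank}: observed {population:.1f}m, predicted {predicted:.1f}m")
```

The fit is never exact - which is precisely why such laws are probabilistic rather than deterministic regularities.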
The precise influence of Schaefer's paper on post-war geographers is unclear, but what is certain is that others soon followed his lead, knowingly or otherwise. As a division between (and divisions within) human and physical geography began to solidify, geographers pursued the common goal of describing and explaining spatial patterns. Chorley and Haggett (1967: 20) expressed this view succinctly: 'that there is more order in the world than appears at first sight is not apparent until the order is looked for'. But was the search for geographical order via specialization the only thing that made post-war geography more 'scientific' than pre-war geography? I mentioned in the previous section that the post-war 'spatial scientists' (as they became known) were committed to the scientific world-view, the scientific method, and the use of quantitative and statistical methods - a trinity that is sometimes called positivism. So let me now explain each of these. The scientific world-view is a set of precepts or principles that define the general nature of science. It was the French philosopher Auguste Comte who first codified this world-view in the early nineteenth century. Comte was writing at a time when dogmatism, superstition, mysticism and royal diktat still governed much of people's worldly knowledge. For Comte, science should possess five characteristics: le réel, la certitude, le précis, l'utile and le relatif (Habermas, 1972).
The first meant that scientific knowledge was based on direct experience and observation of reality; the second meant that this observation and experience should be replicable so that all scientists could test its accuracy; le précis meant that all scientific statements about reality should be formally testable; l'utile meant that scientific knowledge should be practically useful because it was based on a correct understanding of how the material world functions; finally, le relatif meant that scientific knowledge was unfinished, progressing by continual testing and exploration of new topics. In sum, scientific knowledge would be objective (or value-free), universal, exact, useful and ever-expanding. It would dispel illusions and liberate humankind through its commitment to the discovery of truth.
But how was this thing called science to be undertaken in practice? This is where the Vienna Circle of 'logical positivists', whose work had influenced Schaefer, came in. The Vienna Circle was (and still is) famously associated with explaining the 'proper' scientific method. Its understanding of this method was inspired, in part, by observation of how the 'experimental' or laboratory sciences, like physics, operated. The Vienna Circle wanted the method to be common to all the sciences, so that different academic disciplines were distinguished not by how they studied but by what they studied. This method was based on the principle that if a statement or proposition about the world cannot be factually tested, it is meaningless and thus unscientific. The Vienna Circle called the method 'deductive-nomological' in order to distinguish it from the inductive or Baconian method (see Figure 4.1). The latter, implausibly, assumes that scientists observe the material world with no preconceptions and then make inferences based on a limited number of observations that are applied to a much wider set of similar but unobserved phenomena (so-called 'extended inference'). Against this, the deductive-nomological method takes the following form. Scientists, equipped with a set of hunches, observe the portion of reality that interests them. They then form an impression of both what exists and why and how it comes to be the way it is. The why and how impressions (explanatory concerns) are then codified into an initial law, model or theory. I defined a law earlier. In basic terms, a model is a simplified representation of reality that aims to depict the key causal variables at work (or the 'signals in the noise'). A theory is a more sophisticated and detailed attempt to offer a rational explanation of reality, comprising a set of consistent, logical statements that would account for the existence of the 'what' (a descriptive concern).
In time, models can become theories and theories laws, but this is not to imply that laws are somehow the highest form of scientific knowledge. The initial laws, models and theories that scientists devise are then used to generate empirically testable hypotheses. In turn, these hypotheses are tested by using appropriate methods to gather relevant data. These data are then analysed - again, using appropriate methods - in order to determine whether the laws, models and theories initially proposed can logically and consistently explain those data. If, after a good deal of data have been gathered and analysed, the laws, models and theories are found wanting, then they are either rejected or else modified until they fit the evidence. Eventually, after repeated verification (i.e. a persistent search for data that show the modified laws, models and theories to be true), the Vienna Circle believed one could arrive at explanations and, indeed, predictions of the following form:
Well-confirmed laws, theories or models
+ Statements of prevailing local conditions
= E (Past, present or future event/s)
Here, a set of empirical events follows necessarily from - and so can be described, explained and predicted using - a set of well-confirmed laws, theories or models coupled with factual information about the local conditions prevailing at the site where the explanation or prediction applies. For instance, if a hydrologist has a set of general laws about soil porosity and water throughflow, plus information about the local soil type and its antecedent moisture content, s/he might be able to explain and predict why and whether overland flow, as opposed to sub-surface flow, occurs during a particular rainstorm. Some years after the Vienna Circle disbanded, the philosopher of science Karl Popper amended its account of the deductive-nomological approach by arguing that science is more rigorous and efficient if it is based on falsification rather than verification. That is, he argued that scientists should look for evidence that disproves their laws, theories and models. His logic was that one refutation will lead scientists to reject or amend a proposed explanation, whereas even a thousand verifications only tell us that the explanation has not failed as yet. Popper's 'critical rationalism' (as it has been called) was thus a speedier, more exacting route to scientific truths - though perhaps too exacting for most practising scientists.

[Figure 4.1 The inductive route and the deductive-nomological route to explanation]
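The deductive-nomological form of the hydrologist's reasoning can be sketched in a few lines of code. This is a minimal illustration of my own, not a real hydrological model: the 'law' is a deliberately simplified rule (overland flow occurs when rainfall intensity exceeds the soil's infiltration capacity, and wetter soil infiltrates less), and all the names and numbers are hypothetical.

```python
# Sketch of a deductive-nomological explanation/prediction (toy example).
# Law + statements of local conditions together entail the event E.

def infiltration_capacity(base_capacity_mm_hr: float,
                          antecedent_moisture: float) -> float:
    """Local condition: wetter soil absorbs less water (toy linear rule)."""
    return base_capacity_mm_hr * (1.0 - antecedent_moisture)

def predict_flow(rain_mm_hr: float, base_capacity_mm_hr: float,
                 antecedent_moisture: float) -> str:
    """Deduce the event E from the 'law' plus the local conditions."""
    capacity = infiltration_capacity(base_capacity_mm_hr, antecedent_moisture)
    return "overland flow" if rain_mm_hr > capacity else "sub-surface flow"

# Hypothetical conditions: soil with 20 mm/hr capacity when dry, already
# 60% saturated, during a storm delivering 10 mm/hr of rain.
print(predict_flow(rain_mm_hr=10.0, base_capacity_mm_hr=20.0,
                   antecedent_moisture=0.6))
```

Note that Popper's point applies directly here: a single storm in which the prediction fails would oblige the hydrologist to amend the 'law', whereas any number of successful predictions only show it has not failed yet.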
So much for scientific method. I mentioned above that specific techniques of both data gathering and data analysis were a key part of this method. It would be futile to try to list all the techniques that scientists have used in their research over the years. Suffice to say that many of these techniques rest upon very precise measurement and equally precise analysis. The appeal of quantitative methods (as opposed to qualitative ones) is that they offer this kind of precision. For instance, there is a difference between observing that water boils when it gets hot and knowing that it normally boils at 100 degrees Celsius at standard atmospheric pressure.
Having discussed this scientific trinity - world-view, method and quantification - it would seem logical to examine the kind of research the spatial scientists did in the 1960s and 1970s and see whether the practice matched the ideas laid down by Comte and the Vienna Circle. In fact, things are not so simple. No one, not even Schaefer, laid out a detailed template at the start of geography's 'scientific and quantitative revolution' for others to follow (in fact he died before the 1953 paper was in print). For instance, Schaefer's paper said nothing about scientific method, while it took until 1969 for someone (David Harvey) to spell out the deductive-nomological route to explanation and prediction. Meanwhile, none of the leading spatial scientists had much to say about Comte's positivist world-view - it was very much a 'hidden philosophy'. Yet, as we shall see in the next section, critics of spatial science in the 1970s and in more recent textbook treatments have given the impression that things were otherwise - that spatial scientists had a 'grand plan' and a worked-out conception of science from the very start. The reality is that they operated in a more piecemeal fashion, with little formal understanding of the positivist world-view or method.
In light of this, what can we say about the spatial scientists? We cannot say that they all adopted every last detail of the scientific trinity outlined above. We can say, though, that these geographers tried vigorously to describe and explain a variety of spatial patterns at different scales; that they did so using formal laws, theories and models (the titles of Bunge's book and of Haggett and Chorley's second volume openly advertised the fact); that they thought of themselves as relatively objective seekers after geographical truths; and that they enthusiastically employed quantification in their research. In human geography, for example, the 1960s were the era when Christaller's central place theory, Weber's industrial location theory, Alonso and Muth's urban land use theories and von Thünen's agricultural land use theory (among many others) were tested and refined. Human geographers hunted for static and dynamic geographical patterns (such as urban hierarchies and the spatial diffusion of innovations respectively). And they did so using statistics (descriptive and inferential), as well as a number of other numerical measures and procedures. The question is: was this kind of research scientific? And if it was not, why not?
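One of the standard numerical tools of this era, the nearest-neighbour index, shows what the statistical hunt for spatial pattern looked like in practice. The implementation and the point data below are my own toy illustration: the index compares the observed mean distance between each point and its nearest neighbour with the distance expected if the points were scattered at random, so that values near 0 indicate clustering, values near 1 randomness, and values above 1 dispersion.

```python
import math

# Sketch of the nearest-neighbour index (toy data), a descriptive
# statistic used by spatial scientists to classify settlement patterns.
# R near 0 = clustered, R near 1 = random, R above 1 = dispersed.

def nearest_neighbour_index(points, area):
    """Observed mean nearest-neighbour distance / expected under randomness."""
    n = len(points)
    observed = sum(
        min(math.dist(p, q) for q in points if q is not p) for p in points
    ) / n
    # Expected mean nearest-neighbour distance for a random point pattern.
    expected = 0.5 / math.sqrt(n / area)
    return observed / expected

# Hypothetical 'settlements' on a regular grid in a 100 x 100 study area:
# a maximally even spread, so the index comes out well above 1.
grid = [(x, y) for x in (10, 30, 50, 70, 90) for y in (10, 30, 50, 70, 90)]
print(round(nearest_neighbour_index(grid, area=100 * 100), 2))  # prints 2.0
```

A real study would follow the descriptive statistic with an inferential test of whether the departure from randomness is significant - the two uses of statistics mentioned above.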