‘Science’ is derived from Latin scientia, a complicated word whose apparently simple meaning, ‘knowledge’, conceals an origin in the idea of finding out about things by subdividing them and sorting the pieces. But to call science ‘knowledge’, with the implication of certainty, is an idea long past its prime: modern science is far more a form of enquiry into natural phenomena, a consensus of information held at any one time (all of which is, so to speak, on loan, and may be modified by new discoveries and new interpretations at any moment), and, most loosely of all, a community of people engaged in enquiry into the natural world.
The word ‘science’ was not used, even in these senses, until the mid-19th century. Until then, it simply meant the study or skill of any field of knowledge at all: there was a ‘science’ of playing the viol, another of baking bread, another of parliamentary government, and so on. ‘Scientists’ in our modern sense were at this time called ‘natural philosophers’ (as opposed to moral philosophers: see below). The first person to describe a ‘natural philosopher’ as a ‘scientist’ was William Whewell, in 1840, and the name has stuck. The separate ‘sciences’ of the time (that is, areas of enquiry into the natural world) were astronomy, biology, chemistry and so on. These were ordered, in the mid-20th century, into two large groupings, life sciences (concerned with the animate world) and physical sciences (concerned with the inanimate world). Technology is science applied to the practical world, and there is also a group of other disciplines which use scientific method to a greater or lesser extent and which are often called ‘sciences’, though some prefer the word ‘studies’: anthropology, psychology, the social sciences and others. Pseudosciences, as the name implies, are disciplines which claim (or once claimed) the same kind of rigour as the natural sciences, but which on investigation seem (at least to non-devotees) not to live up to those claims: astrology, parapsychology, spiritualism and so on.
The quest of science begins with the human attempt to come to terms with the universe. It takes a different path from such approaches as myth or religion. In these, the power in the universe, sometimes even the universe itself, is personalized, assumed to have character and identity (usually on the human model). With such a universe, with such powers, it is possible to form a relationship (again on the human model), but objective understanding is neither possible nor the point. If, by contrast, you treat the universe as non-arbitrary, non-controlled, as an impersonal machine, it is as pointless trying to form a relationship with it as it would be with a kettle or a wall-clock. Instead, understanding is a possible, even desirable, endeavour, on the assumption that the governing principles of the universe are not the whims or grand designs of gods and demons, but a number of unchanging, objective and discoverable rules. (The idea that the rules do not change, that the ‘laws’ of the universe are fixed and are the same everywhere, literally ‘universal’, is as fundamental to this approach as faith is to religion. It is, in modern science, undergoing some shocks.)
If the universe is assumed to be governed by gods and spirits, an accepted way to describe it is to explain natural phenomena (such as lightning flashes or the cycle of the seasons) with stories. Storms are caused by irritation among the heavenly powers; the Sun needs to be helped by humans (for example by worship or sacrifice) to return each morning from the world of dark to the world of light; whatever is or happens in the universe is there because some god or other personalized power decided that it should be there. Such ideas can be extremely simple (like the myths described above), and they can also act as the basis for elaborate, beautiful and satisfying systems of arts, belief, civilization and custom. If, by contrast, the universe is assumed to be impersonal, to be controlled by laws which can be discovered and understood, the method is to look at natural phenomena, find their points of similarity or difference (if any), ponder them objectively and reach logical conclusions about them. The quest is not organic, like that of a tree (as is the proliferation of myths and religious belief-systems); it is a progression, step by logical step—there may be side-paths and detours, but the progression continues nonetheless.
In such a model, it is necessary to start somewhere. Each journey, in the cant phrase, has to begin with the first step. This step cannot be assumed to lead from any other, or it could not be the first step. Therefore it must in some way be so obvious, so self-evident and simple, that it is impossible to disagree with it. The ancient Greeks, who (so far as is known) first started the systematic investigation of natural phenomena, called such steps ‘axioms’. If something is axiomatic, it does not need proof: it is itself the basis for a system of logical proof of other things. For example, the Greeks would have claimed as axiomatic that fire is hot and ice is cold, that rock is harder than water, that the shortest distance between two points is a straight line, that parallel lines never meet.
All such ideas depend on observation of Nature—and indeed it is possible to live one's life comfortably and securely simply by accepting them, without bothering about explanations or derivations. There is a continuum of common-sense ‘knowledge’ about the natural world which is enough for most people. Just as one knows that to buy food one needs goods or money to exchange for it, without bothering to go into all the complexities of economic theory and fiscal management, so one knows, without needing to swallow a scientific textbook, that if water is heated it boils (and scalds), that clay left in the Sun hardens, that horse-manure used as a mulch for roses makes them grow better, and so on. The ‘scientific’ thinkers of most ancient societies, most notably the Greeks, left all such practicalities to others. They assumed that the logical systems which could be built on such axioms were of no particular interest in daily life, that they were a preserve of the mind alone, and therefore of specially trained or talented individuals who gave their attention to such matters—‘natural philosophy’—as others gave their attention to human character, belief or behaviour, extrapolating logical systems from axioms in exactly the same way, calling the resultant activity ‘moral philosophy’.
This method of enquiry is deduction. Its appeal is almost entirely intellectual, and its rigour and beauty led to its being regarded, in ancient times, as one of the highest achievements of human thought. Its beauty is the vastness and elegance of the structures which can be built on the original axioms: for example, if you take as axiomatic that the most perfect form of curve is a circle, and that the heavens (as opposed to the brute Earth) are perfect, then you deduce that the bodies visible in the heavens, and their movement, are patterned in circles. If you further assume that the Earth is stationary in the heavens (and it does not, apparently, move under one's feet except in earthquakes), then you assume that it is the still centre of the universe, and that the other heavenly bodies move round it—they can be perceived to move—in circles. Another elegance of the system, as an intellectual construct, is that it seems to be finite. When you reach the end of all deductions possible from a given axiom, however complex the system created, you stop: all knowledge in that particular area has been achieved. Euclid's book Elements is one of the most impressive achievements of the process. It takes the axioms of geometry, one by one, and deduces from each a logical edifice, each part of which is directly traceable back to the original axiom, and seems to reach a logical conclusion, as if there were no more to say on the subject. To ‘know’ geometry, therefore, might simply be to ‘know’ Euclid's Elements: this is what intellectuals in the West believed for some 2,000 years. In all human history, few books except religious sacred writings have had such long-lasting and widespread authority.
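The deductive edifice can be illustrated in miniature. The following is a modern paraphrase (not Euclid's own wording) of how the very first proposition of the Elements leans on one of his axioms, the ‘Common Notions’:

```latex
% Common Notion 1 (axiom): things equal to the same thing
% are equal to one another.
\[
  a = c \;\text{ and }\; b = c
  \;\Longrightarrow\;
  a = b
\]
% Proposition I.1 (sketch): to build an equilateral triangle
% on a segment AB, draw a circle of radius AB about A and
% another about B, and let C be a point where they meet. Then
\[
  AC = AB, \qquad BC = AB
  \;\Longrightarrow\;
  AC = BC ,
\]
% by Common Notion 1; so AB = AC = BC, and the triangle ABC
% is equilateral. The conclusion is traceable directly back
% to the axiom, with no appeal to measurement or experiment.
```

Every later proposition is stacked on earlier ones in the same way, which is what gives the system its appearance of finality.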
The essential flaw in this kind of ‘science’ is that it assumes that the axioms are infallible. If this or that phenomenon in Nature, or the deduction made from it by this or that observer, fails to ‘fit’ the theory, then the theory is right, the natural phenomenon is an aberration and the observer has made a mistake. There is no need for continuous practical observation: deductive systems depend on thought alone. This view was accepted throughout the Western world, and led to the elevation of scholasticism as a form of intellectual exploration, and to the marginalization of observation and experiment, for some 2,000 years. Christian divines, notably St Thomas Aquinas in the 13th century, even sought to produce a whole system of belief and practice by deduction, a kind of ‘scientific’ Christianity. Others of the same period, for example Albertus Magnus, set out to synthesize and intellectualize all knowledge about the divine, human and natural worlds, to make all experience part of the same deductive system. Those who sought other ways of explaining the perceived phenomena of the universe, and tried to explore them by observation and experiment, were at best tolerated by the Christian establishment (Roger Bacon is a case in point), or were demonized, denounced and persecuted with a kind of mechanistic rigour (for example practitioners of alchemy, rational medicine and witchcraft). In the East, by contrast, natural philosophy and religion were kept apart, and practical observation was never excluded, so that ‘science’ was able to make progress, and to keep its relationship with the natural world (for example in heavenly observation and mathematics) in a way impossible in the West.
The second major problem with early investigation of the natural world was the absence of instruments of any kind more substantial than the plumb-line and abacus. Natural philosophers had no need of them, as their investigations were remote from the real world; in realms where practicality was essential (for example building-work and commerce), technology was sufficient for need, or if not, need was tempered to technology. The most important developments in ‘scientific’ thought, in the first millennium of the modern era, were all in the area of calculation (that is, mathematics excluding geometry, which had, in Elements, its own sacred text), and its comparatively advanced state was due entirely to need: making charts of ever-increasing complexity for such things as the inundations of the Nile, the mating-sequence of the Chinese Emperor and his various wives and concubines, or the commercial and financial transactions brought about by the burgeoning of international trade. There were no microscopes, no telescopes, no accurate timepieces, no tables or mechanical means of calculation; all observation was limited by the human eye, all manipulation by the skill of the human hand. The miracle is that despite this, astronomers in particular made huge strides in charting the heavens, at first in the Middle East, and then in Europe, where in 1543 Copernicus, on the basis of naked-eye observation and interminable calculation, suggested that one of the most fundamental axioms of the astronomy of the last 3,500 years was mistaken: the Sun, not the Earth, was the centre of the universe. (Lacking instruments, he had no concept of a universe beyond the solar system.)
When Kepler, early in the 17th century and working from Brahe's observations, showed that the planets' orbits were elliptical and not circular, and when Galileo, at about the same time, first used the newly-invented telescope to support Copernicus and then, by rolling balls down ramps (not, as legend has it, by dropping balls from the top of the Leaning Tower of Pisa), shattered another of the major axioms of ancient Greek physics, that a given weight of some ‘heavy’ material (such as lead) falls to the ground faster than the same weight of some ‘light’ material (such as feathers), science, in the modern sense, was at last born. The basis now was that the universe was true, not the axioms, and therefore that if there was a conflict between observed fact and theory, the theory should be modified. The method became not deduction but induction: starting from observations and elaborating principles from them, and being prepared to fine-tune the principle each and every time a new observation challenged existing ideas.
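Galileo's ramp result can be put in modern notation (anachronistically: Galileo himself reasoned with ratios of distances and times, not with algebra). The distance fallen from rest grows with the square of the elapsed time, and the constant of proportionality contains no term for the body's weight:

```latex
% Uniform acceleration from rest (modern form of Galileo's law
% of fall), with g the acceleration due to gravity:
\[
  d = \tfrac{1}{2}\, g\, t^{2}
  \qquad\Longrightarrow\qquad
  t = \sqrt{\frac{2d}{g}}
\]
% The mass m of the falling body appears nowhere in either
% formula, so (air resistance aside) a pound of lead and a
% pound of feathers dropped from the same height reach the
% ground in the same time -- flatly contradicting the old
% axiom that the 'heavier' material falls faster.
```

Here theory is an induction from measured rolls down the ramp, open to correction by the next measurement, rather than a deduction from an unquestionable axiom.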
It is impossible to overstate the importance of this 180-degree turn in our view of how to study the natural world. The rise of observation and experimentation had been prefigured in the previous few centuries—in the work of Roger Bacon, for example, or in Paracelsus' experiments with diagnosis and treatment in medicine—but now it became the central plank of all scientific work. Its rise coincided with other intellectual movements: Descartes' demonstration that it was possible to believe in a universe which was mechanistic rather than animistic; the suggestion, by the humanists among others, that human beings and not God might be the ‘measure of all things’; and, not least, the feeling that it was not enough to accept the view of the world's geography propounded since ancient times, but that physical exploration, backed by maps based on observation rather than imagination, was possible. In 1605 Francis Bacon propounded what has become known as the ‘scientific method’, setting the pattern for inductive science just as Aristotle's methods had established deductive science some 2,000 years earlier. In the following decades Boyle and Hooke began a systematic exploration of chemistry, removing it far from its mumbo-jumbo image in alchemy; Newton began studying the properties of light and the movement of physical objects in the universe; Newton and (separately) Leibniz developed the calculus, the most sophisticated method of mathematical calculation yet devised; and measuring devices, from barometers to chronometers, from mechanical calculators to marine compasses, were invented, refined, and turned to the purposes of scientific exploration as well as the ‘practical’ uses for which they were created.
From this moment on, at first in the West and then (from the 1900s onwards) in the East and eventually worldwide, science has grown at an exponential rate. At the end of the 18th century (the time of Priestley's work on oxygen and the first thorough-going experiments to harness electricity), a single individual could, with application, learn all available scientific knowledge. By the mid-19th century, each branch of science had grown so fast, and become so complex, that even experts had to rely on libraries, assistants and aides-mémoire even in their own fields. As investigation proceeded, instrument-making kept pace, often itself triggering new directions in research. There was another fundamental change, in place by the end of the 19th century (and signalled, perhaps, by the presentation of the first Nobel prizes). Instead of science being an esoteric and hermetic discipline, conducted by cult-like groups (such as the Pythagoreans in ancient Greece) or by intellectual gurus whose thoughts were considered to elevate them far beyond ordinary mortal understanding (Ptolemy in his day, and Newton in his, spring to mind), scientific knowledge came to be regarded as the property not of individuals but of the entire human race. Discoveries and research-methods were shared; investigation became the work of teams. This co-operative approach (not, it must be admitted, shared by all scientists: there were, and are, a few determined hermits) was matched by the rise of rationalism, and the collapse of the consensus that God had anything to do with the creation, ordering or maintenance of the universe. We may or may not, nowadays, retain our religious beliefs, but for better or worse we live in a world which is perceived in scientific and secular-scientific terms.
There remain many problems. Scientific activity is perceived by many as dangerous, polluting and potentially disastrous. Business considerations have distorted some scientific areas (for example, dictating directions in drugs research or suggesting the patenting of plant or human genomes). Science has become esoteric again, so that there is often little mutual understanding between scientists from different disciplines, let alone between scientists and non-scientists. And most important of all, perhaps, is the thought (alarming to many, both in and out of the ‘scientific community’) that there is, now, no such thing as ‘truth’ in science. Non-Euclidean geometry (discovered in the early 19th century), quantum mechanics and the relativity theories of Einstein in the early years of the 20th century, not to mention current discoveries in every area from astrophysics to genetics, suggest that the best we have at any moment, the best we have ever had, is a temporary working model of the way things are, and that exploration and investigation are endless. Once again, the earlier consensus about the universe and its laws has been turned on its head, and one of the main questions philosophers of science seek to answer is this: if (as now seems apparent) induction from observation, considered as a method of reaching logical ‘truth’, is as flawed as deduction, where does science go next? KMcL
See also chaos theory; philosophy of science; scientific laws; two cultures.
Further reading Colin A. Ronan, Science: its History and Development Among the World's Cultures; K.R. Popper, The Logic of Scientific Discovery; R. Harré, Philosophies of Science.