Reductionism
Revision as of 13:15, 20 July 2007
For much of the 20th century, the dominant approach to science was reductionism – the attempt to explain phenomena by the basic laws of physics and chemistry. The principle has ancient roots: Francis Bacon (1561–1626) quotes Aristotle as declaring "That the nature of everything is best seen in his smallest portions." [1] In many fields, however, reductionist explanations are impractical, and all explanations involve 'high level' concepts. Nevertheless, the reductionist belief has been that the role of science is to explain high-level concepts progressively, by concepts closer and closer to basic physics and chemistry. For example, to explain the behaviour of individuals we might refer to motivational states such as hunger. These reflect features of brain activity that are still poorly understood, but we can investigate, for example, the 'hunger centres' of the brain that house these drives. These centres involve many neural networks – interconnected nerve cells – and each network can be probed in detail. These networks in turn are composed of specialised neurons that can be analysed individually. These nerve cells have properties that are the product of a genetic program activated in development – and so are reducible to molecular biology. However, while behaviour is thus in principle reducible to basic elements, explaining the behaviour of an individual in terms of the most basic elements has little predictive value, because the uncertainties in our understanding are too great.
Measurement
The reductionist approach assigned particular importance to the measurement of quantities; quantifying observations makes them accurately and objectively verifiable, and quantitative predictions are more readily testable than purely qualitative ones. For some things there is a natural scale by which they can be measured, but for many, measurement scales are human constructs. For example, the IQ scale, which purportedly measures intelligence, in fact measures how well an individual performs on certain standardised tests, and how such performance relates to cognitive ability is open to debate. Nevertheless, such measurements are objectively repeatable. Measurements may be tabulated, graphed, mapped, or statistically analysed; these representations of the data often use tools and conventions that are, at a given time, accepted and understood by scientists within a given field. Measurements may need specialized instruments such as thermometers, microscopes, or voltmeters, whose properties and limitations are familiar within the field, and scientific progress is often intimately tied to their development. Measurements also provide operational definitions: a scientific quantity is defined precisely by how it is measured, in terms that enable other scientists to reproduce the measurement. Scientific quantities are often characterized by units of measure, which can be described in terms of conventional physical units. Ultimately, this may involve internationally agreed 'standards'; for example, one second is defined as exactly 9,192,631,770 cycles of the cesium atom's resonant frequency [1]. The scientific definition of a term sometimes differs substantially from its natural-language use; mass and weight overlap in meaning in common use, but have distinct meanings in physics. All measurements are accompanied by the possibility of error, so their uncertainty is often estimated by repeating measurements and seeing by how much the results differ.
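The procedure described above – repeating a measurement and seeing by how much the results differ – can be sketched numerically. The readings below are invented for illustration; the spread of the values estimates the uncertainty of a single reading, and the standard error estimates the uncertainty of their average:

```python
import statistics

# Hypothetical repeated readings of the same quantity (e.g. a length in mm);
# the values are illustrative, not real data.
readings = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.04]

n = len(readings)
mean = statistics.mean(readings)
# The sample standard deviation describes the spread of individual readings.
spread = statistics.stdev(readings)
# The standard error of the mean estimates the uncertainty of the average,
# and shrinks as more repeated measurements are taken.
standard_error = spread / n ** 0.5

print(f"mean = {mean:.3f}, spread = {spread:.3f}, "
      f"standard error = {standard_error:.4f}")
```

Averaging more repetitions reduces the standard error (by the square root of the number of readings), which is why repetition is the standard route to quantifying measurement uncertainty.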
Counts of things, such as the number of people in a nation at a given time, may also have an uncertainty: counts may represent only a sample, with an uncertainty that depends upon the sampling method and the size of the sample.
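A minimal sketch of the sampling uncertainty just described, using invented survey numbers and the standard error of a proportion under simple random sampling (a common textbook approximation, not a method prescribed here):

```python
import math

# Hypothetical survey: 1,000 people sampled from a large population,
# of whom 240 report a given attribute; the numbers are illustrative.
sample_size = 1000
positives = 240

proportion = positives / sample_size
# Standard error of a proportion under simple random sampling.
standard_error = math.sqrt(proportion * (1 - proportion) / sample_size)
# Approximate 95% confidence interval (normal approximation).
margin = 1.96 * standard_error
low, high = proportion - margin, proportion + margin

print(f"estimate = {proportion:.3f} +/- {margin:.3f} "
      f"(95% interval {low:.3f} to {high:.3f})")
```

The uncertainty shrinks with the square root of the sample size, which is why large samples give much narrower intervals than small ones; a biased sampling method, however, adds error that no increase in sample size removes.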