Theory (mathematics)

Axiomatic or non-axiomatic

There are two possible approaches to mathematics, called by R. Feynman "the Babylonian tradition" and "the Greek tradition". They answer differently the question of whether some mathematical facts are more fundamental than other, more consequential facts. The same approaches apply to any theory, mathematical or not.

The Babylonian (non-axiomatic) tradition treats a theory as a network whose nodes are facts and whose connections are derivations. If some facts are forgotten, they can probably be derived from the others.

The Greek (axiomatic) tradition treats a theory as a tower of more consequential facts, called theorems, grounded on a basis of more fundamental facts, called axioms. If all theorems are forgotten, they can surely be derived from the axioms. The axioms are few and simple, so as not to be forgotten.

Monotonic or non-monotonic

The distinction between these two approaches is closely related to the distinction between monotonic and non-monotonic logic. These answer differently the question of whether a fact can be retracted in the light of new evidence. Monotonic logic answers in the negative; non-monotonic logic answers in the affirmative.

Non-monotonic logic is used routinely in everyday life and research. An example: "being a bird, this animal should fly", but the bird may turn out to be a penguin. Another example: "the grass is wet, therefore it rained", but the cause may turn out to be a sprinkler.
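
Non-monotonic (default) reasoning is easy to sketch in code. The following is a minimal illustration, not taken from the article; the function and fact names are invented for the example.

    # A minimal sketch of non-monotonic (default) reasoning, for illustration.
    # The default rule "birds fly" is applied unless an exception is known,
    # and its conclusion is retracted when new evidence arrives.

    def conclude_flies(animal, facts):
        """Default rule: a bird flies, unless an exception is known."""
        if "bird" not in facts.get(animal, set()):
            return None                      # the rule does not apply
        if "penguin" in facts.get(animal, set()):
            return False                     # the exception defeats the default
        return True                          # default conclusion, retractable

    facts = {"tweety": {"bird"}}
    print(conclude_flies("tweety", facts))   # True  (default conclusion)

    facts["tweety"].add("penguin")           # new evidence arrives
    print(conclude_flies("tweety", facts))   # False (the old conclusion is retracted)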

The non-axiomatic approach is flexible. When needed, some old facts can be retracted, some new facts added, and some derivations changed accordingly. Nowadays this approach is widely used outside mathematics, and only marginally within mathematics (so-called informal mathematics).

The axiomatic approach is inflexible. A theorem cannot be retracted without removing (or replacing) at least one axiom, which usually has dramatic consequences for many other theorems. Nowadays this approach is widely used in mathematics.

The non-axiomatic approach is well suited when new evidence often comes from the outside. The axiomatic approach is well suited for a theory that advances only by extracting new consequences from an immutable list of axioms. It may seem that such a development must be dull. Surprisingly, this is an illusion. Being inflexible in some sense, an axiomatic theory can be very flexible in another sense (see below).

The computer metaphor

A conceptual metaphor helps to understand one conceptual domain in terms of another. For example, the desktop metaphor treats the monitor of a computer as if it were the user's desktop, which helps a user not accustomed to computers. Nowadays many people are more accustomed to computers than to mathematics. Thus, analogies with computers may help to understand mathematics. Such analogies are widely used below.

Flexible or inflexible

In the 1960s a computer was an electronic monster able to read simple instructions, stipulated by the hardware, from a punch tape and execute them quickly, thus performing a lot of boring calculations. Nowadays some parents complain that personal computers are too fascinating. However, without software a personal computer is only able to read (say, from a compact disc) and execute instructions stipulated by the hardware. These instructions are now as technical as before: simple arithmetical and logical operations, loops, conditional execution, etc. A computer is dull, be it a monster of the 1960s or a nice-looking personal computer, unless programmers develop fascinating combinations (called programs) of these technical instructions.

For a programmer, the instruction set of a given computer is an immutable list. The programmer can neither add new elements to this list nor modify existing ones. In this sense the instruction set is inflexible. New programs are only new combinations of the given elements. Does this mean that program development is dull? Not at all! A good instruction set is universal: a competent programmer feels pretty free to implement any well-understood algorithm, provided that time permits (which is usually the most problematic constraint) and that the memory and speed of the computer are sufficient (which is usually less problematic). In this sense a good instruction set is very flexible.
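
To make the analogy concrete, here is a toy machine with a tiny, fixed instruction set, sketched for illustration only (the instruction names and the example program are invented here): programs are nothing but combinations of these few instructions, yet the set already suffices for loops and arithmetic.

    # A toy machine with a small, immutable instruction set (illustrative only).
    # Instructions: set, add, dec, jnz (jump if nonzero), halt.

    def run(program, registers):
        pc = 0                                   # program counter
        while True:
            op, *args = program[pc]
            if op == "set":    registers[args[0]] = args[1]; pc += 1
            elif op == "add":  registers[args[0]] += registers[args[1]]; pc += 1
            elif op == "dec":  registers[args[0]] -= 1; pc += 1
            elif op == "jnz":  pc = args[1] if registers[args[0]] != 0 else pc + 1
            elif op == "halt": return registers

    # A program (a combination of the given instructions) summing 1 + 2 + ... + 5.
    program = [
        ("set", "n", 5), ("set", "s", 0),
        ("add", "s", "n"),                       # s += n
        ("dec", "n"),                            # n -= 1
        ("jnz", "n", 2),                         # loop while n != 0
        ("halt",),
    ]
    print(run(program, {}))                      # {'n': 0, 's': 15}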

Likewise a mathematician working within a given theory faces its axioms as an immutable list. However, this list can be universal in the sense that a competent mathematician feels pretty free to express any clear mathematical idea. In this sense some axiomatic systems are very flexible.

Universal or specialized

For a user, the software of a computer is first of all a collection of applications (games, web browsers, media players, word processors, image editors, etc.). All applications function within an operating system (Windows, MacOS, Linux, etc.). The operating system is a universal infrastructure behind various specialized applications. Each application deals with the files relevant to it. The operating system maintains files in general, and catalogs (directories) containing files and possibly other catalogs.

Likewise, set theory is a universal infrastructure behind various specialized mathematical theories (algebra, geometry, analysis, etc.). Each specialized mathematical theory deals with its relevant objects, relations and sets. Set theory deals with sets in general, possibly containing other sets, and reduces objects and relations to sets.
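
As a small illustration of this reduction (a standard set-theoretic construction, sketched here in Python rather than quoted from the article), an ordered pair can be encoded as the set {{a}, {a, b}}, and a relation as a set of such pairs:

    # Kuratowski encoding: the ordered pair (a, b) is the set {{a}, {a, b}};
    # a relation is then just a set of such pairs.  Illustrative sketch only.

    def pair(a, b):
        return frozenset({frozenset({a}), frozenset({a, b})})

    # The relation "<" on {1, 2, 3}, reduced to a set of set-encoded pairs.
    less_than = {pair(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a < b}

    print(pair(1, 2) in less_than)   # True:  1 < 2
    print(pair(2, 1) in less_than)   # False: the encoding distinguishes (1,2) from (2,1)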

Univalent or multivalent

Plane geometry (also called "planar geometry") is the part of solid geometry that restricts itself to a single plane ("the plane") treated as a geometric universe. The question "which plane?" is inappropriate, since planes do not differ in their geometric properties. Any two planes α, β are isomorphic, that is, there exists an isomorphism f between α and β. Treating α and β as sets of points, one defines an isomorphism as an invertible (one-to-one and onto) map f : α → β preserving all primitive relations. Namely: f maps lines onto lines; the distance between f(A) and f(B) in β is equal to the distance between A and B in α; etc. The same is required of the inverse map f⁻¹ : β → α.
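
For a finite analogue of this definition (an illustrative sketch supplied here, not the geometric case itself), one can check mechanically whether a bijection between two small "spaces" preserves their distance function:

    # Illustrative sketch: an "isomorphism" between two finite structures is a
    # bijection preserving the primitive relation, here a distance function
    # stored with unordered pairs (frozensets) as keys; f is assumed bijective.

    from itertools import combinations

    def is_isomorphism(f, points, dist_a, dist_b):
        return all(dist_a[frozenset((p, q))] == dist_b[frozenset((f[p], f[q]))]
                   for p, q in combinations(points, 2))

    dist_a = {frozenset("AB"): 1, frozenset("AC"): 2, frozenset("BC"): 3}
    dist_b = {frozenset("XY"): 1, frozenset("XZ"): 2, frozenset("YZ"): 3}

    print(is_isomorphism({"A": "X", "B": "Y", "C": "Z"}, "ABC", dist_a, dist_b))  # True
    print(is_isomorphism({"A": "Y", "B": "X", "C": "Z"}, "ABC", dist_a, dist_b))  # False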

The axioms of plane Euclidean geometry leave no freedom: they determine uniquely all geometric properties of the plane. More exactly: all Euclidean planes are mutually isomorphic. In this sense we have "the" Euclidean plane. In Bourbaki's terms, plane Euclidean geometry is a univalent theory. In contrast, the axioms of a linear space (also called a vector space) leave some freedom: a linear space may be one-dimensional, two-dimensional, three-dimensional, four-dimensional and so on (infinite dimension is also possible). The corresponding theory is multivalent.

According to Bourbaki, the study of multivalent theories is the most striking feature which distinguishes modern mathematics from classical mathematics.

A similar idea occurs in mathematical logic: a theory is called categorical if all its models are mutually isomorphic. However, for Bourbaki a theory is embedded into set theory, while in logic a theory is standalone (embedded into first-order logic).

Motivated or indiscriminate

Monkeys could type into a computer a sequence of hardware instructions; the computer could execute them; but the result of such "programming" has almost no chance of being fascinating or useful. Fascinating computer games reflect human predilections. Useful programs reflect human needs. A computer is dull for humans unless its software reflects human life in one way or another.

Likewise, a theorem is of no interest for humans unless it is motivated in one way or another by human life. The motivation may be quite indirect; many theorems "only" help to prove other theorems, many are appreciated "only" for their aesthetic value, etc. But some kind of motivation is necessary. An indiscriminate stream of logical consequences of the axioms is not publishable in the mathematical literature.

Note that "a theorem" does not mean "a motivated theorem", "an important theorem" etc., not even "an already discovered theorem". All theorems are just an indiscriminate stream of logical consequences of the axioms, including the axioms themselves.

Theorems of a theory are, by definition, statements that follow from the given axioms according to the given rules (called by different authors inference rules, derivation rules, deduction rules, transformation rules).
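
A toy illustration (invented here in the spirit of the computer analogy, not part of the original text): three axioms and one inference rule, modus ponens, already generate theorems mechanically.

    # Toy deductive system, for illustration only: axioms plus modus ponens.
    # A statement is either a letter ("p") or an implication ("p->q").

    axioms = {"p", "p->q", "q->r"}
    theorems = set(axioms)                 # the axioms themselves count as theorems

    changed = True
    while changed:                         # apply the rule until nothing new appears
        changed = False
        for s in list(theorems):
            if "->" in s:
                premise, conclusion = s.split("->", 1)
                if premise in theorems and conclusion not in theorems:
                    theorems.add(conclusion)
                    changed = True

    print(sorted(theorems))                # ['p', 'p->q', 'q', 'q->r', 'r']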

Rigor or intuition

Mathematics as a cultural process is not bound by monotonic logic. For example, until 1872 it was treated as an intuitively evident fact that every curve in the plane has a tangent line everywhere, except perhaps at a discrete set of points. However, this "fact" was refuted and retracted. "A century later, we have seen so many monsters of this sort that we have become a little blasé. But the effect produced on the majority of nineteenth century mathematicians ranged from disgust to consternation." (Bourbaki, page 311)

Since then, intuition has not been a valid argument in mathematical proofs. A mathematical theory is bound by monotonic logic. If needed, mathematics as a cultural process can retract an axiomatic theory as a whole (rather than some of its theorems) and accept another axiomatic theory. (Moreover, in a remote perspective the axiomatic approach itself could be abandoned.)

Sharp or fuzzy; real or ideal

A fair coin is tossed 1000 times; can it happen at random that heads is obtained all 1000 times? Émile Borel, a famous mathematician, answers in the negative, with an important reservation: "Such is the sort of event which, though its impossibility may not be rationally demonstrable, is, however, so unlikely that no sensible person will hesitate to declare it actually impossible." (Borel, page 9)

Why "may not be rationally demonstrable"? Why does this statement remain outside the list of Borel's theorems in probability theory?

Mathematical truth is sharp, not fuzzy. Every mathematical statement is assumed to be either true or false (even if no one is able to decide which) rather than "basically true", "true for all practical purposes", etc. If n heads out of n tosses is an impossible event for n = 1000, then there must exist the least such n (be it known or not). Say, 665 heads can occur, but 666 heads cannot. Then, assuming that 665 heads have just appeared, we see that the next toss is guaranteed to give tails, which contradicts the assumed memoryless behavior of the coin.
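
In sharp mathematical terms the event is merely improbable, not impossible: its probability is exactly 2⁻¹⁰⁰⁰. A quick check of the order of magnitude (an illustrative computation, not from the article):

    # The probability of 1000 heads in 1000 tosses of a fair coin is exactly
    # 2**-1000: positive, hence mathematically possible, though absurdly unlikely.

    from fractions import Fraction

    p = Fraction(1, 2**1000)     # the exact probability
    print(p > 0)                 # True
    print(len(str(2**1000)))     # 302 -- 2**1000 has 302 digits, so p is about 10**(-301)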

Another example. The number 4 can be written as a sum of units: 1+1+1+1. Can the number 2¹⁰⁰⁰ be written as a sum of units? Mathematics answers in the affirmative. Indeed, if you can write, say, 2⁶⁶⁵ as a sum of units, then you can do it twice (connecting the two copies by a plus sign), thus writing 2⁶⁶⁶; and so on, up to 2¹⁰⁰⁰. Complaints about limited resources, appropriate in the real world, are inappropriate in the imaginary, highly idealized mathematical universe.

Formal or formalizable

An example. If x = y then x + z = y + z, since adding the same quantity to both sides of an equality preserves it. This simple algebraic argument can be checked by a human or even a computer. There is no need to understand the meaning of the statement; it is sufficient to know the relevant formal rules. The statement follows from the given rules only, rigorously, without explicit or implicit use of intuition. A formal proof guarantees that a theorem will never be retracted.
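
Such a step can indeed be checked mechanically. Here is an illustrative sketch using the sympy library (one possible symbolic checker, not a tool mentioned in the article):

    # Mechanically checking the algebraic step "if x = y then x + z = y + z".

    import sympy as sp

    x, y, z = sp.symbols("x y z")
    step = sp.Eq(x + z, y + z)           # the claimed conclusion
    print(step.subs(x, y))               # True: substituting x = y makes it an identity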

A universal formal language, conceived by Gottfried Leibniz around 1685, "capable of expressing all human thoughts unambiguously, of strengthening our power of deduction, of avoiding errors by a purely mechanical effort of attention" [Bourbaki, p. 302], was partially implemented in mathematics two centuries later, especially as a formal language of set theory. In the form introduced by Bourbaki this language contains four logical signs, three specific signs, and letters. Its rules are considerably more complicated than those of elementary algebra.

However, this formal language is never used by a typical mathematician. Likewise, most programmers never use the instructions of the computer hardware. Instead, they develop applications in appropriate programming languages (BASIC, Java, C++, Python, etc.). A statement of a programming language usually corresponds to many hardware instructions; the correspondence is established by special programs (compilers and interpreters). A programming language is geared to programmers; hardware instructions are geared to hardware.

The gap between the language of mathematical texts (books and articles) and the formal language of set theory is even wider than the gap between a programming language and hardware instructions. It is as wide as the gap between pseudocode and Turing machines.

A programmer, when coding a program, knows that the computer does exactly what it is told to do. In contrast, an author or speaker, when communicating with other people, can be ambiguous, make small errors, and still expect to be understood. Pseudocode is a description of an algorithm, intended for human reading, that can be converted into code by any competent programmer. Code is formal; pseudocode is formalizable.

Likewise, communication between mathematicians is made in a formalizable language. "In practice, the mathematician ... is content to bring the exposition to a point where his experience and mathematical flair tell him that translation into formal language would be no more than an exercise of patience (though doubtless a very tedious one)." (Bourbaki, page 8)

A mathematician, using a formalizable language, need not know all technical details of the underlying formal language. Many equivalent formal languages may be used interchangeably. Likewise, a single pseudocode can be used when coding in different programming languages, as long as these are not too far from each other.

Axiomatic or axiomatizable

Two different finite lists of vectors can generate (by summation) the same infinite set of vectors. Similarly, different finite lists of axioms can be used for generating the same infinite set of theorems.

A mathematician, working within an axiomatic theory, need not remember the list of axioms in full detail. It is sufficient to remember a list of theorems that contains all axioms or, more generally, such that each axiom follows easily from these theorems. (Here, as usual, "theorems" are treated inclusively, as "axioms and theorems".)

Two lists of axioms are called equivalent if every axiom of each list follows from the axioms of the other list. Such lists of axioms may be used interchangeably, since they generate the same theorems.

Formalizable in reality or in principle

The integer 1 is a very simple example of a mathematical object that can be defined in the language of set theory. Its definition can be translated from this formalizable language into any corresponding formal language, in particular into the formal language introduced by Bourbaki. However, the length of the resulting formal definition is 4,523,659,424,929 signs; it would take about 4000 gigabytes of computer memory! More complicated mathematical definitions are much longer. Clearly, such formalization is possible only in principle, similarly to the possibility of writing 2¹⁰⁰⁰ as a sum of units.

This formal language is so verbose because of substitution. Here is an algebraic example: substituting one short expression into another already gives an expression of length 64, containing 8 occurrences of a. Substituting, in turn, a short expression in u for a gives an expression of length 152, containing 24 occurrences of u. And so on.
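
The blow-up is easy to reproduce; the computation below is illustrative and does not use the article's specific formulas. Substituting an expression for every occurrence of a variable, again and again, makes the text grow exponentially.

    # Repeated textual substitution without abbreviations: exponential growth.

    expr = "a"
    for step in range(1, 6):
        expr = expr.replace("a", "(a+a)")        # substitute (a+a) for every a
        print(step, len(expr), expr.count("a"))  # length and number of occurrences

    # Output:
    # 1 5 2
    # 2 13 4
    # 3 29 8
    # 4 61 16
    # 5 125 32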

A formal language intended for practical use stipulates some kind of abbreviation. In particular, the hardware instructions of any computer stipulate subroutine calls (in some form). A subroutine of 300 instructions can be called 20 times from another subroutine of 300 instructions; thus, 6300 operations are performed by a program of only 600 instructions. The second subroutine can in turn be called many times, and so on.

There exist programming languages intended for theoretical rather than practical use; a universal Turing machine gives an example. Such formal languages usually do not stipulate abbreviations. Accordingly, a practically useful algorithm, translated into such a formal language, gives an exponentially long program, useful in principle but useless in practice.

There exist also formal mathematical languages intended for practical use, for example, Mizar. These stipulate abbreviations.

The formalizable (but not formal) language of mathematical texts stipulates several kinds of abbreviations described below.

Definitions

Mathematical definitions are very diverse.

A definition may be just an abbreviation local to a single calculation, like this: "denoting a + b + c for convenience by s, we have...".

A single definition may embed a whole specialized mathematical theory into the universal set-theoretic framework, like this:

"A Euclidean space consists, by definition, of three sets, whose elements are called points, lines and planes respectively, and six relations, one called betweenness, three called containment and two called congruence, satisfying the following conditions: ..."

A definition may be given mostly for speaking more concisely, like this: "in a triangle, the altitude of a vertex is, by definition, its distance to the line through the other two vertices".

A definition may introduce a new revolutionary concept, like this: "the derivative of a function at a point is, by definition, the limit of the ratio..."

"Some mathematicians will tell you that the main aim of their research is to find the right definition, after which their whole area will be illuminated. ... For other mathematicians, the main purpose of definitions is to prove theorems..." [The Companion, pages 74-75]

Complete or incomplete

A theory is called complete if every statement in its language can be proved or disproved. In other words: if such a statement is not a theorem, then its negation is necessarily a theorem.

A statement (in the language of the given theory) is called independent (in other words, undecidable) in this theory if it is not a theorem and its negation is not a theorem either.

Independent statements appear naturally in multivalent theories. For example, in linear algebra the statement "three vectors cannot be linearly independent" is neither provable nor disprovable, since the dimension of a linear space may be 2 or 3 (or anything else). Thus, multivalent theories are usually incomplete.
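
To see the model dependence concretely (an illustrative computation using numpy, supplied here): in a 2-dimensional space any three vectors are linearly dependent, while in a 3-dimensional space they need not be.

    # "Three vectors cannot be linearly independent" is true in a 2-dimensional
    # space and false in a 3-dimensional one.  Illustrative only.

    import numpy as np

    three_vectors_in_2d = np.array([[1, 0], [0, 1], [1, 1]])
    three_vectors_in_3d = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

    print(np.linalg.matrix_rank(three_vectors_in_2d))  # 2: the vectors are dependent
    print(np.linalg.matrix_rank(three_vectors_in_3d))  # 3: the vectors are independent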

What about univalent theories? Seemingly, they should be complete. For example, plane Euclidean geometry should prove all geometric statements that hold on the Euclidean plane, and only such statements. Similarly, the arithmetic of natural numbers should prove all true arithmetical statements about natural numbers, and only these. Unexpectedly, the situation is far more complicated than these naive ideas suggest; see the next sections.

Generating all theorems; decidable or undecidable

It is impossible to list all theorems, since they are infinitely many. However, an endless algorithmic process can generate theorems, only theorems, and all theorems in the sense that every theorem will be generated, sooner or later. (In more technical words: the set of all theorems is recursively enumerable.)

An open question (in a mathematical theory) is a statement neither proved nor disproved. It is possible (in principle, though not necessarily in practice) to run the theorem-generating algorithm, waiting for one of two events: either the given statement appears among the generated theorems, or its negation does; in both cases the (formerly) open question is decided. If the theory is complete, this must happen sooner or later.

A theory is called decidable if there exists an algorithm that, given an arbitrary statement (in the language of the theory) decides whether it is a theorem or not. (In more technical words: if the set of all theorems is recursive.)

The argument above shows that a complete theory is decidable.
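
A minimal sketch of that argument (generate_theorems and negation are hypothetical placeholders for a concrete theory, not functions defined in the article):

    # Deciding a statement by enumerating theorems.  Sketch only.

    def decide(statement, generate_theorems, negation):
        for theorem in generate_theorems():      # endless enumeration of theorems
            if theorem == statement:
                return True                      # the statement is a theorem
            if theorem == negation(statement):
                return False                     # its negation is a theorem
        # for a complete theory this line is never reached:
        # one of the two statements must eventually appear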

A theory as a crystal ball?

Imagine a theory able to describe the whole of discrete mathematics (set theory?) or, at least, arbitrary algorithmic computations, in other words, formal machines (like computers, but with unlimited resources). Assume that the theorems include all true statements about these machines, and no wrong statements about them. Then a formal machine can use this theory as a "crystal ball" for predicting the future of any formal machine; in particular, for deciding whether a given machine will eventually halt or will run forever (the so-called halting problem). To this end it runs the theorem-generating algorithm until either the statement "this machine eventually halts" or its negation appears among the theorems.

In order to predict the future of a formal machine Y, a formal machine X needs the code of Y. This does not mean that the code of Y must be inscribed into (and therefore be shorter than) the code of X. Instead, X can generate the code of Y. In this way, a single machine X can predict the future of infinitely many machines Y, and moreover, of all possible machines Y, generating their codes in an endless loop.

Can X predict its own future? It may seem that the affirmative answer is ensured by the argument above. However, how can X know which Y is identical to X? In other words, how can X know (that is, be able to generate) its own code?

It is a well-known and nontrivial fact that some programs, so-called quine programs, can generate their own texts. It may seem to be a trick, but in fact it is a form of self-replication, and it uses a patent of life that is 4×10⁹ years old: a cell replicates itself using its DNA, and also replicates the DNA itself.
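
Here, for illustration, is one of the shortest Python quines (any self-printing program would do); running it prints exactly its own three lines.

    # A quine: this program prints its own source text.
    s = '# A quine: this program prints its own source text.\ns = %r\nprint(s %% s)'
    print(s % s)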

Using the quine technique, a machine X can predict its own future. Is this paradoxical? Seemingly not: X is a deterministic machine; it has no free will. But nevertheless it is paradoxical, since we may program X as follows (sketched in code after the list):

  • predict your own future: whether you will eventually halt, or will run forever;
  • if "halt" is predicted, enter an infinite loop;
  • otherwise (if "run forever" is predicted), halt.

We get a paradox, which means that the assumptions made at the beginning of this section cannot be satisfied. An axiomatic theory cannot be such that its theorems include all true statements about formal machines and no wrong statements about them! This important conclusion is closely related to the famous Gödel theorem.[1]

Notes

  1. The first incompleteness theorem of Gödel, somewhat stronger than this conclusion, states that a consistent and complete theory cannot contain Robinson arithmetic.