Probability
Probability is a number representing an estimate of how likely an event is, ranging from 1.0, representing certainty, down to 0, representing impossibility.
Probability is the topic of probability theory, a branch of mathematics concerned with analysis of random phenomena. Like algebra, geometry and other parts of mathematics, probability theory has its origins in the natural world. Humans routinely deal with incomplete and/or uncertain information in daily life: in decisions such as crossing the road ("will this approaching car respect the red light?"), eating food ("am I certain this food is not contaminated?"), and so on. Probability theory is a mathematical tool intended to formalize this ubiquitous mental process. The probability concept is a part of this theory, and is intended to formalize uncertainty.
There are three basic ways to think about the probability concept:
- Bayesian probability.
- Frequentist probability.
- Axiomatic probability (Kolmogorov's axioms).
Bayesian probability
In this approach, probability is taken as a measure of how reasonable a belief is in light of experience or observations. It is based on a rigorous relationship between what are called conditional probabilities and ordinary (unconditional) probabilities. It is thus not simply an intuitive or educated "guess", but something much more specific and precise.
Example of the Bayesian viewpoint
How likely is it that it will rain today? If I'm inside a room with no windows and cannot look outside to see whether there are any clouds in the sky or whether the wind is blowing, then I do not have this information available, and cannot use it to inform my estimate of how likely it is that it is going to rain. But if, of the last 100 cloudy days I've experienced, I've noticed that it rained on 20 of them, while on days without a cloud in the sky it has rained only five times (because a storm blew in later in the day), then I will conclude that rain is more likely on cloudy days. In fact, this can be made precise using a formula known as Bayes' theorem, which expresses the probability of rain given that it is cloudy in terms of the probability that a rainy day is cloudy, the probability that it will rain on a given day, and the probability that a given day will be cloudy (all of which I can estimate by direct observation).
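As a rough sketch of how that calculation goes (the numbers below are illustrative assumptions consistent with the story above, not measured data), Bayes' theorem reads P(rain | cloudy) = P(cloudy | rain) × P(rain) / P(cloudy):

    # Illustrative, assumed estimates (not real weather data).
    p_rain = 0.10               # probability that it rains on a given day
    p_cloudy = 0.40             # probability that a given day is cloudy
    p_cloudy_given_rain = 0.80  # probability that a rainy day is cloudy

    # Bayes' theorem: P(rain | cloudy) = P(cloudy | rain) * P(rain) / P(cloudy)
    p_rain_given_cloudy = p_cloudy_given_rain * p_rain / p_cloudy
    print(p_rain_given_cloudy)  # 0.2, i.e. rain on about 20 of every 100 cloudy days

With these assumed numbers the result, 0.2, matches the "rained on 20 of the last 100 cloudy days" observation in the story.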
Frequentist probability
In this approach one views the probability of an outcome as the proportion of identical (or as nearly identical as we can manage) experiments that produce that outcome.
Example of the frequentist viewpoint
The classic example here is flipping a coin. If out of 1000 coin flips, 501 are "heads" and 499 are "tails", a frequentist will say that (based on this experiment) the probability of heads is .501. Now, if the coin is truly fair, then if we flip it 10,000 times (or 100,000 times), the proportion of heads will come even closer to .5. The .501 we derived by flipping our coin 1000 times is only an estimate of the true probability. The difficulty is that we can only carry out experiments a finite number of times, so the frequentist approach doesn't tell us exactly what the probability should be, either.
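One way to see this convergence is to simulate it; the sketch below (a minimal Python example, assuming a fair coin whose true probability of heads is .5) prints the observed proportion of heads for increasingly long runs of flips:

    import random

    def proportion_of_heads(flips):
        """Flip a simulated fair coin `flips` times and return the fraction of heads."""
        heads = sum(random.random() < 0.5 for _ in range(flips))
        return heads / flips

    for n in (1000, 10000, 100000):
        print(n, proportion_of_heads(n))
    # Typical output: the proportion drifts closer and closer to 0.5 as n grows,
    # but after any finite number of flips it is still only an estimate.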
The axiomatic approach
Neither the Bayesian nor the frequentist approach really tells us how to compute probabilities (though we can estimate them). But just as importantly, they don't really give us a satisfactory explanation of what probability is. The axiomatic approach takes a different tack. Instead of focusing on the question "What is probability?" we step back and ask "How does probability work?" The set of rules we expect probability to follow is known as Kolmogorov's axioms. But the ultimate justification for this approach rests on experience, too. If Kolmogorov's axioms led us to conclude that coins never come up tails, then we would naturally conclude that something is wrong. Fortunately, the results we derive by applying these axioms accord very nicely with experience. We can even derive Bayes' theorem as a consequence, and we can show that in a large number of trials of an experiment, the frequency of one outcome (often called "success") will give us a good estimate of the probability, and that estimate will become better and better as the number of trials increases, or at least it will do so on the average.
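Stated informally (writing P(A) for the probability of an event A drawn from a sample space S), the axioms require that:
- P(A) ≥ 0 for every event A (probabilities are never negative);
- P(S) = 1 (it is certain that some outcome in the sample space occurs);
- P(A or B) = P(A) + P(B) whenever A and B are mutually exclusive, and likewise for any countable collection of mutually exclusive events.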
Example of the axiomatic approach
If the probability that a card (drawn at random from a standard deck of cards) is a heart is 0.25, and the probability that it is a spade is also 0.25, and if I know that being a heart and being a spade are mutually exclusive possibilities (i.e., a card cannot be both), then the probability that it is a heart or a spade is 0.25 + 0.25 = 0.5.
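That additivity rule can also be checked by brute force; the short Python sketch below simply enumerates a standard 52-card deck and counts the relevant cards (the suit and rank labels are of course just illustrative names):

    # Enumerate a standard 52-card deck and check the additivity rule directly.
    suits = ["hearts", "spades", "diamonds", "clubs"]
    ranks = range(1, 14)  # ace (1) through king (13)
    deck = [(rank, suit) for suit in suits for rank in ranks]

    p_heart = sum(1 for _, suit in deck if suit == "hearts") / len(deck)
    p_spade = sum(1 for _, suit in deck if suit == "spades") / len(deck)
    p_heart_or_spade = sum(1 for _, suit in deck if suit in ("hearts", "spades")) / len(deck)

    print(p_heart, p_spade, p_heart_or_spade)  # 0.25 0.25 0.5
    assert p_heart_or_spade == p_heart + p_spade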
More technical information
- Bayes' theorem
- principle of maximum entropy
- Probability distributions
- Kolmogorov's axioms
- probability theory