Statistical significance

Figure: Plot of the standard normal probability density function.[1]

In statistics, statistical significance is a "term indicating that the results obtained in an analysis of study data are unlikely to have occurred by chance, and the null hypothesis is rejected. When statistically significant, the probability of the observed results, given the null hypothesis, falls below a specified level of probability (most often P < 0.05)."[2] The P-value, which is used to represent the likelihood that the observed results are due to chance, is defined as "the probability, under the assumption of no effect or no difference (the null hypothesis), of obtaining a result equal to or more extreme than what was actually observed."[3]
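
As a concrete illustration of this definition, the following Python sketch (hypothetical numbers, not taken from the sources above) computes a one-sided P-value for a coin-flipping experiment: the probability, assuming the coin is fair (the null hypothesis), of observing at least as many heads as were actually seen.

    from scipy.stats import binom

    # Hypothetical experiment: 100 flips of a coin, 60 heads observed.
    # Null hypothesis: the coin is fair (probability of heads = 0.5).
    n, k = 100, 60

    # One-sided P-value: probability of a result equal to or more extreme
    # than what was observed, i.e. P(X >= 60) under the null hypothesis.
    p_value = binom.sf(k - 1, n, 0.5)  # sf(k - 1) = P(X >= k)
    print(f"P = {p_value:.4f}")        # about 0.028, below the 0.05 threshold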

Hypothesis testing

Usually, the null hypothesis is that there is no difference between two samples with regard to the factor being studied.[4]
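
A two-sample t-test evaluates exactly this kind of null hypothesis. The Python sketch below (synthetic data, chosen only for illustration) tests whether two groups share the same population mean:

    import numpy as np
    from scipy.stats import ttest_ind

    # Synthetic data for two samples; the treatment group is simulated
    # with a genuinely higher mean, so a real difference exists.
    rng = np.random.default_rng(0)
    control   = rng.normal(loc=10.0, scale=2.0, size=30)
    treatment = rng.normal(loc=11.5, scale=2.0, size=30)

    # Null hypothesis: no difference in population means between the groups.
    t_stat, p_value = ttest_ind(treatment, control)
    print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
    # If P < 0.05, the null hypothesis of "no difference" is rejected.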

Statistical errors

Two types of error can occur when deciding whether to reject the null hypothesis:

Type I error (alpha error)

Type I error, also called alpha error, is the rejection of a correct null hypothesis. The probability of this error is usually expressed by the P-value. Usually the null hypothesis is rejected if the P-value, or the chance of a type I error, is less than 5%. However, this threshold may be adjusted when multiple hypotheses are tested.[5]
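
The simplest such adjustment is the Bonferroni correction, of which the cited procedure[5] is a sharper refinement. A minimal Python sketch with hypothetical P-values:

    # Bonferroni correction: with m hypotheses tested, compare each P-value
    # against alpha / m instead of alpha, so that the chance of at least one
    # type I error across all m tests stays below alpha.
    alpha = 0.05
    p_values = [0.003, 0.012, 0.049]   # hypothetical results of m = 3 tests
    m = len(p_values)

    for p in p_values:
        decision = "reject" if p < alpha / m else "retain"   # threshold ~0.0167
        print(f"P = {p:.3f}: {decision} the null hypothesis")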

Type II error (beta error)

Type II error, also called beta error, is the acceptance of an incorrect null hypothesis. This error may occur when the sample size is too small to give the study sufficient power to detect a statistically significant difference.[6][7][8]
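
The link between sample size and beta error can be shown by simulation. The following Python sketch (assumed effect size and noise, purely illustrative) estimates the power of a two-sample t-test, i.e. the probability of correctly rejecting the null hypothesis when a real difference exists; the beta error is one minus this power.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)

    def estimated_power(n_per_group, effect=0.5, sims=2000, alpha=0.05):
        """Monte Carlo estimate of the power of a two-sample t-test."""
        hits = 0
        for _ in range(sims):
            a = rng.normal(0.0, 1.0, n_per_group)      # control group
            b = rng.normal(effect, 1.0, n_per_group)   # group with a real shift
            if ttest_ind(a, b).pvalue < alpha:
                hits += 1
        return hits / sims

    # A small sample is often underpowered for a modest effect:
    print(estimated_power(20))    # roughly 0.3: beta error near 0.7
    print(estimated_power(100))   # roughly 0.94: beta error near 0.06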

Philosophical approaches to error testing

Frequentist method

This approach uses mathematical formulas to calculate the deductive probability (P-value) of an experimental result.[3] It can also generate confidence intervals.
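
A minimal sketch of such a confidence interval, assuming normally distributed data and using the t distribution:

    import numpy as np
    from scipy import stats

    # Synthetic sample, for illustration only.
    rng = np.random.default_rng(2)
    sample = rng.normal(loc=10.0, scale=2.0, size=30)

    mean = sample.mean()
    sem = stats.sem(sample)   # standard error of the mean
    # Frequentist 95% confidence interval for the population mean.
    low, high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)
    print(f"mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")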

A problem with frequentist analyses of P-values is that they may overstate "statistical significance".[9][10]

Likelihood or Bayesian method

Some argue that the P-value should be interpreted in light of how plausible the hypothesis is, based on the totality of prior research and physiologic knowledge.[11][3][12] This approach can generate Bayesian 95% credibility intervals.[13]
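
A minimal Bayesian sketch, reusing the hypothetical coin-flip setting from earlier: with a uniform Beta(1, 1) prior, observing 60 heads in 100 flips gives a Beta(61, 41) posterior for the probability of heads, from which a 95% credibility interval can be read off directly.

    from scipy.stats import beta

    # Posterior after 60 heads and 40 tails with a uniform Beta(1, 1) prior.
    heads, tails = 60, 40
    posterior = beta(1 + heads, 1 + tails)

    # 95% credibility interval: the central 95% of the posterior distribution.
    low, high = posterior.ppf([0.025, 0.975])
    print(f"95% credibility interval for P(heads): ({low:.3f}, {high:.3f})")
    # Roughly (0.50, 0.69): the posterior lies almost entirely above 0.5.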

References

  1. Anonymous (2006). "Normal Distribution", NIST/SEMATECH e-Handbook of Statistical Methods. Gaithersburg, MD: National Institute of Standards and Technology. Retrieved 2009-02-10.
  2. Anonymous. JAMAevidence Glossary. American Medical Association. Retrieved 2009-02-10.
  3. Goodman SN (1999). "Toward evidence-based medical statistics. 1: The P value fallacy". Ann Intern Med 130 (12): 995–1004. PMID 10383371.
  4. Mosteller F, Bailar JC (1992). Medical Uses of Statistics. Boston, MA: NEJM Books. ISBN 0-910133-36-0.
  5. Hochberg Y (1988). "A sharper Bonferroni procedure for multiple tests of significance". Biometrika 75 (4): 800–802. DOI:10.1093/biomet/75.4.800.
  6. Altman DG, Bland JM (1995). "Absence of evidence is not evidence of absence". BMJ 311 (7003): 485. PMID 7647644. PMC 2550545.
  7. Detsky AS, Sackett DL (1985). "When was a 'negative' clinical trial big enough? How many patients you needed depends on what you found". Arch Intern Med 145 (4): 709–12. PMID 3985731.
  8. Young MJ, Bresnitz EA, Strom BL (1983). "Sample size nomograms for interpreting negative clinical studies". Ann Intern Med 99 (2): 248–51. PMID 6881780.
  9. Goodman SN (1999). "Toward evidence-based medical statistics. 1: The P value fallacy". Ann Intern Med 130 (12): 995–1004. PMID 10383371.
  10. Goodman SN (1999). "Toward evidence-based medical statistics. 2: The Bayes factor". Ann Intern Med 130 (12): 1005–13. PMID 10383350.
  11. Browner WS, Newman TB (1987). "Are all significant P values created equal? The analogy between diagnostic tests and clinical research". JAMA 257: 2459–63. PMID 3573245.
  12. Goodman SN (1999). "Toward evidence-based medical statistics. 2: The Bayes factor". Ann Intern Med 130 (12): 1005–13. PMID 10383350.
  13. Banerjee S, Carlin BP, Gelfand AE (2003). Hierarchical Modeling and Analysis for Spatial Data. Boca Raton, FL: Chapman & Hall/CRC. ISBN 1-58488-410-X.