Hierarchical clustering
Hierarchical clustering (also known as numerical taxonomy) is a branch of cluster analysis[1] which treats clusters hierarchically, i.e., as a set of levels. The hierarchy can be constructed using two major approaches, or combinations thereof: agglomerative hierarchical clustering (a bottom-up approach) iteratively merges existing clusters, while divisive hierarchical clustering (a top-down approach) starts with all data in one cluster and splits it iteratively. At each step of the process, a mathematical measure of distance or similarity between clusters (agglomerative) or within clusters (divisive) is computed to determine how to merge or split.
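As a concrete illustration of the agglomerative (bottom-up) procedure described above, the following is a minimal sketch in Python, assuming the SciPy library; the toy data, the average-linkage choice, and the distance threshold are illustrative assumptions, not part of the article.

```python
# Minimal sketch of agglomerative hierarchical clustering with SciPy.
# Data, linkage method, and threshold are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy data: six points in the plane forming two loose groups.
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                   [5.0, 5.0], [5.1, 5.2], [5.2, 4.9]])

# Pairwise Euclidean distances in condensed form.
distances = pdist(points, metric="euclidean")

# Bottom-up merging: each row of Z records one merge step
# (the two cluster indices, the merge distance, the new cluster size).
Z = linkage(distances, method="average")
print(Z)

# Cutting the hierarchy at a distance threshold yields flat clusters.
labels = fcluster(Z, t=2.0, criterion="distance")
print(labels)  # e.g. [1 1 1 2 2 2] for the two groups above
```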
Several different distance and similarity measures can be used, and they generally result in different hierarchies (especially for agglomerative methods, which start out based on local information only), which complicates their interpretation. Nonetheless, hierarchical clustering is more intuitively understandable than flat clustering, and so it enjoys considerable popularity for multivariate analysis of data, e.g. of gene or protein sequences.
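The measure-dependence noted above can be seen directly. The brief sketch below, again assuming SciPy and illustrative data, runs single, complete, and average linkage on the same distances and prints the merge distances, which generally differ between methods and hence produce different hierarchies.

```python
# Sketch: the same distances under different linkage criteria can merge
# clusters at different distances and in different orders, one reason
# the resulting hierarchies differ. Data are an illustrative assumption.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

points = np.array([[0.0], [1.0], [2.5], [6.0], [7.0]])
d = pdist(points)

for method in ("single", "complete", "average"):
    Z = linkage(d, method=method)
    # The third column of Z holds the distance at which each merge occurred.
    print(method, Z[:, 2])
```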
References and notes
- ↑ Hierarchical Clustering: Cluster Analysis. http://www.resample.com/xlminer/help/HClst/HClst_intro.htm
- "Cluster Analysis, also called data segmentation, has a variety of goals. All relate to grouping or segmenting a collection of objects (also called observations, individuals, cases, or data rows) into subsets or "clusters", such that those within each cluster are more closely related to one another than objects assigned to different clusters. Central to all of the goals of cluster analysis is the notion of degree of similarity (or dissimilarity) between the individual objects being clustered."