Data stream clustering


In computer science, data stream clustering is defined as the clustering of data that arrive continuously, such as telephone records, multimedia data and financial transactions. Data stream clustering is usually studied under the data stream model of computation, and the objective is, given a sequence of points, to construct a good clustering of the stream using a small amount of memory and time.



History

Data stream clustering has recently attracted attention for emerging applications that involve large amounts of streaming data. For clustering, k-means is a widely used heuristic, but alternative algorithms have also been developed, such as k-medoids, CURE and the popular BIRCH. For data streams, one of the first results appeared in 1980 [1], but the model was formalized in 1998 [2].

Definition

The problem of data stream clustering is defined as:

Input: a sequence of n points in a metric space and an integer k.
Output: k centers in the set of the n points so as to minimize the sum of distances from data points to their closest cluster centers.

This is the streaming version of the k-median problem.
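
In symbols, if the stream consists of points x_1, ..., x_n and C is the chosen set of k centers, the quantity minimized above can be written as

    \mathrm{cost}(C) \;=\; \sum_{i=1}^{n} \min_{c \in C} d(x_i, c),

where d denotes the distance in the underlying metric space.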

Algorithms

STREAM

STREAM is an algorithm for clustering data streams described by Guha, Mishra, Motwani and O'Callaghan [3], which achieves a constant-factor approximation for the k-median problem in a single pass using small space.

Theorem: STREAM can solve the k-median problem on a data stream in a single pass, with time O(n^{1+ε}) and space Θ(n^ε), up to a factor 2^{O(1/ε)}, where n is the number of points and ε < 1/2.

To understand STREAM, the first step is to show that clustering can take place in small space (not caring about the number of passes). Small-Space is a divide-and-conquer algorithm that divides the data, S, into ℓ pieces, clusters each of them (using k-means) and then clusters the centers obtained.


Algorithm Small-Space(S)

  1. Divide S into ℓ disjoint pieces X_1, ..., X_ℓ.
  2. For each i, find O(k) centers in X_i. Assign each point in X_i to its closest center.
  3. Let X' be the O(ℓk) centers obtained in (2), where each center c is weighted by the number of points assigned to it.
  4. Cluster X' to find k centers.

If in Step 2 we run a bicriteria (a, b)-approximation algorithm, which outputs at most a·k medians with cost at most b times the optimum k-median cost, and in Step 4 we run a c-approximation algorithm, then the approximation factor of the Small-Space algorithm is 2c(1+2b) + 2b. Small-Space can also be generalized so that it recursively calls itself i times on a successively smaller set of weighted centers, and it still achieves a constant-factor approximation to the k-median problem.
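
As an illustration only, the following Python sketch mirrors the four steps of Small-Space. It uses scikit-learn's KMeans (which accepts sample weights) as a stand-in for the bicriteria k-median subroutine assumed in the analysis above; the function and parameter names are illustrative, not from the original paper.

    # Illustrative sketch of the Small-Space divide-and-conquer idea.
    # NOTE: scikit-learn's KMeans is only a stand-in for the bicriteria
    # k-median approximation algorithm assumed in the analysis.
    import numpy as np
    from sklearn.cluster import KMeans

    def small_space(S, k, num_pieces, centers_per_piece=None):
        """Cluster S (an array of points) into k centers via weighted two-level clustering."""
        if centers_per_piece is None:
            centers_per_piece = 2 * k           # "O(k)" centers per piece
        pieces = np.array_split(S, num_pieces)  # step 1: divide S into pieces

        centers, weights = [], []
        for X in pieces:                        # step 2: cluster each piece
            km = KMeans(n_clusters=min(centers_per_piece, len(X)), n_init=10).fit(X)
            for j, c in enumerate(km.cluster_centers_):
                centers.append(c)                        # step 3: keep the centers ...
                weights.append(np.sum(km.labels_ == j))  # ... weighted by assigned points

        X_prime = np.asarray(centers)
        w = np.asarray(weights, dtype=float)
        final = KMeans(n_clusters=k, n_init=10).fit(X_prime, sample_weight=w)
        return final.cluster_centers_           # step 4: cluster the weighted centers

    # Example usage on synthetic data:
    # S = np.random.rand(10_000, 2)
    # print(small_space(S, k=5, num_pieces=10))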

The problem with Small-Space is that the number of subsets ℓ into which we partition S is limited, since the intermediate medians in X' must be stored in memory. So, if M is the size of memory, we need to partition S into ℓ subsets such that each subset fits in memory (n/ℓ ≤ M) and such that the weighted ℓk centers also fit in memory (ℓk < M). Such an ℓ may not always exist.

The STREAM algorithm solves the problem of storing intermediate medians and achieves better running time and space requirements. The algorithm works as follows [3] (a sketch in code is given after the steps):

  1. Input the first m points; using the randomized algorithm presented in [3], reduce these to O(k) (say 2k) points.
  2. Repeat the above until we have seen m²/(2k) of the original data points. We now have m intermediate medians.
  3. Using a local search algorithm, cluster these m first-level medians into 2k second-level medians and proceed.
  4. In general, maintain at most m level-i medians and, on seeing m of them, generate 2k level-(i+1) medians, with the weight of a new median being the sum of the weights of the intermediate medians assigned to it.
  5. When we have seen all the original data points, we cluster all the intermediate medians into k final medians, using the primal-dual algorithm [4].
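
The following minimal Python sketch illustrates this buffering scheme. Again, scikit-learn's KMeans stands in for the local-search and primal-dual k-median subroutines of the paper, and all names and parameters are illustrative.

    # Illustrative sketch of STREAM's hierarchical median maintenance.
    # KMeans is a stand-in for the k-median subroutines used in the paper.
    import numpy as np
    from sklearn.cluster import KMeans

    def reduce_to_medians(points, weights, k):
        """Collapse weighted points to at most 2k weighted medians (stand-in subroutine)."""
        n_clusters = min(2 * k, len(points))
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(points, sample_weight=weights)
        new_w = np.array([weights[km.labels_ == j].sum() for j in range(n_clusters)])
        return km.cluster_centers_, new_w

    def stream_k_median(stream, k, m):
        """stream: iterable of points (1-D arrays); m: per-level buffer size."""
        levels = []  # levels[i] = (points, weights) buffered at level i

        def push(level, pts, wts):
            if level == len(levels):
                levels.append(([], []))
            P, W = levels[level]
            P.extend(pts); W.extend(wts)
            if len(P) >= m:  # buffer full: reduce to 2k medians, promote them
                meds, mw = reduce_to_medians(np.asarray(P), np.asarray(W, float), k)
                P.clear(); W.clear()
                push(level + 1, list(meds), list(mw))

        for x in stream:
            push(0, [np.asarray(x)], [1.0])

        # Final step: cluster all surviving intermediate medians into k medians.
        all_p = [p for P, W in levels for p in P]
        all_w = [w for P, W in levels for w in W]
        final = KMeans(n_clusters=k, n_init=10).fit(
            np.asarray(all_p), sample_weight=np.asarray(all_w, float))
        return final.cluster_centers_

    # Example: centers = stream_k_median((np.random.rand(2) for _ in range(5000)), k=3, m=200)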

Other Algorithms

Other well-known algorithms used for data stream clustering are:

  • BIRCH [5]: builds a hierarchical data structure to incrementally cluster the incoming points using the available memory while minimizing the amount of I/O required. The complexity of the algorithm is O(N), since one pass suffices to obtain a good clustering (though results can be improved by allowing several passes); a simplified sketch of its clustering-feature bookkeeping follows this list.
  • COBWEB [6]: an incremental clustering technique that maintains a hierarchical clustering model in the form of a classification tree. For each new point, COBWEB descends the tree, updates the nodes along the way and looks for the best node at which to place the point (using a category utility function).
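
The key ingredient of BIRCH is the clustering feature (CF), a triple (N, LS, SS) holding the count, linear sum and squared sum of the points of a subcluster; it can be updated and merged incrementally in constant time. The simplified sketch below keeps a flat list of CFs with an absorption threshold and omits the CF-tree and BIRCH's later refinement phases; the names are illustrative.

    # Simplified sketch of BIRCH-style clustering features (CF = (N, LS, SS)).
    # A flat list of CFs is used here instead of BIRCH's CF-tree, for brevity.
    import numpy as np

    class CF:
        """Clustering feature: point count, linear sum and squared sum."""
        def __init__(self, x):
            x = np.asarray(x, dtype=float)
            self.n, self.ls, self.ss = 1, x.copy(), float(x @ x)
        def add(self, x):
            x = np.asarray(x, dtype=float)
            self.n += 1; self.ls += x; self.ss += float(x @ x)
        def centroid(self):
            return self.ls / self.n

    def merged_radius(cf, x):
        """Root-mean-square distance from the centroid if cf absorbed point x."""
        n, ls, ss = cf.n + 1, cf.ls + x, cf.ss + float(x @ x)
        c = ls / n
        return float(np.sqrt(max(ss / n - c @ c, 0.0)))

    def birch_like(stream, threshold):
        """Absorb each point into the nearest CF if that CF stays compact,
        otherwise start a new CF (subcluster)."""
        cfs = []
        for x in stream:
            x = np.asarray(x, dtype=float)
            if cfs:
                best = min(cfs, key=lambda cf: np.linalg.norm(cf.centroid() - x))
                if merged_radius(best, x) <= threshold:
                    best.add(x)
                    continue
            cfs.append(CF(x))
        return [cf.centroid() for cf in cfs]

    # Example: birch_like((np.random.rand(2) for _ in range(1000)), threshold=0.1)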

References

  1. ^ J. Munro and M. Paterson. Selection and Sorting with Limited Storage. Theoretical Computer Science, pages 315–323, 1980.
  2. ^ M. Henzinger, P. Raghavan, and S. Rajagopalan. Computing on Data Streams. Digital Equipment Corporation, TR-1998-011, August 1998.
  3. ^ a b c S. Guha, N. Mishra, R. Motwani, L. O'Callaghan. Clustering Data Streams. Proceedings of the Annual Symposium on Foundations of Computer Science, 2000
  4. ^ K. Jain and V. Vazirani. Primal-dual approximation algorithms for metric facility location and k-median problems. Proc. FOCS, 1999
  5. ^ T. Zhang, R. Ramakrishnan, M. Livny. BIRCH: An Efficient Data Clustering Method for Very Large Databases. Proceedings of the ACM SIGMOD Conference on Management of Data, 1996.
  6. ^ D. H. Fisher. Iterative Optimization and Simplification of Hierarchical Clusterings. Journal of AI Research, Vol. 4, 1996.
