Granular computing

Granular computing is an emerging computing paradigm of information processing. It concerns the processing of complex information entities called information granules, which arise in the process of data abstraction and derivation of knowledge from information. Generally speaking, information granules are collections of entities that usually originate at the numeric level and are arranged together due to their similarity, functional adjacency, indistinguishability, coherency, or the like.

At present, granular computing is more a "theoretical perspective" than a coherent set of methods or principles. As a theoretical perspective, it encourages an approach to data that recognizes and exploits the knowledge present in data at various levels of resolution or scales. In this sense, it encompasses all methods which provide flexibility and adaptability in the resolution at which knowledge or information is extracted and represented.

Types of granulation

As mentioned above, "granular computing" is not an algorithm or process; there is no particular method called "granular computing". Rather, it is an approach to looking at data that recognizes that different and interesting regularities in the data can appear at different levels of granularity, much as different features become salient in satellite images of greater or lesser resolution. In a low-resolution satellite image, for example, one might notice interesting cloud patterns representing cyclones or other large-scale weather phenomena, while in a higher-resolution image one misses these large-scale atmospheric phenomena but instead notices smaller-scale phenomena, such as the interesting pattern that is the streets of Manhattan. The same is generally true of all data: at different resolutions or granularities, different features and relationships emerge. The aim of granular computing is ultimately to take advantage of this fact in designing more effective machine learning and reasoning systems.

There are several types of granularity that are often encountered in data mining and machine learning, and we review them below:

Value granulation (discretization/quantization)

One type of granulation is the quantization of variables. It is very common in data mining or machine learning applications for the resolution of variables to be decreased in order to extract meaningful regularities. An example would be a variable such as "outside temperature" (temp), which in a given application might be recorded to several decimal places of accuracy (depending on the sensing apparatus). However, for purposes of extracting relationships between "outside temperature" and, say, "number of health club applications" (club), it will generally be advantageous to quantize "outside temperature" into a smaller number of intervals.
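As a minimal sketch of such value granulation, the following Python fragment bins a continuous temperature reading into a handful of coarse intervals; the readings, bin boundaries, and labels are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical hourly temperature readings (°F), recorded at high precision.
temp = np.array([61.37, 80.02, 80.71, 95.48, 72.19, 66.83, 88.90, 79.55])

# Coarse interval boundaries chosen from prior domain knowledge: the analyst
# does not expect sub-degree differences to matter for club applications.
boundaries = np.array([65.0, 75.0, 85.0])
labels = ["cold", "mild", "warm", "hot"]

# np.digitize maps each reading to the index of the interval it falls into.
granules = [labels[i] for i in np.digitize(temp, boundaries)]

for t, g in zip(temp, granules):
    print(f"{t:6.2f} °F -> {g}")
```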

Motivations

There are several interrelated reasons for granulating variables in this fashion:
* Based on prior domain knowledge, we do not expect that minute variations in temperature (e.g., the difference between 80°F and 80.7°F) could have an influence on behaviors driving the number of health club applications. For this reason, any "regularity" which our learning algorithms might detect at this level of resolution would have to be "spurious", an artifact of overfitting. By coarsening the temperature variable into intervals whose differences we "do" anticipate (based on prior domain knowledge) might influence the number of health club applications, we eliminate the possibility of detecting these spurious patterns. Thus, in this case, reducing resolution is a method of controlling overfitting.
* By reducing the number of intervals in the temperature variable (i.e., increasing its "grain size"), we increase the amount of sample data indexed by each interval designation. Thus, by coarsening the variable, we increase sample sizes and achieve better statistical estimation. In this sense, increasing granularity provides an antidote to the so-called "curse of dimensionality", which relates to the exponential decrease in statistical power with increasing number of dimensions or variable cardinality.
* Independent of prior domain knowledge, it is often the case that meaningful regularities (i.e., regularities which can be detected by a given learning methodology, representational language, etc.) may exist at one level of resolution and not at another.

For example, a simple learner or pattern recognition system may seek to extract regularities satisfying a conditional probability threshold such as p(Y = y_j | X = x_i) ≥ α. In the special case where α = 1, this recognition system is essentially detecting "logical implication" of the form X = x_i → Y = y_j or, in words, "if X = x_i, then Y = y_j". The system's ability to recognize such implications (or, in general, conditional probabilities exceeding a threshold) is partially contingent on the resolution with which the system analyzes the variables.

As an example of this last point, consider a feature space in which each variable may be regarded at two different resolutions. Variable X may be regarded at a high (quaternary) resolution wherein it takes on the four values {x_1, x_2, x_3, x_4} or at a lower (binary) resolution wherein it takes on the two values {X_1, X_2}. Similarly, variable Y may be regarded at a high (quaternary) resolution or at a lower (binary) resolution, where it takes on the values {y_1, y_2, y_3, y_4} or {Y_1, Y_2}, respectively. At the high resolution, there are no detectable implications of the form X = x_i → Y = y_j, since every x_i is associated with more than one y_j and thus, for all x_i, p(Y = y_j | X = x_i) < 1. However, at the low (binary) variable resolution, two bilateral implications become detectable: X = X_1 ↔ Y = Y_1 and X = X_2 ↔ Y = Y_2, since X_1 occurs "iff" Y_1 occurs and X_2 occurs "iff" Y_2 occurs. Thus, a pattern recognition system scanning for implications of this kind would find them at the binary variable resolution, but would fail to find them at the higher quaternary variable resolution.
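The following sketch illustrates this effect numerically. It builds a small, hypothetical joint sample of X and Y at quaternary resolution, then merges values pairwise into binary "granules" and checks whether any conditional probability reaches 1; the data and the coarsening map are invented purely for illustration.

```python
from collections import Counter

# Hypothetical (x, y) observations at quaternary resolution: no x value
# determines a single y value, so no implication x_i -> y_j holds.
pairs = [("x1", "y1"), ("x1", "y2"), ("x2", "y1"), ("x2", "y2"),
         ("x3", "y3"), ("x3", "y4"), ("x4", "y3"), ("x4", "y4")]

# Coarsening map: {x1, x2} -> X1, {x3, x4} -> X2, and likewise for Y.
coarse_x = {"x1": "X1", "x2": "X1", "x3": "X2", "x4": "X2"}
coarse_y = {"y1": "Y1", "y2": "Y1", "y3": "Y2", "y4": "Y2"}

def implications(sample, alpha=1.0):
    """Return the (x, y) value pairs with p(Y=y | X=x) >= alpha."""
    joint = Counter(sample)
    marginal = Counter(x for x, _ in sample)
    return [(x, y) for (x, y), n in joint.items()
            if n / marginal[x] >= alpha]

print("fine resolution:  ", implications(pairs))                       # []
print("coarse resolution:", implications(
    [(coarse_x[x], coarse_y[y]) for x, y in pairs]))                   # both bilateral implications
```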

Issues and methods

It is not feasible to exhaustively test all possible discretization resolutions on all variables in order to see which combination of resolutions yields interesting or significant results. Instead, the feature space must be preprocessed (often by an entropy analysis of some kind) so that some guidance can be given as to how the discretization process should proceed. Moreover, one cannot generally achieve good results by naively analyzing and discretizing each variable independently, since this may obliterate the very interactions that we had hoped to discover.
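As an illustration of how an entropy analysis can guide discretization, the sketch below chooses a single cut point for a continuous attribute by minimizing the class entropy of the resulting two intervals, in the spirit of supervised methods such as Fayyad & Irani (1993); the toy data are hypothetical, and a real method would recurse on each interval and apply a stopping criterion.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def best_cut(values, labels):
    """Pick the boundary that minimizes the weighted entropy of the two halves."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    xs = [values[i] for i in order]
    ys = [labels[i] for i in order]
    best = None
    for k in range(1, len(xs)):
        if xs[k] == xs[k - 1]:
            continue                      # no boundary between equal values
        left, right = ys[:k], ys[k:]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(xs)
        cut = (xs[k - 1] + xs[k]) / 2
        if best is None or score < best[0]:
            best = (score, cut)
    return best

# Hypothetical temperatures with a binary class ("joins club" yes/no).
temps = [58.1, 61.4, 66.0, 70.2, 74.8, 79.9, 84.5, 91.3]
joins = ["no", "no", "no", "no", "yes", "yes", "yes", "yes"]
print(best_cut(temps, joins))   # (0.0, 72.5): a clean, zero-entropy split
```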

A sample of papers that address the problem of variable discretization in general, and multiple-variable discretization in particular, includes: Chiu, Wong & Cheung (1991), Bay (2001), Liu, Hussain, Tan & Dash (2002), Wang & Liu (1998), Zighed, Rabaséda & Rakotomalala (1998), Catlett (1991), Dougherty, Kohavi & Sahami (1995), Monti & Cooper (1999), Fayyad & Irani (1993), Chiu, Cheung & Wong (1990), Nguyen & Nguyen (1998), Grzymala-Busse & Stefanowski (2001), Ting (1994), Ludl & Widmer (2000), Pfahringer (1995), An & Cercone (1999), Chiu & Cheung (1989), Chmielewski & Grzymala-Busse (1996), and Lee & Shin (1994).

Variable granulation (clustering/aggregation/transformation)

Variable granulation is a term that could describe a variety of techniques, most of which are aimed at reducing dimensionality, redundancy, and storage requirements. We briefly describe some of the ideas here, and present pointers to the literature.

Variable transformation

A number of classical methods, such as principal component analysis, multidimensional scaling, factor analysis, and structural equation modeling, and their relatives, fall under the heading of "variable transformation". Also in this category are more modern areas of study such as dimensionality reduction, projection pursuit, and independent component analysis. The common goal of these methods is to find a representation of the data in terms of new variables, which are a linear or nonlinear transformation of the original variables, and in which important statistical relationships emerge. The resulting variable sets are almost always smaller than the original variable set, and hence these methods can be loosely said to impose a granulation on the feature space. These dimensionality reduction methods are all reviewed in the standard texts, such as Duda, Hart & Stork (2001), Witten & Frank (2005), and Hastie, Tibshirani & Friedman (2001).
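A minimal sketch of variable transformation by principal component analysis, using only NumPy: the original variables are replaced by a smaller number of orthogonal components that capture most of the variance. The data matrix here is randomly generated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 200 samples of 5 correlated variables (3 latent factors).
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 5))
X = latent @ mixing + 0.1 * rng.normal(size=(200, 5))

# PCA via the singular value decomposition of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                                    # keep the first k components
scores = Xc @ Vt[:k].T                   # the new, "granulated" variables
explained = (s**2 / np.sum(s**2))[:k]

print("shape of transformed data:", scores.shape)         # (200, 3)
print("fraction of variance explained:", explained.sum())
```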

Variable aggregation

A different class of variable granulation methods derive more from data clustering methodologies than from the linear systems theory informing the above methods. It was noted fairly early that one may consider "clustering" related variables in just the same way that one considers clustering related data. In data clustering, one identifies a group of similar entities (using a measure of "similarity" suitable to the domain), and then in some sense "replaces" those entities with a prototype of some kind. The prototype may be the simple average of the data in the identified cluster, or some other representative measure. But the key idea is that in subsequent operations, we may be able to use the single prototype for the data cluster (along with perhaps a statistical model describing how exemplars are derived from the prototype) to "stand in" for the much larger set of exemplars. These prototypes are generally chosen so as to capture most of the information of interest concerning the entities.

Similarly, it is reasonable to ask whether a large set of variables might be aggregated into a smaller set of "prototype" variables that capture the most salient relationships between the variables. Although variable clustering methods based on linear correlation have been proposed (Duda, Hart & Stork 2001; Rencher 2002), more powerful methods of variable clustering are based on the mutual information between variables. Watanabe has shown (Watanabe 1960, 1969) that for any set of variables one can construct a "polytomic" (i.e., n-ary) tree representing a series of variable agglomerations in which the ultimate "total" correlation among the complete variable set is the sum of the "partial" correlations exhibited by each agglomerating subset. Watanabe suggests that an observer might seek to thus partition a system in such a way as to minimize the interdependence between the parts "... as if they were looking for a natural division or a hidden crack."

One practical approach to building such a tree is to successively choose for agglomeration the two variables (either atomic variables or previously agglomerated variables) which have the highest pairwise mutual information (Kraskov, Stögbauer, Andrzejak & Grassberger 2003). The product of each agglomeration is a new (constructed) variable that reflects the local joint distribution of the two agglomerating variables, and thus possesses an entropy equal to their joint entropy. From a procedural standpoint, this agglomeration step involves replacing the two columns in the attribute-value table that represent the agglomerating variables with a single column that has a unique value for every unique combination of values in the replaced columns (Kraskov, Stögbauer, Andrzejak & Grassberger 2003). No information is lost by such an operation; however, if one is exploring the data for inter-variable relationships, it would generally "not" be desirable to merge redundant variables in this way, since in such a context it is likely to be precisely the redundancy or "dependency" between variables that is of interest; and once redundant variables are merged, their relationship to one another can no longer be studied.
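The sketch below illustrates one agglomeration step under these assumptions: mutual information is estimated from simple counts over a small, hypothetical attribute-value table, the most informative pair of columns is selected, and that pair is replaced by a single constructed column with one value per unique combination of the originals.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(a, b):
    """Plug-in estimate of I(A;B) in bits from two equal-length value lists."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

# Hypothetical attribute-value table: four variables, eight objects.
table = {
    "V1": [0, 0, 1, 1, 0, 1, 0, 1],
    "V2": [0, 0, 1, 1, 0, 1, 0, 1],   # redundant with V1
    "V3": [0, 1, 0, 1, 0, 1, 1, 0],
    "V4": [0, 1, 1, 0, 1, 0, 0, 1],
}

# Pick the pair of variables with the highest pairwise mutual information.
(u, v) = max(combinations(table, 2),
             key=lambda p: mutual_information(table[p[0]], table[p[1]]))

# Agglomerate: replace the two columns with one column of joint values.
table[f"({u},{v})"] = list(zip(table.pop(u), table.pop(v)))
print("merged:", u, "and", v, "->", list(table))
```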

See also OLAP aggregation for an application of aggregation in database systems.

Concept granulation (component analysis)

The origins of the "granular computing" ideology are to be found in the rough sets and fuzzy sets literatures. One of the key insights of rough set research—although by no means unique to it—is that, in general, the selection of different sets of features or variables will yield different "concept" granulations. Here, as in elementary rough set theory, by "concept" we mean a set of entities that are "indistinguishable" or "indiscernible" to the observer (i.e., a simple concept), or a set of entities that is composed from such simple concepts (i.e., a complex concept). In other words, by projecting a data set (attribute-value system) onto different sets of variables, we recognize alternative sets of equivalence-class "concepts" in the data, and these different sets of concepts will in general be conducive to the extraction of different relationships and regularities.

Equivalence class granulation

We illustrate with an example. Consider the attribute-value system below:

[Attribute-value table not reproduced here.]

Consider first the dependency of the attribute set Q = {P_4, P_5} on the attribute set P = {P_2, P_3}; that is, we ask what proportion of objects can be correctly classified into the classes of [x]_Q based on knowledge of [x]_P. The objects that can be "definitively" categorized according to the concept structure [x]_Q based on [x]_P are those in the set {O_1, O_2, O_3, O_7, O_8, O_10}, and since there are six of these, the dependency of Q on P is γ_P(Q) = 6/10. This might be considered an interesting dependency in its own right, but perhaps in a particular data mining application only stronger dependencies are desired.

We might then consider the dependency of the smaller attribute set Q = {P_4} on the attribute set P = {P_2, P_3}. The move from Q = {P_4, P_5} to Q = {P_4} induces a coarsening of the class structure [x]_Q, as will be seen shortly. We wish again to know what proportion of objects can be correctly classified into the (now larger) classes of [x]_Q based on knowledge of [x]_P. The equivalence classes of the new [x]_Q and of [x]_P are shown below.

[Equivalence-class listing not reproduced here.]

Clearly, [x]_Q has a coarser granularity than it did earlier. The objects that can now be "definitively" categorized according to the concept structure [x]_Q based on [x]_P constitute the complete universe {O_1, O_2, …, O_10}, and thus the dependency of Q on P is γ_P(Q) = 1. That is, knowledge of membership according to the category set [x]_P is adequate to determine category membership in [x]_Q with complete certainty; in this case we might say that P → Q. Thus, by coarsening the concept structure, we were able to find a stronger (deterministic) dependency. However, we also note that the classes induced in [x]_Q by the reduction in resolution necessary to obtain this deterministic dependency are now themselves large and few in number; as a result, the dependency we found, while strong, may be less valuable to us than the weaker dependency found earlier under the higher-resolution view of [x]_Q.
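A small sketch of how the dependency γ_P(Q) can be computed from an attribute-value table: objects are grouped into equivalence classes by their values on P, and an object counts as "definitively" categorizable when its entire P-class falls inside a single Q-class. The table used here is hypothetical and is not the one from the example above.

```python
from collections import defaultdict

# Hypothetical attribute-value system: object -> attribute values.
table = {
    "O1": {"P1": 1, "P2": 2, "P3": 0},
    "O2": {"P1": 1, "P2": 2, "P3": 0},
    "O3": {"P1": 2, "P2": 0, "P3": 0},
    "O4": {"P1": 2, "P2": 0, "P3": 1},
    "O5": {"P1": 3, "P2": 1, "P3": 1},
    "O6": {"P1": 3, "P2": 1, "P3": 1},
}

def classes(attrs):
    """Partition the objects into equivalence classes of indiscernibility on attrs."""
    part = defaultdict(set)
    for obj, vals in table.items():
        part[tuple(vals[a] for a in attrs)].add(obj)
    return list(part.values())

def dependency(P, Q):
    """gamma_P(Q): fraction of objects whose P-class lies wholly inside one Q-class."""
    q_classes = classes(Q)
    positive = sum(len(c) for c in classes(P)
                   if any(c <= q for q in q_classes))
    return positive / len(table)

print(dependency(["P1", "P2"], ["P3"]))   # 0.666..., since O3 and O4 share a P-class
```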

In general it is not possible to test all sets of attributes to see which induced concept structures yield the strongest dependencies, and this search must therefore be guided with some intelligence. Papers which discuss this issue, and others relating to the intelligent use of granulation, are those by Y. Y. Yao and Lotfi Zadeh listed in the References below.

Component granulation

Another perspective on concept granulation may be obtained from work on parametric models of categories. In mixture model learning, for example, a set of data is explained as a mixture of distinct Gaussian (or other) distributions. Thus, a large amount of data is "replaced" by a small number of distributions. The choice of the number of these distributions, and their size, can again be viewed as a problem of "concept granulation". In general, a better fit to the data is obtained by a larger number of distributions or parameters, but in order to extract meaningful patterns, it is necessary to constrain the number of distributions, thus deliberately "coarsening" the concept resolution. Finding the "right" concept resolution is a tricky problem for which many methods have been proposed (e.g., AIC, BIC, MDL, etc.), and these are frequently considered under the rubric of "model regularization".
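The following sketch, which assumes scikit-learn is available, illustrates this trade-off: Gaussian mixtures with increasing numbers of components are fitted to a small synthetic sample, and the Bayesian information criterion (BIC) is used to pick a concept resolution that balances fit against complexity. The data and the candidate range are illustrative only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Illustrative 1-D data actually drawn from three well-separated components.
X = np.concatenate([rng.normal(-5, 1.0, 150),
                    rng.normal(0, 1.0, 150),
                    rng.normal(6, 1.5, 150)]).reshape(-1, 1)

# Fit mixtures with 1..8 components; BIC penalizes extra parameters,
# so it tends to prefer a coarser model than raw likelihood would.
bics = []
for k in range(1, 9):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bics.append((gmm.bic(X), k))

best_bic, best_k = min(bics)
print("BIC-selected number of components:", best_k)   # typically 3 here
```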

Different interpretations of granular computing

Granular computing can be conceived as a framework of theories, methodologies, techniques, and tools that make use of information granules in the process of problem solving. In this sense, granular computing is used as an umbrella term to cover topics that have been studied in various fields in isolation. By examining all of these existing studies in light of the unified framework of granular computing and extracting their commonalities, it may be possible to develop a general theory for problem solving.

In a more philosophical sense, granular computing can describe a way of thinking that relies on the human ability to perceive the real world under various levels of granularity (i.e., abstraction) in order to abstract and consider only those things that serve a specific interest and to switch among different granularities. By focusing on different levels of granularity, one can obtain different levels of knowledge, as well as a greater understanding of the inherent knowledge structure. Granular computing is thus essential in human problem solving and hence has a very significant impact on the design and implementation of intelligent systems.

See also

* Rough set
* Discretization

References


* An, Aijun & Cercone, Nick (1999). "Discretization of continuous attributes for learning classification rules". In Ning Zhong & Lizhu Zhou (eds.), Methodologies for Knowledge Discovery and Data Mining: Proceedings of the Third Pacific-Asia Conference, PAKDD-99. Beijing, China, pp. 509–514. http://www.springerlink.com/content/l56xxg751cjx2hu7/

* Bargiela, A. & Pedrycz, W. (2003). Granular Computing: An Introduction. Kluwer Academic Publishers.

* Bay, Stephen D. (2001). "Multivariate discretization for set mining". Knowledge and Information Systems 3(4): 491–512. http://www.springerlink.com/content/x2ceg05lgaecqfcg/

* Catlett, J. (1991). "On changing continuous attributes into ordered discrete attributes". In Y. Kodratoff (ed.), Machine Learning—EWSL-91: European Working Session on Learning. Porto, Portugal, pp. 164–178. http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=112164

* Chiu, David K. Y. & Cheung, Benny (1989). "Hierarchical maximum entropy discretization". In Ryszard Janicki & Waldemar W. Koczkodaj (eds.), Computing and Information: Proceedings of the International Conference on Computing and Information (ICCI '89). Toronto, Canada: North-Holland, pp. 237–242.

* Chiu, David K. Y.; Cheung, Benny & Wong, Andrew K. C. (1990). "Information synthesis based on hierarchical maximum entropy discretization". Journal of Experimental and Theoretical Artificial Intelligence 2: 117–129.

* Chiu, David K. Y.; Wong, Andrew K. C. & Cheung, Benny (1991). "Information discovery through hierarchical maximum entropy discretization and synthesis". In Gregory Piatetsky-Shapiro & William J. Frawley (eds.), Knowledge Discovery in Databases. Cambridge, MA: MIT Press, pp. 126–140.

* Chmielewski, Michal R. & Grzymala-Busse, Jerzy W. (1996). "Global discretization of continuous attributes as preprocessing for machine learning". International Journal of Approximate Reasoning 15: 319–331. http://kuscholarworks.ku.edu/dspace/bitstream/1808/412/1/j36-draft.pdf

* Dougherty, James; Kohavi, Ron & Sahami, Mehran (1995). "Supervised and unsupervised discretization of continuous features". In Armand Prieditis & Stuart Russell (eds.), Machine Learning: Proceedings of the Twelfth International Conference (ICML 1995). Tahoe City, CA: Morgan Kaufmann, pp. 194–202. http://citeseer.ist.psu.edu/dougherty95supervised.html

* Duda, Richard O.; Hart, Peter E. & Stork, David G. (2001). Pattern Classification, 2nd ed. New York: John Wiley & Sons. http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471056693.html

* Fayyad, Usama M. & Irani, Keki B. (1993). "Multi-interval discretization of continuous-valued attributes for classification learning". Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93). Chambéry, France, pp. 1022–1027.

* Grzymala-Busse, Jerzy W. & Stefanowski, Jerzy (2001). "Three discretization methods for rule induction". International Journal of Intelligent Systems 16(1): 29–38. http://www3.interscience.wiley.com/cgi-bin/abstract/76501018/ABSTRACT?CRETRY=1&SRETRY=0

* Hastie, Trevor; Tibshirani, Robert & Friedman, Jerome (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer. http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-2190214-0

* Kraskov, Alexander; Stögbauer, Harald; Andrzejak, Ralph G. & Grassberger, Peter (2003). "Hierarchical clustering based on mutual information". arXiv preprint q-bio.QM/0311039. http://arxiv.org/abs/q-bio/0311039

* Lee, Changhwan & Shin, Dong-Guk (1994). "A context-sensitive discretization of numeric attributes for classification learning". In A. G. Cohn (ed.), Proceedings of the 11th European Conference on Artificial Intelligence (ECAI 94). Amsterdam, The Netherlands, pp. 428–432.

* Liu, Huan; Hussain, Farhad; Tan, Chew Lim & Dash, Manoranjan (2002). "Discretization: An enabling technique". Data Mining and Knowledge Discovery 6(4): 393–423. http://www.springerlink.com/content/tuxy32pw4lg6832m/

* Ludl, Marcus-Christopher & Widmer, Gerhard (2000). "Relative unsupervised discretization for association rule mining". In Djamel A. Zighed, Jan Komorowski & Jan Zytkow (eds.), Proceedings of the 4th European Conference on Principles of Data Mining and Knowledge Discovery (PKDD 2000). Lyon, France, pp. 148–158. http://www.springerlink.com/content/37yrbu7fyg7484lt/

* Monti, Stefano & Cooper, Gregory F. (1999). "A latent variable model for multivariate discretization". Uncertainty 99: The 7th International Workshop on Artificial Intelligence and Statistics. Fort Lauderdale, FL. http://citeseer.ist.psu.edu/monti99latent.html

* Nguyen, Hung Son & Nguyen, Sinh Hoa (1998). "Discretization methods in data mining". In Lech Polkowski & Andrzej Skowron (eds.), Rough Sets in Knowledge Discovery 1: Methodology and Applications. Heidelberg: Physica-Verlag, pp. 451–482.

* Pfahringer, Bernhard (1995). "Compression-based discretization of continuous attributes". In Armand Prieditis & Stuart Russell (eds.), Machine Learning: Proceedings of the Twelfth International Conference (ICML 1995). Tahoe City, CA: Morgan Kaufmann, pp. 456–463. http://citeseer.ist.psu.edu/pfahringer95compressionbased.html

* Rencher, Alvin C. (2002). Methods of Multivariate Analysis. New York: Wiley.

* Simon, Herbert A. & Ando, Albert (1963). "Aggregation of variables in dynamic systems". In Albert Ando, Franklin M. Fisher & Herbert A. Simon (eds.), Essays on the Structure of Social Science Models. Cambridge, MA: MIT Press, pp. 64–91.

* Simon, Herbert A. (1996). "The architecture of complexity: Hierarchic systems". In The Sciences of the Artificial, 2nd ed. Cambridge, MA: MIT Press, pp. 183–216.

* Ting, Kai Ming (1994). Discretization of Continuous-Valued Attributes and Instance-Based Learning (Technical Report No. 491). Sydney: Basser Department of Computer Science. http://citeseer.ist.psu.edu/145651.html

* Wang, Ke & Liu, Bing (1998). "Concurrent discretization of multiple attributes". Proceedings of the 5th Pacific Rim International Conference on Artificial Intelligence. London: Springer-Verlag, pp. 250–259. http://citeseer.ist.psu.edu/wang98concurrent.html

* Watanabe, Satosi (1960). "Information theoretical analysis of multivariate correlation". IBM Journal of Research and Development 4(1): 66–82.

* Watanabe, Satosi (1969). Knowing and Guessing: A Quantitative Study of Inference and Information. New York: Wiley.

* Witten, Ian H. & Frank, Eibe (2005). Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed. Amsterdam: Morgan Kaufmann. http://www.cs.waikato.ac.nz/~ml/weka/book.html

* Yao, Y. Y. (2004). "A partition model of granular computing". Lecture Notes in Computer Science (to appear).

* Yao, Y. Y. (2001). "On modeling data mining with granular computing". Proceedings of the 25th Annual International Computer Software and Applications Conference (COMPSAC 2001), pp. 638–643. http://portal.acm.org/citation.cfm?id=675398

* Yao, Yiyu (2006). "Granular computing for data mining". In Belur V. Dasarathy (ed.), Proceedings of the SPIE Conference on Data Mining, Intrusion Detection, Information Assurance, and Data Networks Security. http://www2.cs.uregina.ca/~yyao/PAPER_PDF/grcfordm06.pdf

* Yao, J. T. & Yao, Y. Y. (2002). "Induction of classification rules by granular computing". Proceedings of the Third International Conference on Rough Sets and Current Trends in Computing (RSCTC 2002). London, UK: Springer-Verlag, pp. 331–338. http://www2.cs.uregina.ca/~jtyao/Papers/53_RSCTC02.pdf

* Zadeh, L. A. (1997). "Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic". Fuzzy Sets and Systems 90: 111–127.

* Zighed, D. A.; Rabaséda, S. & Rakotomalala, R. (1998). "FUSINTER: A method for discretization of continuous attributes". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6(3): 307–326. http://portal.acm.org/citation.cfm?id=353472

