{{more footnotes|date=August 2009}}
{{Cleanup|reason=Rough merger, article consistency needs copyediting|date=October 2011}}
 
'''Fuzzy clustering''' is a class of [[algorithm]]s for [[cluster analysis]] in which the allocation of data points to clusters is not "hard" (all-or-nothing) but "fuzzy" in the same sense as [[fuzzy logic]].  
== Explanation of clustering ==
 
[[Data clustering]] is the process of dividing data elements into classes or clusters so that items in the same class are as similar as possible, and items in different classes are as dissimilar as possible. Depending on the nature of the data and the purpose for which clustering is being used, different measures of similarity may be used to place items into classes, where the similarity measure controls how the clusters are formed. Some examples of measures that can be used in clustering include distance, connectivity, and intensity.
 
In [[hard clustering]], data is divided into distinct clusters, where each data element belongs to exactly one cluster. In '''fuzzy clustering''' (also referred to as '''soft clustering'''), data elements can belong to more than one cluster, and associated with each element is a set of membership levels. These indicate the strength of the association between that data element and a particular cluster. Fuzzy clustering is a process of assigning these membership levels, and then using them to assign data elements to one or more clusters.
 
One of the most widely used fuzzy clustering algorithms is the [[#Fuzzy c-means clustering|Fuzzy C-Means]] (FCM) algorithm
(Bezdek 1981). The FCM algorithm attempts to partition a finite collection of n elements
<math>X = \{ x_1, \ldots, x_n \}</math> into a collection of c fuzzy clusters with respect to some given criterion.
Given a finite set of data, the algorithm returns a list of c cluster centres <math>C = \{ c_1, \ldots, c_c \}</math> and a partition matrix <math>W = (w_{ij}),\; w_{ij} \in [0, 1],\; i = 1, \ldots, n,\; j = 1, \ldots, c</math>, where each element w<sub>ij</sub> gives
the degree to which element x<sub>i</sub> belongs to cluster c<sub>j</sub>. Like the k-means algorithm, FCM
aims to minimize an objective function:
:<math>\sum_{i=1}^{n} \sum_{j=1}^{c} w_{ij}^m \left\| x_i - c_j \right\|^2,</math>
where the memberships are given by
:<math>w_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \frac{\left\| x_i - c_j \right\|}{\left\| x_i - c_k \right\|} \right)^{2/(m-1)}}.</math>
This differs from the k-means objective function by the addition of the membership values w<sub>ij</sub> and the fuzzifier m. The fuzzifier m determines the level of cluster fuzziness: a large
m results in smaller membership values w<sub>ij</sub> and hence fuzzier clusters, while in the limit m = 1 the
memberships converge to 0 or 1, which implies a crisp partitioning. In the absence of
experimentation or domain knowledge, m is commonly set to 2. Given n data points (x<sub>1</sub>, ..., x<sub>n</sub>) to be clustered, a number c of clusters with centres (c<sub>1</sub>, ..., c<sub>c</sub>), and a fuzziness level m, the basic FCM algorithm alternates between updating the cluster centres and the membership values until the memberships stop changing.
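The membership expression above can be evaluated directly for a single point. Below is a minimal sketch in plain Python; the point, the two centres, and the fuzzifier are made-up illustrative values, not from the source:

```python
import math

def memberships(x, centres, m=2.0):
    """Membership of point x in each cluster:
    w_j = 1 / sum_k (d(x, c_j) / d(x, c_k))^(2 / (m - 1))."""
    d = [math.dist(x, c) for c in centres]
    return [1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(len(d)))
            for j in range(len(d))]

# A point at (1, 0) lying between centres at (0, 0) and (3, 0).
w = memberships((1.0, 0.0), [(0.0, 0.0), (3.0, 0.0)])
print(w)  # → [0.8, 0.2]: the closer centre receives the larger membership
```

Note that the memberships of a point always sum to 1, so they behave like a distribution of the point over the clusters.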
 
== Fuzzy c-means clustering ==
<!-- merged from [[cluster analysis]], not yet cleaned up -->
In fuzzy clustering, every point has a degree of belonging to clusters, as in [[fuzzy logic]], rather than belonging completely to just one cluster. Thus, points on the edge of a cluster may be ''in the cluster'' to a lesser degree than points in the center of the cluster. An overview and comparison of different fuzzy clustering algorithms is available.<ref>Nock, R. and Nielsen, F. (2006) [http://www1.univ-ag.fr/~rnock/Articles/Drafts/tpami06-nn.pdf  "On Weighting Clustering"], IEEE Trans. on Pattern Analysis and Machine Intelligence, 28 (8), 1&ndash;13</ref>
 
Any point ''x'' has a set of coefficients giving the degree of being in the ''k''th cluster ''w''<sub>''k''</sub>(''x''). With fuzzy ''c''-means, the centroid of a cluster is the mean of all points, weighted by their degree of belonging to the cluster:
 
:<math>c_k = {{\sum_x {w_k(x)} ^ {m} x} \over {\sum_x {w_k(x)} ^ {m}}}.</math>
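This weighted-centroid formula can be evaluated directly. Here is a small plain-Python sketch with made-up one-dimensional points and membership values (m = 2 is assumed):

```python
def fuzzy_centroid(points, weights, m=2.0):
    """c_k = sum_x w_k(x)^m * x / sum_x w_k(x)^m, for 1-D points."""
    wm = [w ** m for w in weights]
    return sum(wi * x for wi, x in zip(wm, points)) / sum(wm)

# Three 1-D points; the last one barely belongs to this cluster.
c = fuzzy_centroid([0.0, 1.0, 10.0], [0.9, 0.8, 0.1])
```

Because the low-membership point at 10.0 is weighted by 0.1² = 0.01, the centroid stays close to the two high-membership points near 0 and 1.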
 
The degree of belonging, ''w''<sub>''k''</sub>(''x''), is related inversely to the distance from ''x'' to the cluster center as calculated on the previous pass. It also depends on a parameter ''m'' that controls how much weight is given to the closest center. The fuzzy ''c''-means algorithm is very similar to the [[K-means clustering | ''k''-means algorithm]]:<ref name=Bezdek1981>{{Cite book
| title = Pattern Recognition with Fuzzy Objective Function Algorithms
| year = 1981
| author = Bezdek, James C.
| isbn = 0-306-40671-3
| postscript = <!--None-->
}}</ref>
* [[Determining the number of clusters in a data set|Choose a number of clusters]].
* Assign coefficients randomly to each point for being in the clusters.
* Repeat until the algorithm has converged (that is, the coefficients' change between two iterations is no more than <math>\varepsilon</math>, the given sensitivity threshold):
** Compute the centroid for each cluster, using the formula above.
** For each point, compute its coefficients of being in the clusters, using the formula above.
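The loop above can be sketched end-to-end in plain Python. This is a minimal illustration rather than a production implementation; the sample points, number of clusters, and tolerance are made up:

```python
import math
import random

def fcm(points, c, m=2.0, eps=1e-6, seed=0, max_iter=200):
    """Fuzzy c-means: alternate the centroid and membership updates
    until no coefficient changes by more than eps."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    # Randomly assign each point coefficients that sum to 1.
    w = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        w.append([v / s for v in row])
    for _ in range(max_iter):
        # Centroid of each cluster: mean of all points weighted by w_ij^m.
        centres = []
        for j in range(c):
            wm = [w[i][j] ** m for i in range(n)]
            total = sum(wm)
            centres.append(tuple(
                sum(wm[i] * points[i][t] for i in range(n)) / total
                for t in range(dim)))
        # Membership update from the distances to the new centres.
        new_w = []
        for i in range(n):
            d = [max(math.dist(points[i], cj), 1e-12) for cj in centres]
            new_w.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c)) for j in range(c)])
        delta = max(abs(new_w[i][j] - w[i][j])
                    for i in range(n) for j in range(c))
        w = new_w
        if delta <= eps:
            break
    return centres, w

# Two well-separated groups of 2-D points (made-up data).
pts = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (5.0, 5.0), (5.2, 4.9), (4.9, 5.1)]
centres, w = fcm(pts, c=2)
```

As with k-means, the result depends on the random initialisation, so fixing the seed (or running several restarts) is the usual practice.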
 
The algorithm minimizes intra-cluster variance as well, but has the same problems as ''k''-means; the minimum is a local minimum, and the results depend on the initial choice of weights.
 
Using a mixture of Gaussians together with the [[expectation-maximization algorithm]] is a more statistically formalized method that includes some of these ideas: each point has a partial membership in each class.
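That idea can be illustrated with a small expectation-maximization routine for a two-component one-dimensional Gaussian mixture, written in plain Python. This is a didactic sketch with made-up data; the initialisation and iteration count are arbitrary assumptions:

```python
import math

def em_two_gaussians(xs, iters=50):
    """EM for a 1-D mixture of two Gaussians; returns each point's
    posterior membership (responsibility) in component 0."""
    mu = [min(xs), max(xs)]   # crude but deterministic initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities play the role of fuzzy memberships.
        resp = []
        for x in xs:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
                 / math.sqrt(2.0 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([pk / s for pk in p])
        # M-step: re-estimate mixing weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-9)
    return [r[0] for r in resp]

# Two tight groups of 1-D points (made-up data).
r0 = em_two_gaussians([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
```

Unlike fuzzy c-means, this fits a full probabilistic model, so the memberships are posterior probabilities rather than distance-based weights.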
 
Another algorithm closely related to Fuzzy C-Means is [[Soft K-means]].
 
Fuzzy c-means has been a very important tool for image processing in clustering objects in an image. In the 1970s, mathematicians introduced the spatial term into the FCM algorithm to improve the accuracy of clustering under noise.<ref name="fuzzy c means">{{Cite journal|url=http://www.cvip.uofl.edu/wwwcvip/research/publications/Pub_Pdf/2002/3.pdf|title=A Modified Fuzzy C-Means Algorithm for Bias Field Estimation and Segmentation of MRI Data|journal=IEEE Transactions on Medical Imaging|volume=21|issue=3|year=2002|pages=193–199|first1=Mohamed N.|last1=Ahmed|first2=Sameh M.|last2=Yamany|first3=Nevin|last3=Mohamed|first4=Aly A.|last4=Farag|first5=Thomas|last5=Moriarty|doi=10.1109/42.996338|pmid=11989844}}</ref>
 
==See also==
*[[FLAME Clustering]]
*[[Cluster Analysis]]
*[[Expectation-maximization algorithm]] (a similar, but more statistically formalized method)
 
==References==
{{reflist}}
 
==External links==
*[http://reference.wolfram.com/applications/fuzzylogic/Manual/12.html Fuzzy Clustering in Wolfram Research]
*[http://publishing.eur.nl/ir/repub/asset/57/erimrs20001123094510.pdf ''Extended Fuzzy Clustering Algorithms'' by Kaymak, U. and Setnes, M.]
*[http://codingplayground.blogspot.com/2009/04/fuzzy-clustering.html ''Fuzzy Clustering in C++ and Boost''] by Antonio Gulli
*[http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/cmeans.html Concise description with examples]
 
[[Category:Data clustering algorithms]]
