How does the ICA algorithm work in a secure manner?
by barkkathulla, 2012-09-19 11:31:47
Independent Component Analysis (ICA) separates a multivariate signal into statistically independent components. Two broad families of algorithms exist:
1) Minimization of Mutual Information
2) Maximization of non-Gaussianity
The non-Gaussianity family of ICA algorithms, motivated by the central limit theorem, uses measures such as kurtosis and negentropy. The minimization-of-mutual-information (MMI) family uses measures such as the Kullback-Leibler divergence and maximum entropy.
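As a quick illustration (not from the original post), here is a minimal Python sketch of the two non-Gaussianity measures mentioned above: excess kurtosis and a Hyvarinen-style negentropy approximation with G(u) = log cosh(u). The signal choices and sample sizes are arbitrary assumptions for the example.

import numpy as np
from scipy.stats import kurtosis

def negentropy_approx(y, n_gauss=100_000, seed=None):
    # Approximate J(y) ~ (E[G(y)] - E[G(v)])^2 with G(u) = log cosh(u),
    # where v is a standard Gaussian variable (Hyvarinen's approximation).
    rng = np.random.default_rng(seed)
    y = (y - y.mean()) / y.std()            # standardize: zero mean, unit variance
    gauss = rng.standard_normal(n_gauss)    # reference Gaussian sample
    G = lambda u: np.log(np.cosh(u))
    return (G(y).mean() - G(gauss).mean()) ** 2

rng = np.random.default_rng(0)
laplace = rng.laplace(size=10_000)          # super-Gaussian signal
normal = rng.standard_normal(10_000)        # Gaussian signal
print("excess kurtosis:", kurtosis(laplace), kurtosis(normal))
print("negentropy approx:", negentropy_approx(laplace, seed=1), negentropy_approx(normal, seed=1))

The Laplacian signal scores clearly away from zero on both measures, while the Gaussian signal scores near zero; that contrast is exactly what the non-Gaussianity family exploits.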
Typical ICA algorithms use centering, whitening (usually via the eigenvalue decomposition), and dimensionality reduction as preprocessing steps to simplify the problem for the actual iterative algorithm. Whitening and dimension reduction can be achieved with principal component analysis or singular value decomposition. Whitening ensures that all dimensions are treated equally a priori before the algorithm is run. Algorithms for ICA include infomax, FastICA, and JADE, among many others.
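The sketch below, assuming a made-up two-source mixture, walks through those preprocessing steps (centering, whitening via the eigenvalue decomposition of the covariance) and then uses FastICA from scikit-learn as the iterative step. The mixing matrix and source signals are hypothetical choices for illustration only.

import numpy as np
from sklearn.decomposition import FastICA

# Toy mixture: two independent sources, linearly mixed (assumed setup)
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(3 * t), np.sign(np.cos(5 * t))]   # sine + square-wave sources
A = np.array([[1.0, 0.5], [0.4, 1.2]])             # "unknown" mixing matrix
X = S @ A.T                                        # observed mixed signals

# Preprocessing: centering, then whitening via eigenvalue decomposition
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
X_white = Xc @ eigvec @ np.diag(eigval ** -0.5) @ eigvec.T   # unit covariance

# Iterative step: FastICA on the already-whitened data
ica = FastICA(n_components=2, whiten=False, random_state=0)
S_est = ica.fit_transform(X_white)   # estimated sources, up to order and scale

In practice scikit-learn's FastICA can do the whitening internally; it is spelled out here only to show where the preprocessing fits.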
In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, or the proper scaling (including sign) of the source signals.
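To make the ordering and sign ambiguity concrete, here is a small self-contained sketch using hypothetical toy data and scikit-learn's FastICA: correlating the true and estimated sources shows that each recovered component matches one true source only up to an arbitrary permutation, sign, and scale.

import numpy as np
from sklearn.decomposition import FastICA

# Two independent sources, linearly mixed (toy data, assumed for illustration)
rng = np.random.default_rng(1)
S = np.c_[rng.laplace(size=5000), rng.uniform(-1, 1, size=5000)]
X = S @ np.array([[1.0, 0.6], [0.3, 1.1]]).T

S_est = FastICA(n_components=2, random_state=0).fit_transform(X)

# Correlate true vs. estimated sources: each row has one entry near +1 or -1,
# but which column it lands in (ordering) and its sign are arbitrary,
# and the estimated amplitudes differ from the originals (scale ambiguity).
print(np.round(np.corrcoef(S.T, S_est.T)[:2, 2:], 2))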
ICA is important for blind signal separation and has many practical applications. It is closely related to (or even a special case of) the search for a factorial code of the data, i.e., a new vector-valued representation in which each data vector is uniquely encoded by the resulting code vector (loss-free coding) while the code components are statistically independent.