Filter the input data through the most significant of its principal components.

**Internal variables of interest**

``self.avg``: Mean of the input data (available after training).

``self.v``: Transpose of the projection matrix (available after training).

``self.d``: Variance corresponding to the PCA components (eigenvalues of the covariance matrix).

``self.explained_variance``: When ``output_dim`` has been specified as a fraction of the total variance, this is the fraction of the total variance that is actually explained.

More information about Principal Component Analysis, a.k.a. the discrete Karhunen-Loeve transform, can be found among others in I.T. Jolliffe, Principal Component Analysis, Springer-Verlag (1986).
Inherited from ``Node``
Inherited from ``Node``:

- ``_train_seq``: List of tuples
- ``dtype``: dtype
- ``input_dim``: Input dimensions
- ``output_dim``: Output dimensions
- ``supported_dtypes``: Supported dtypes
The number of principal components to be kept can be specified either directly via ``output_dim`` (e.g. ``output_dim=10`` means 10 components are kept) or as the fraction of variance to be explained (e.g. ``output_dim=0.95`` means that as many components as necessary are kept in order to explain 95% of the input variance).

Other keyword arguments:

``svd`` -- if True, use Singular Value Decomposition instead of the standard eigenvalue problem solver. Use it when PCANode complains about singular covariance matrices.

``reduce`` -- keep only those principal components whose variance is larger than ``var_abs``, whose variance relative to the first principal component is larger than ``var_rel``, and whose variance relative to the total variance is larger than ``var_part`` (set ``var_part`` to None or 0 for no filtering). Note: when the ``reduce`` switch is enabled, the actual number of principal components (``self.output_dim``) may differ from the one set when creating the instance.
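The fractional ``output_dim`` behaviour can be sketched with plain numpy (an illustrative approximation of the idea, not PCANode's actual implementation): diagonalize the covariance matrix and keep components until their cumulative variance reaches the requested fraction.

```python
import numpy as np

# Illustrative sketch only: pick how many principal components are needed
# to explain a given fraction (here 0.95) of the total variance.
rng = np.random.default_rng(0)
# 200 observations of 5 variables; the first two directions dominate.
x = rng.standard_normal((200, 5)) * np.array([5.0, 3.0, 0.5, 0.3, 0.1])

cov = np.cov(x, rowvar=False)        # covariance of the input data
d, v = np.linalg.eigh(cov)           # eigenvalues come out ascending
d, v = d[::-1], v[:, ::-1]           # sort by decreasing variance

# cumulative fraction of variance explained by the first k components
frac = np.cumsum(d) / d.sum()
# smallest k whose cumulative variance reaches 95%
k = int(np.searchsorted(frac, 0.95) + 1)
print(k, frac[k - 1])
```

With two dominant directions in the data, two components already exceed the 95% threshold, so ``k`` ends up well below the input dimension.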
Return the eigenvector range and set the output dimension if required. This is used when the output dimension is smaller than the input dimension, so that only the eigenvectors with the largest eigenvalues have to be kept.
Project the input onto the first ``n`` principal components. If ``n`` is not set, use all available components.
Project ``y`` back to the input space using the first ``n`` components. If ``n`` is not set, use all available components.
Stop the training phase.

Keyword arguments: with ``debug=True``, if ``stop_training`` fails because of singular covariance matrices, the singular matrices themselves are stored in ``self.cov_mtx`` and ``self.dcov_mtx`` so that they can be examined.
Project the input onto the first ``n`` principal components. If ``n`` is not set, use all available components.
Return the fraction of the original variance that can be explained by ``self._output_dim`` PCA components. If, for example, ``output_dim`` has been set to 0.95, the explained variance could be something like 0.958. Note that if ``output_dim`` was explicitly set to a fixed number of components, there is no way to calculate the explained variance.
Return the projection matrix.

Return the back-projection matrix (i.e. the reconstruction matrix).
Project ``y`` back to the input space using the first ``n`` components. If ``n`` is not set, use all available components.
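The projection/back-projection pair described above amounts to mapping the centered data onto the leading eigenvectors and back. A minimal numpy sketch (assuming an orthonormal column matrix ``v`` as produced by an eigendecomposition; this is an illustration, not the library's code):

```python
import numpy as np

rng = np.random.default_rng(1)
# 100 observations of 4 correlated variables
x = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 4))

avg = x.mean(axis=0)                      # mean of the input data
d, v = np.linalg.eigh(np.cov(x, rowvar=False))
d, v = d[::-1], v[:, ::-1]                # columns sorted by decreasing variance

n = 4                                     # keep all components in this sketch
y = (x - avg) @ v[:, :n]                  # forward projection ("execute")
x_rec = y @ v[:, :n].T + avg              # back-projection ("inverse")

# with all components kept, reconstruction is exact up to rounding error
err = np.abs(x - x_rec).max()
```

Keeping fewer than all components (``n < 4``) turns the round trip into the least-squares reconstruction from the retained subspace.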
Stop the training phase.

Keyword arguments: with ``debug=True``, if ``stop_training`` fails because of singular covariance matrices, the singular matrices themselves are stored in ``self.cov_mtx`` and ``self.dcov_mtx`` so that they can be examined.
Update the internal structures according to the input data ``x``. ``x`` is a matrix having different variables on different columns and observations on the rows.

By default, subclasses should overwrite ``_train`` to implement their training phase. The docstring of the ``_train`` method overwrites this docstring.

Note: a subclass supporting multiple training phases should implement the *same* signature for all the training phases and document the meaning of the arguments in the ``_train`` method docstring. Having consistent signatures is a requirement for using the node in a flow.
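Incremental training of this kind can be sketched by accumulating sufficient statistics per chunk and finalizing them when training stops. The class and attribute names below are hypothetical, chosen only to illustrate the pattern; they are not PCANode's internals:

```python
import numpy as np

class CovAccumulator:
    """Hypothetical sketch: accumulate mean/covariance statistics chunk by chunk."""

    def __init__(self, dim):
        self.s = np.zeros(dim)            # running sum of observations
        self.ss = np.zeros((dim, dim))    # running sum of outer products
        self.n = 0                        # number of observations seen

    def train(self, x):
        # x: observations on rows, variables on columns
        self.s += x.sum(axis=0)
        self.ss += x.T @ x
        self.n += len(x)

    def stop_training(self):
        avg = self.s / self.n
        # unbiased covariance recovered from the accumulated statistics
        cov = (self.ss - self.n * np.outer(avg, avg)) / (self.n - 1)
        return avg, cov

rng = np.random.default_rng(2)
x = rng.standard_normal((300, 3))
acc = CovAccumulator(3)
for chunk in np.array_split(x, 5):        # feed the data in 5 chunks
    acc.train(chunk)
avg, cov = acc.stop_training()
```

Feeding the data in chunks yields exactly the same mean and covariance as a single full-batch computation, which is why multiple ``train`` calls followed by one ``stop_training`` are equivalent to training on the concatenated data.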
Generated by Epydoc 3.0.1 on Thu Mar 10 15:28:22 2016 (http://epydoc.sourceforge.net)