A Python implementation of the Adaptive Mixture ICA (AMICA) algorithm, based on the original Fortran implementation. It is designed to be more user-friendly and easier to integrate with other Python libraries.
NOTE: This is a work in progress and may not be fully functional yet. Users should not rely on this implementation for research or production purposes, as the results may not be accurate or reliable.
AMICA (Adaptive Mixture ICA) is an advanced blind source separation algorithm that uses adaptive mixtures of independent component analyzers. This implementation provides:
- Multiple source models
- Different PDF types
- Newton optimization
- Component sharing
- Outlier rejection
- Data preprocessing (mean removal, sphering)
# Clone the repository
git clone https://github.com/neuromechanist/pyAMICA.git
cd pyAMICA
# Install the package
pip install -e .
- Create a parameter file (e.g., params.json):
{
  "files": ["data1.bin", "data2.bin"],
  "num_samples": [100, 100],
  "data_dim": 64,
  "field_dim": [1000, 1000],
  "num_models": 1,
  "num_mix": 3,
  "max_iter": 2000
}
- Run AMICA:
python amica_cli.py params.json --outdir results
- files: List of binary data files
- num_samples: Number of samples per file
- data_dim: Number of channels/dimensions
- field_dim: Number of samples per field for each file
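For illustration, a raw binary file matching the example parameters above could be produced with NumPy. The float64 dtype and channels-by-samples ordering are assumptions here, not a documented format; check amica_data.py for what the loader actually expects.

```python
import numpy as np

# Hypothetical example: 64 channels x 1000 samples of random data.
# dtype (float64) and channel-major layout are assumptions; verify
# against amica_data.py before writing real data this way.
n_channels, n_samples = 64, 1000
data = np.random.randn(n_channels, n_samples)

# Write a flat binary file that "files" in params.json can point to.
data.astype(np.float64).tofile("data1.bin")
```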
- num_models: Number of models (default: 1)
- num_mix: Number of mixture components (default: 3)
- num_comps: Number of components (-1 for data_dim * num_models)
- pdftype: PDF type (1: Generalized Gaussian, 2: Logistic, etc.)
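As a rough sketch of what pdftype controls, the generalized Gaussian density behind pdftype 1 has the form p(s) ∝ exp(-|s|^rho). The snippet below is a minimal unit-scale version for intuition only; the exact parameterization (scale, location, mixture weights) lives in amica_pdf.py and may differ.

```python
import numpy as np
from scipy.special import gamma

def gen_gauss_logpdf(s, rho=1.5):
    """Log-density of a unit-scale generalized Gaussian, p(s) ∝ exp(-|s|**rho).

    Illustrative sketch only; see amica_pdf.py for the parameterization
    actually used by the solver.
    """
    log_norm = np.log(rho / (2.0 * gamma(1.0 / rho)))
    return log_norm - np.abs(s) ** rho
```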
- max_iter: Maximum iterations (default: 2000)
- lrate: Initial learning rate (default: 0.1)
- minlrate: Minimum learning rate (default: 1e-12)
- lratefact: Learning rate decay factor (default: 0.5)
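A common way these settings interact (an assumption about this implementation; the actual rule is in amica.py) is to shrink the learning rate by lratefact whenever the log-likelihood decreases, never dropping below minlrate:

```python
def decay_lrate(lrate, ll_new, ll_old, lratefact=0.5, minlrate=1e-12):
    """Hypothetical sketch of learning-rate decay on a likelihood decrease."""
    if ll_new < ll_old:
        lrate = max(lrate * lratefact, minlrate)
    return lrate
```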
- do_newton: Use Newton optimization (default: false)
- newt_start: Iteration to start Newton (default: 20)
- newt_ramp: Newton ramp length (default: 10)
- newtrate: Newton learning rate (default: 0.5)
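One plausible reading of newt_start, newt_ramp, and newtrate (an assumption; see amica_newton.py for the actual schedule) is that Newton updates are disabled until newt_start and their step size then ramps linearly up to newtrate over newt_ramp iterations:

```python
def newton_lrate(iteration, newt_start=20, newt_ramp=10, newtrate=0.5):
    """Hypothetical Newton step-size schedule: zero before newt_start,
    then a linear ramp up to newtrate over newt_ramp iterations."""
    if iteration < newt_start:
        return 0.0
    frac = min((iteration - newt_start + 1) / newt_ramp, 1.0)
    return frac * newtrate
```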
- share_comps: Enable component sharing (default: false)
- comp_thresh: Component correlation threshold (default: 0.99)
- share_start: Iteration to start sharing (default: 100)
- share_int: Sharing interval (default: 100)
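The sharing criterion is correlation-based. A rough sketch of the idea (function name and details are assumptions, not the actual code) is to compare component activations across models and treat pairs whose absolute correlation exceeds comp_thresh as shared:

```python
import numpy as np

def find_shared_components(s_model1, s_model2, comp_thresh=0.99):
    """Return (i, j) pairs whose component activations (components x samples)
    are correlated above comp_thresh. Illustrative sketch only."""
    shared = []
    for i in range(s_model1.shape[0]):
        for j in range(s_model2.shape[0]):
            r = np.corrcoef(s_model1[i], s_model2[j])[0, 1]
            if abs(r) >= comp_thresh:
                shared.append((i, j))
    return shared
```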
- do_mean: Remove mean (default: true)
- do_sphere: Perform sphering (default: true)
- do_approx_sphere: Use approximate sphering (default: true)
- pcakeep: Number of PCA components to keep (optional)
- pcadb: dB threshold for PCA components (optional)
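Mean removal and sphering follow the usual recipe: subtract the channel means, then whiten with the inverse square root of the data covariance. The sketch below shows only that baseline case; the approximate-sphering and PCA-reduction options (do_approx_sphere, pcakeep, pcadb) handled in amica_data.py are not reproduced here.

```python
import numpy as np

def preprocess(data):
    """Remove the mean and sphere the data (channels x samples).

    Minimal sketch of do_mean / do_sphere; approximate sphering and
    PCA reduction are intentionally omitted.
    """
    mean = data.mean(axis=1, keepdims=True)
    centered = data - mean
    cov = np.cov(centered)
    evals, evecs = np.linalg.eigh(cov)
    sphere = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    return sphere @ centered, mean, sphere
```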
- do_opt_block: Optimize block size (default: true)
- block_size: Initial block size (default: 128)
- blk_min: Minimum block size (default: 128)
- blk_max: Maximum block size (default: 1024)
- blk_step: Block size step (default: 128)
- do_reject: Enable outlier rejection (default: false)
- rejsig: Rejection threshold in std (default: 3.0)
- rejstart: Iteration to start rejection (default: 2)
- rejint: Rejection interval (default: 3)
- maxrej: Maximum rejections (default: 1)
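Rejection is based on per-sample log-likelihood: samples whose log-likelihood falls more than rejsig standard deviations below the mean are dropped. The sketch below shows only the thresholding rule; the scheduling details (rejstart, rejint, maxrej) are handled in amica.py.

```python
import numpy as np

def reject_outliers(sample_loglik, rejsig=3.0):
    """Return a boolean mask keeping samples whose log-likelihood is within
    rejsig standard deviations of the mean. Illustrative sketch only."""
    mu, sd = sample_loglik.mean(), sample_loglik.std()
    return sample_loglik > (mu - rejsig * sd)
```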
- do_history: Save optimization history (default: false)
- histstep: History saving interval (default: 10)
- writestep: Result writing interval (default: 100)
- amica.py: Main AMICA implementation
- amica_pdf.py: PDF type implementations
- amica_newton.py: Newton optimization
- amica_data.py: Data loading/preprocessing
- amica_cli.py: Command-line interface
- params.json: Example parameter file
Results are saved in NumPy format:
- A.npy: Mixing matrix
- W.npy: Unmixing matrices
- c.npy: Bias terms
- mu.npy: Component means
- alpha.npy: Mixture weights
- beta.npy: Scale parameters
- rho.npy: Shape parameters
- gm.npy: Model weights
- mean.npy: Data mean
- sphere.npy: Sphering matrix
- comp_list.npy: Component assignments
- ll.npy: Log-likelihood history
- nd.npy: Gradient norm history (if use_grad_norm=true)
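As a usage sketch, the estimated sources can be recovered by applying the saved mean, sphering matrix, and unmixing matrix to the raw data. The array shapes assumed below (W stored per model, data as channels x samples, float64 input) are assumptions; verify them against the saved arrays before relying on this.

```python
import numpy as np

# Load saved results (paths assume the --outdir results example above).
W = np.load("results/W.npy")          # unmixing matrices (assumed: models x channels x channels)
sphere = np.load("results/sphere.npy")
mean = np.load("results/mean.npy")

# Reload the data the same way it was given to AMICA (dtype/shape assumed).
data = np.fromfile("data1.bin", dtype=np.float64).reshape(64, -1)

# Estimated sources for model 0, under the shape assumptions noted above.
sources = W[0] @ sphere @ (data - mean.reshape(-1, 1))
```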
- Palmer, J. A., Kreutz-Delgado, K., & Makeig, S. (2012). AMICA: An adaptive mixture of independent component analyzers with shared components.
This project is licensed under the MIT License - see the LICENSE file for details.