MOA (CapyMOA Edition)

This is a fork of the Waikato/moa repository, packaged with CapyMOA, that adds functionality to MOA and improves its integration with CapyMOA.

A long-term project goal is to merge all CapyMOA changes back into the upstream Waikato/moa:master branch. This will be an iterative process and much slower than CapyMOA development. For now, this fork exists so that CapyMOA development is not blocked on upstream MOA changes. Changes from this branch will be cherry-picked into upstream MOA as appropriate, and the fork will periodically be rebased on top of the latest upstream MOA changes.

To contribute to the MOA CapyMOA-edition branch, open a pull request from your feature branch into adaptive-machine-learning/moa:capymoa.

To build MOA for use with CapyMOA run:

cd moa
mvn package -DskipTests -Dmaven.javadoc.skip=true -Dlatex.skipBuild=true

This creates a target/moa-*-jar-with-dependencies.jar file that CapyMOA can use. To tell CapyMOA where this file is, set the CAPYMOA_MOA_JAR environment variable to its path.

You can do this temporarily in your terminal session. Note that the shell does not expand glob patterns inside a plain variable assignment, so use command substitution (or the exact filename):

export CAPYMOA_MOA_JAR="$(echo /path/to/moa/target/moa-*-jar-with-dependencies.jar)"
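You can also locate the jar from Python before CapyMOA is imported. A minimal sketch, assuming a standard Maven layout; the `find_moa_jar` helper and the `/path/to/moa` placeholder are illustrative, not part of CapyMOA:

```python
import glob
import os

def find_moa_jar(target_dir):
    """Return the last moa-*-jar-with-dependencies.jar (sorted by name) in target_dir, or None."""
    pattern = os.path.join(target_dir, "moa-*-jar-with-dependencies.jar")
    matches = sorted(glob.glob(pattern))
    return matches[-1] if matches else None

# Substitute the path to your own checkout here.
jar = find_moa_jar("/path/to/moa/target")
if jar is not None:
    # CapyMOA reads this environment variable at import time.
    os.environ["CAPYMOA_MOA_JAR"] = jar
```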

To check that CapyMOA can find MOA, run:

python -c "import capymoa; capymoa.about()"
# CapyMOA 0.10.0
#   CAPYMOA_DATASETS_DIR: .../datasets
#   CAPYMOA_MOA_JAR:      .../moa/moa/target/moa-2024.07.2-SNAPSHOT-jar-with-dependencies.jar
#   CAPYMOA_JVM_ARGS:     ['-Xmx8g', '-Xss10M']
#   JAVA_HOME:            /usr/lib/jvm/java-21-openjdk
#   MOA version:          aa955ebbcbd99e9e1d19ab16582e3e5a6fca5801ba250e4d164c16a89cf798ea
#   JAVA version:         21.0.7

MOA (Massive Online Analysis)


MOA is the most popular open-source framework for data stream mining, with a very active and growing community. It includes a collection of machine learning algorithms (classification, regression, clustering, outlier detection, concept drift detection, and recommender systems) and tools for evaluation. Like the related WEKA project, MOA is written in Java, but it scales to more demanding problems.

http://moa.cms.waikato.ac.nz/

Using MOA

MOA performs big data stream mining in real time and large-scale machine learning. MOA can be extended with new mining algorithms, stream generators, and evaluation measures. The goal is to provide a benchmark suite for the stream mining community.

Citing MOA

If you want to refer to MOA in a publication, please cite the following JMLR paper:

Albert Bifet, Geoff Holmes, Richard Kirkby, Bernhard Pfahringer (2010); MOA: Massive Online Analysis; Journal of Machine Learning Research 11: 1601-1604
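In BibTeX form (constructed from the reference above; the entry key is arbitrary):

```bibtex
@article{bifet2010moa,
  author  = {Albert Bifet and Geoff Holmes and Richard Kirkby and Bernhard Pfahringer},
  title   = {{MOA}: Massive Online Analysis},
  journal = {Journal of Machine Learning Research},
  volume  = {11},
  pages   = {1601--1604},
  year    = {2010}
}
```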
