# MIRROR-Eval

MIRROR-Eval is an evaluation framework for MIRROR models.
## Installation

Using conda:

```bash
# Clone the repository
git clone https://github.com/DRAGNLabs/MIRROR-Eval.git
cd MIRROR-Eval

# Create and activate the conda environment
conda env create -f environment.yml
conda activate mirror-eval

# Install the package
pip install .
```

If you're doing development, install in editable mode with the dev extras:

```bash
pip install -e ".[dev]"
```
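A quick way to confirm the install worked, run from any directory outside the repository:

```python
# Confirm the package is importable after installation.
import mirror_eval

print(mirror_eval.__file__)
```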
## Building

To build the package as a distributable wheel:

```bash
# Install build tools
pip install build

# Build the package
python -m build
```

This will create distribution files in the `dist/` directory:

- A wheel file (`.whl`) for binary distribution
- A source distribution (`.tar.gz`)
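If you want to sanity-check what went into the build, a wheel is just a standard zip archive; a minimal inspection sketch (the filename assumes the 0.1.0 version used below):

```python
import zipfile

# List the wheel's contents as a quick sanity check on the build.
with zipfile.ZipFile("dist/mirror_eval-0.1.0-py3-none-any.whl") as whl:
    for name in whl.namelist():
        print(name)
```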
Install the built wheel with:

```bash
pip install dist/mirror_eval-0.1.0-py3-none-any.whl
```

## Usage

The primary entrypoint for MIRROR-Eval is the `evaluate` function:
```python
from mirror_eval import evaluate

# Run the evaluation pipeline
results = evaluate()
print(results)
```
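The README does not document the shape of `results`; assuming it is JSON-serializable (an assumption, not confirmed by the source), persisting a run might look like:

```python
import json

from mirror_eval import evaluate

# Run the pipeline and write the results to disk. This assumes results
# is JSON-serializable (e.g., a dict of metrics); adapt the serialization
# if evaluate() returns a custom object.
results = evaluate()
with open("results.json", "w") as f:
    json.dump(results, f, indent=2)
```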
## Project Structure

```
MIRROR-Eval/
├── src/
│   └── mirror_eval/
│       ├── __init__.py
│       └── evaluate.py
├── tests/
├── pyproject.toml
├── environment.yml
├── requirements-dev.txt
├── README.md
└── LICENSE
```
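Since the usage example imports `evaluate` from the package root, `src/mirror_eval/__init__.py` presumably re-exports it from `evaluate.py`; a plausible sketch (the actual file contents are not shown in this README):

```python
# src/mirror_eval/__init__.py -- hypothetical contents
from .evaluate import evaluate

__all__ = ["evaluate"]
```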
## Testing

Run the test suite with pytest:

```bash
pytest
```

With coverage:

```bash
pytest --cov=mirror_eval --cov-report=html
```
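New tests go under `tests/` and are collected by pytest automatically; a minimal sketch of one (the assertion is an assumption, since `evaluate`'s return type is not documented here):

```python
# tests/test_evaluate.py -- a hypothetical test
from mirror_eval import evaluate


def test_evaluate_returns_results():
    # Assumes evaluate() runs without arguments and returns a non-None
    # value, as the usage example above suggests.
    results = evaluate()
    assert results is not None
```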
## Code Quality

Format code with Black:

```bash
black src/
```

Check code style with flake8:
```bash
flake8 src/
```

Type checking with mypy:
```bash
mypy src/
```

## Configuration

Set the HuggingFace cache on your machine:

```bash
export HF_HOME="/path/to/cache/dir"
```
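If you prefer to configure this from Python rather than the shell (for example, in a launcher script), the same effect can be had with `os.environ`; a minimal sketch, assuming the variable is set before any HuggingFace library is imported:

```python
import os

# Set HF_HOME before importing anything that pulls in HuggingFace
# libraries, since the cache location is typically resolved at import time.
os.environ["HF_HOME"] = "/path/to/cache/dir"

from mirror_eval import evaluate  # import only after the cache is configured

results = evaluate()
```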
## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.