
FairBench


A comprehensive AI fairness exploration framework.
🧱 Build measures from simpler blocks
📈 Fairness reports and stamps
⚖️ Multivalue, multiattribute analysis
🧪 Backtrack, filter, and reorganize computations
🖥️ ML compatible: numpy, pandas, torch, tensorflow, jax

FairBench strives to be compatible with the latest Python release, but compatibility delays in third-party ML libraries usually mean that only the language's previous release (currently 3.12) is tested and stable.

Quick measure

import fairbench as fb

x, y, yhat = fb.bench.tabular.compas(test_size=0.5, predict="probabilities")
sensitive = fb.Dimensions(fb.categories @ x["race"])

# one of more than 300 standardized measures, generated by name and packed into a report
abroca = fb.quick.pairwise_maxbarea_auc(scores=yhat, labels=y, sensitive=sensitive)
print(abroca.float())
abroca.roc.show()

docs/simplest.png

Full report

import fairbench as fb

x, y, yhat = fb.bench.tabular.compas(test_size=0.5)

sensitive = fb.Dimensions(fb.categories @ x["sex"], fb.categories @ x["race"])
sensitive = sensitive.intersectional().strict()
report = fb.reports.pairwise(predictions=yhat, labels=y, sensitive=sensitive)
report.filter(fb.investigate.Stamps).show(env=fb.export.Html(horizontal=True), depth=1)

docs/stamps.png

Attributions

@article{krasanakis2024standardizing,
  title={Towards Standardizing AI Bias Exploration},
  author={Emmanouil Krasanakis and Symeon Papadopoulos},
  year={2024},
  eprint={2405.19022},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Project: MAMMOth
Maintainer: Emmanouil (Manios) Krasanakis ([email protected])
License: Apache 2.0
Contributors: Giannis Sarridis

This project includes modified code originally licensed under the MIT License: