
AutoFair Explainability Frameworks

This repository contains two explainability methods developed for the AutoFair project:

  • FACTS: Fairness-Aware Counterfactuals for Subgroups — a model-agnostic, highly parameterizable framework for auditing subgroup fairness through counterfactual explanations.
  • GLANCE: Global Actions in a Nutshell for Counterfactual Explainability — a versatile and adaptive framework for generating global counterfactual explanations.
  • LiCE: Likely Counterfactual Explanations — a method for finding high-quality, plausible counterfactual explanations.
  • FCX: Feasible Counterfactual Explanations — a framework that generates realistic, low-cost counterfactuals by enforcing both hard feasibility constraints provided by domain experts and soft causal constraints inferred from data.

Documentation

Full API documentation is available at: https://humancompatible-explain.readthedocs.io/en/latest/index.html


Project Structure

The humancompatible/explain/ folder contains the source code for each of the implemented methods.


Setup Instructions

We recommend using Anaconda or Python virtual environments to avoid package conflicts.

1. Clone the repository

git clone https://github.com/humancompatible/explain.git
cd explain

2. Create and activate a virtual environment

Using Conda:

conda create --name explain python=3.10.4
conda activate explain

Using Python venv:

python3 -m venv env
source env/bin/activate
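Once activated, you can confirm that the environment is actually in use before installing anything. The snippet below is a minimal, generic sketch (not part of this repository) that works for both venv and conda environments:

```python
# Sketch: check whether the current interpreter runs inside a virtual environment.
# In a venv, sys.prefix points at the environment while sys.base_prefix points
# at the base interpreter; conda additionally sets CONDA_PREFIX.
import os
import sys

def in_virtualenv() -> bool:
    """Return True if a venv or conda environment appears to be active."""
    venv_active = sys.prefix != getattr(sys, "base_prefix", sys.prefix)
    conda_active = os.environ.get("CONDA_PREFIX") is not None
    return venv_active or conda_active

print(in_virtualenv())
```

If this prints False, re-run the activation command before proceeding to the install step.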

3. Install required dependencies

pip install -e .
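After an editable install, a quick sanity check is to confirm the package resolves on the import path. The helper below is a generic sketch; the top-level package name humancompatible is assumed from the repository layout, so substitute it when checking locally:

```python
# Sketch: verify that a package installed with `pip install -e .` is importable,
# without actually importing it (find_spec only consults the module finders).
import importlib.util

def is_installed(name: str) -> bool:
    """Return True if the named module can be located on the current path."""
    return importlib.util.find_spec(name) is not None

# Demonstrated here with a stdlib module; replace "json" with "humancompatible"
# (the assumed top-level package) after running the install step above.
print(is_installed("json"))
```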

4. (Optional) Jupyter setup for notebooks

python -m ipykernel install --user --name=autofair --display-name "AutoFair Env"
jupyter notebook

Example notebooks

Explore the functionality through the example notebooks in the examples/ directory.

These notebooks offer adjustable parameters and serve as entry points for integrating your own models or datasets.