
Collaboration proposal: Wrapping around the FairBench library's metric definitions #535

maniospas opened this issue Jun 12, 2024
Hello,
I'm from the MAMMOth project, which works on multi-attribute multi-modal bias mitigation. As part of that project we are developing FairBench: a library of building blocks that can be combined to construct and run a wide range of fairness metrics (while also tracking the underlying computations), with particular emphasis on multiple multi-value sensitive attributes and intersectionality. We published a paper describing an organized process for building bias/fairness metrics here: https://arxiv.org/pdf/2405.19022

Last year, we had a call with @hoffmansc where we briefly mentioned the possibility of integrating parts of FairBench into AIF360 to supplement the latter's bias/fairness assessment capabilities. Now that our work has reached an acceptable level of maturity, I am opening this issue as a follow-up.

As far as I can tell, AIF360 already uses ratio and difference comparisons as building blocks for many of its computations, but I believe it would benefit from our principled fairness exploration, which considers more building blocks and, importantly, is extensible to future building blocks instead of requiring their usage to be hard-coded.

Rough proposal

I am proposing to create customizable AIF360 metrics (perhaps starting with one that extends BinaryLabelDatasetMetric, though FairBench also covers ranking and regression, with more tasks planned for the future). The new metrics would depend on the FairBench library. To prevent installation bloat, FairBench would not need to be a main dependency of AIF360; it could be installed only by users who actually want to call these new metrics, for example via a lazy import as sketched below.
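A minimal sketch of the optional-dependency idea, assuming FairBench is published on PyPI under the name fairbench; the helper name and error message are my own illustration, not existing AIF360 code:

```python
# Hypothetical optional-dependency guard (not part of AIF360's current code).
def _require_fairbench():
    """Import FairBench only when a FairBench-backed metric is actually used."""
    try:
        import fairbench
    except ImportError as exc:
        raise ImportError(
            "This metric requires the optional 'fairbench' package. "
            "Install it with: pip install fairbench"
        ) from exc
    return fairbench
```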

I am thinking that the new metrics could be similar to AIF360's ClassificationMetric, with two differences (a sketch follows the list):
a) __getattr__ would be overloaded so that, when users call a method whose name follows a standardized naming convention, an appropriate fairness measure is generated; there are hundreds of valid combinations. For example, calling metric.intersectional_accuracy_min_ratio(...) would perform the corresponding bias assessment.
b) Outcome values would be fairbench.Explainable objects. These can be used as floats in computations as normal, but also allow backtracking previous computations through an .explain field.
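Here is a minimal sketch of the __getattr__ dispatch under stated assumptions: the class name, the _BLOCKS vocabulary, and the _compose placeholder are illustrative only and are not part of AIF360 or FairBench.

```python
class FairBenchMetric:
    """Resolves method names such as `intersectional_accuracy_min_ratio`
    into fairness assessments composed of named building blocks."""

    # Assumed vocabulary; in the proposal this would come from FairBench.
    _BLOCKS = {"intersectional", "accuracy", "tpr", "min", "max",
               "ratio", "difference"}

    def __init__(self, y_true, y_pred, sensitive):
        self.y_true = y_true        # ground-truth labels
        self.y_pred = y_pred        # predicted labels
        self.sensitive = sensitive  # {attribute name: group memberships}

    def __getattr__(self, name):
        blocks = name.split("_")
        if not all(block in self._BLOCKS for block in blocks):
            # Unknown names keep the usual AttributeError behaviour.
            raise AttributeError(name)

        def assessment(**kwargs):
            # In the proposal this would delegate to FairBench, which composes
            # the blocks into a measure and returns an Explainable value
            # (usable as a float, with an .explain field) rather than a
            # plain number.
            return self._compose(blocks, **kwargs)

        return assessment

    def _compose(self, blocks, **kwargs):
        raise NotImplementedError("delegation to FairBench goes here")


# Usage (illustrative): the attribute below is resolved dynamically by __getattr__.
# metric = FairBenchMetric(y_true, y_pred, {"gender": gender, "age": age})
# metric.intersectional_accuracy_min_ratio()
```

The attraction of this design is that new building blocks added to FairBench become available through the naming convention without any hard-coded metric definitions in AIF360.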

Links

Docs: https://fairbench.readthedocs.io/
GitHub: https://github.com/mever-team/FairBench

I hope this is of interest to your team. I am at your disposal for further discussion.
