Projects that follow the best practices below can voluntarily self-certify and show that they have achieved an Open Source Security Foundation (OpenSSF) Best Practices badge.
<a href="https://www.bestpractices.dev/projects/7448"><img src="https://www.bestpractices.dev/projects/7448/badge"></a>
Intersectional Fairness (ISF) is a bias detection and mitigation technology for intersectional bias, which is caused by combinations of multiple protected attributes. ISF leverages existing single-attribute bias mitigation methods to make a machine-learning model fair with respect to intersectional bias. Pre-, in-, and post-processing approaches are applicable to ISF. For now, ISF supports Adversarial Debiasing, Equalized Odds, Massaging, and Reject Option Classification.
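To illustrate why intersectional bias needs its own treatment, the sketch below (hypothetical numbers, not ISF's API) shows a model whose positive-prediction rates look equal for each protected attribute taken alone, yet differ sharply across the combined subgroups:

```python
from itertools import product

sexes = ["female", "male"]
races = ["A", "B"]

# Hypothetical per-subgroup positive prediction rates.
positive_rate = {
    ("female", "A"): 0.2,  # disadvantaged intersectional subgroup
    ("female", "B"): 0.6,
    ("male", "A"): 0.6,
    ("male", "B"): 0.2,   # disadvantaged intersectional subgroup
}

# Marginal rate for each single attribute value is 0.4 -- the model
# looks fair when sex and race are examined separately.
for sex in sexes:
    marginal = sum(positive_rate[(sex, r)] for r in races) / len(races)
    print(sex, marginal)

# But the intersectional subgroups (sex x race) range from 0.2 to 0.6.
for group in product(sexes, races):
    print(group, positive_rate[group])
```

Single-attribute mitigation applied to sex or race alone would leave this disparity untouched, which is the gap ISF targets.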
The repository is hosted on GitHub: https://github.com/intersectional-fairness/isf
We use unittest from the Python standard library.
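A minimal sketch of a test in that style (the test case and values are hypothetical, not taken from ISF's test suite):

```python
import unittest

class TestRates(unittest.TestCase):
    # Hypothetical check: prediction rates must be valid probabilities.
    def test_rates_are_probabilities(self):
        rates = [0.2, 0.6]  # placeholder values
        for r in rates:
            self.assertGreaterEqual(r, 0.0)
            self.assertLessEqual(r, 1.0)

# Run the suite programmatically so the result can be inspected;
# a test file would normally end with `unittest.main()` instead.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestRates)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```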
Warning: A URL is required, but no URL was found.
Found all required security hardening headers.
We can use the assert statement defined in the Python language specification.
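For instance, plain assert statements can guard internal invariants (the function below is a hypothetical example, not part of ISF):

```python
def positive_rate(predictions):
    # predictions: list of 0/1 labels; the rate must lie in [0, 1].
    assert predictions, "predictions must be non-empty"
    rate = sum(predictions) / len(predictions)
    assert 0.0 <= rate <= 1.0
    return rate

print(positive_rate([1, 0, 1, 1]))  # prints 0.75
```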