Projects that follow these best practices can voluntarily self-certify and show that they have achieved the Open Source Security Foundation (OpenSSF) Best Practices badge.
<a href="https://www.bestpractices.dev/projects/5090"><img src="https://www.bestpractices.dev/projects/5090/badge"></a>
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
We think our bus factor is at least 5 because we have 5 maintainers with equivalent top-level access rights to the repository and knowledge of the project. https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/MAINTAINERS.md
Unassociated significant contributing organisations are listed in the AUTHORS file: https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/AUTHORS
The MIT License statement is included in all source files and names the copyright holder: "# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020".
The MIT License statement is included in all source files.
The repository is hosted on GitHub, which uses Git; Git is a distributed version-control system.
Issues like small tasks well suited for new or casual contributors are assigned the label "good first issue": https://github.com/Trusted-AI/adversarial-robustness-toolbox/labels
2FA is required
We are using GitHub's 2FA based on TOTP.
https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Code-Reviews
All proposed modifications are independently reviewed to determine whether they are worthwhile and free of known issues.
not applicable
The test suite can be invoked with the top-level run_tests.sh script, or a selection of test modules can be run with pytest: https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/run_tests.sh
run_tests.sh
pytest
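As a minimal, self-contained sketch of the pytest workflow described above (the throwaway test module created here is hypothetical; in the real repository you would point pytest at an existing file under tests/ instead):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Create a throwaway test module so the example is self-contained.
with tempfile.TemporaryDirectory() as tmp:
    test_file = Path(tmp) / "test_sample.py"
    test_file.write_text("def test_addition():\n    assert 1 + 1 == 2\n")

    # Invoke pytest on just that module, mirroring the "selection of
    # test modules" invocation mentioned above. Assumes pytest is
    # installed, as in the project's development setup.
    result = subprocess.run(
        [sys.executable, "-m", "pytest", str(test_file), "-q"],
        capture_output=True,
        text=True,
    )
    print("exit code:", result.returncode)  # 0 means the selected tests passed
```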
The project uses continuous integration and requires passing checks for all pull requests and changes: https://github.com/Trusted-AI/adversarial-robustness-toolbox/actions
https://app.codecov.io/gh/Trusted-AI/adversarial-robustness-toolbox/
Codecov does not currently report branch coverage for this project's languages.
All required security hardening headers were found. https://github.com/Trusted-AI/adversarial-robustness-toolbox
The project is continuously reviewing security.
These types of hardening mechanisms do not apply to this project because it does not access the web or compile code.
https://app.codecov.io/gh/Trusted-AI/adversarial-robustness-toolbox, https://en.wikipedia.org/wiki/Dynamic_program_analysis
Yes, run-time assertions are encouraged, applied and tested.
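As an illustrative sketch only (the helper function and its parameters are hypothetical, not ART's actual code), a run-time assertion guarding an input-validation step might look like:

```python
import numpy as np

def clip_adversarial_sample(x: np.ndarray, clip_min: float, clip_max: float) -> np.ndarray:
    """Clip a sample into a valid feature range; hypothetical helper for illustration."""
    # Run-time assertions catch invalid arguments early, as encouraged above.
    assert clip_min < clip_max, "clip_min must be less than clip_max"
    assert x.ndim >= 1, "expected at least a 1-D array"
    return np.clip(x, clip_min, clip_max)

sample = np.array([-0.5, 0.3, 1.7])
print(clip_adversarial_sample(sample, 0.0, 1.0))  # values outside [0, 1] are clipped
```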